query_id | query | positive_passages | negative_passages | subset
---|---|---|---|---|
8984257b3fea005a6bee6049c2375f5f | A Critical Review of Online Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries | [
{
"docid": "1f700c0c55b050db7c760f0c10eab947",
"text": "Cathy O’Neil’s Weapons of Math Destruction is a timely reminder of the power and perils of predictive algorithms and model-driven decision processes. The book deals in some depth with eight case studies of the abuses she associates with WMDs: “weapons of math destruction.” The cases include the havoc wrought by value-added models used to evaluate teacher performance and by the college ranking system introduced by U.S. News and World Report; the collateral damage of online advertising and models devised to track and monetize “eyeballs”; the abuses associated with the recidivism models used in judicial decisions; the inequities perpetrated by the use of personality tests in hiring decisions; the burdens placed on low-wage workers by algorithm-driven attempts to maximize labor efficiency; the injustices written into models that evaluate creditworthiness; the inequities produced by insurance companies’ risk models; and the potential assault on the democratic process by the use of big data in political campaigns. As this summary suggests, O’Neil had plenty of examples to choose from when she wrote the book, but since the publication of Weapons of Math Destruction, two more problems associated with model-driven decision procedures have surfaced, making O’Neil’s work even more essential reading. The first—the role played by fake news, much of it circulated on Facebook, in the 2016 election—has led to congressional investigations. The second—the failure of algorithm-governed oversight to recognize and delete gruesome posts on the Facebook Live streaming service—has caused CEO Mark Zuckerberg to announce the addition of 3,000 human screeners to the Facebook staff. While O’Neil’s book may seem too polemical to some readers and too cautious to others, it speaks forcefully to the cultural moment we share. O’Neil weaves the story of her own credentials and work experience into her analysis, because, as she explains, her training as a mathematician and her experience in finance shaped the way she now understands the world. O’Neil earned a PhD in mathematics from Harvard; taught at Barnard College, where her research area was algebraic number theory; and worked for the hedge fund D. E. Shaw, which uses mathematical analysis to guide investment decisions. When the financial crisis of 2008 revealed that even the most sophisticated models were incapable of anticipating risks associated with “black swans”—events whose rarity make them nearly impossible to predict—O’Neil left the world of corporate finance to join the RiskMetrics Group, where she helped market risk models to financial institutions eager to rehabilitate their image. Ultimately, she became disillusioned with the financial industry’s refusal to take seriously the limitations of risk management models and left RiskMetrics. She rebranded herself a “data scientist” and took a job at Intent Media, where she helped design algorithms that would make big data useful for all kinds of applications. All the while, as O’Neil describes it, she “worried about the separation between technical models and real people, and about the moral repercussions of that separation” (page 48). O’Neil eventually left Intent Media to devote her energies to inWeapons of Math Destruction",
"title": ""
}
] | [
{
"docid": "08e8629cf29da3532007c5cf5c57d8bb",
"text": "Social networks are growing in number and size, with hundreds of millions of user accounts among them. One added benefit of these networks is that they allow users to encode more information about their relationships than just stating who they know. In this work, we are particularly interested in trust relationships, and how they can be used in designing interfaces. In this paper, we present FilmTrust, a website that uses trust in web-based social networks to create predictive movie recommendations. Using the FilmTrust system as a foundation, we show that these recommendations are more accurate than other techniques when the user’s opinions about a film are divergent from the average. We discuss this technique both as an application of social network analysis, as well as how it suggests other analyses that can be performed to help improve collaborative filtering algorithms of all types.",
"title": ""
},
{
"docid": "7a8faa4e8ecef8e28aa2203f0aa9d888",
"text": "In today’s global marketplace, individual firms do not compete as independent entities rather as an integral part of a supply chain. This paper proposes a fuzzy mathematical programming model for supply chain planning which considers supply, demand and process uncertainties. The model has been formulated as a fuzzy mixed-integer linear programming model where data are ill-known andmodelled by triangular fuzzy numbers. The fuzzy model provides the decision maker with alternative decision plans for different degrees of satisfaction. This proposal is tested by using data from a real automobile supply chain. © 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ff029b2b9799ab1de433a3264d28d711",
"text": "This paper introduces and summarises the findings of a new shared task at the intersection of Natural Language Processing and Computer Vision: the generation of image descriptions in a target language, given an image and/or one or more descriptions in a different (source) language. This challenge was organised along with the Conference on Machine Translation (WMT16), and called for system submissions for two task variants: (i) a translation task, in which a source language image description needs to be translated to a target language, (optionally) with additional cues from the corresponding image, and (ii) a description generation task, in which a target language description needs to be generated for an image, (optionally) with additional cues from source language descriptions of the same image. In this first edition of the shared task, 16 systems were submitted for the translation task and seven for the image description task, from a total of 10 teams.",
"title": ""
},
{
"docid": "011d0fa5eac3128d5127a66741689df7",
"text": "Tweets often contain a large proportion of abbreviations, alternative spellings, novel words and other non-canonical language. These features are problematic for standard language analysis tools and it can be desirable to convert them to canonical form. We propose a novel text normalization model based on learning edit operations from labeled data while incorporating features induced from unlabeled data via character-level neural text embeddings. The text embeddings are generated using an Simple Recurrent Network. We find that enriching the feature set with text embeddings substantially lowers word error rates on an English tweet normalization dataset. Our model improves on stateof-the-art with little training data and without any lexical resources.",
"title": ""
},
{
"docid": "68fb48f456383db1865c635e64333d8a",
"text": "Documenting underwater archaeological sites is an extremely challenging problem. Sites covering large areas are particularly daunting for traditional techniques. In this paper, we present a novel approach to this problem using both an autonomous underwater vehicle (AUV) and a diver-controlled stereo imaging platform to document the submerged Bronze Age city at Pavlopetri, Greece. The result is a three-dimensional (3D) reconstruction covering 26,600 m2 at a resolution of 2 mm/pixel, the largest-scale underwater optical 3D map, at such a resolution, in the world to date. We discuss the advances necessary to achieve this result, including i) an approach to color correct large numbers of images at varying altitudes and over varying bottom types; ii) a large-scale bundle adjustment framework that is capable of handling upward of 400,000 stereo images; and iii) a novel approach to the registration and rapid documentation of an underwater excavations area that can quickly produce maps of site change. We present visual and quantitative comparisons to the authors’ previous underwater mapping approaches. C © 2016 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "b121ba0b5d24e0d53f85d04415b8c41d",
"text": "Until now, most systems for Internet of Things (IoT) management, have been designed in a Cloud-centric manner, getting benefits from the unified platform that the Cloud offers. However, a Cloud-centric infrastructure mainly achieves static sensor and data streaming systems, which do not support the direct configuration management of IoT components. To address this issue, a virtualization of IoT components (Virtual Resources) is introduced at the edge of the IoT network. This research also introduces permission-based Blockchain protocols to handle the provisioning of Virtual Resources directly onto edge devices. The architecture presented by this research focuses on the use of Virtual Resources and Blockchain protocols as management tools to distribute configuration tasks towards the edge of the IoT network. Results from lab experiments demonstrate the successful deployment and communication performance (response time in milliseconds) of Virtual Resources on two edge platforms, Raspberry Pi and Edison board. This work also provides performance evaluations of two permission-based blockchain protocol approaches. The first blockchain approach is a Blockchain as a Service (BaaS) in the Cloud, Bluemix. The second blockchain approach is a private cluster hosted in a Fog network, Multichain.",
"title": ""
},
{
"docid": "3149dd6f03208af01333dbe2c045c0c6",
"text": "Debates about human nature often revolve around what is built in. However, the hallmark of human nature is how much of a person's identity is not built in; rather, it is humans' great capacity to adapt, change, and grow. This nature versus nurture debate matters-not only to students of human nature-but to everyone. It matters whether people believe that their core qualities are fixed by nature (an entity theory, or fixed mindset) or whether they believe that their qualities can be developed (an incremental theory, or growth mindset). In this article, I show that an emphasis on growth not only increases intellectual achievement but can also advance conflict resolution between long-standing adversaries, decrease even chronic aggression, foster cross-race relations, and enhance willpower. I close by returning to human nature and considering how it is best conceptualized and studied.",
"title": ""
},
{
"docid": "19ebb5c0cdf90bf5aef36ad4b9f621a1",
"text": "There has been a dramatic increase in the number and complexity of new ventilation modes over the last 30 years. The impetus for this has been the desire to improve the safety, efficiency, and synchrony of ventilator-patient interaction. Unfortunately, the proliferation of names for ventilation modes has made understanding mode capabilities problematic. New modes are generally based on increasingly sophisticated closed-loop control systems or targeting schemes. We describe the 6 basic targeting schemes used in commercially available ventilators today: set-point, dual, servo, adaptive, optimal, and intelligent. These control systems are designed to serve the 3 primary goals of mechanical ventilation: safety, comfort, and liberation. The basic operations of these schemes may be understood by clinicians without any engineering background, and they provide the basis for understanding the wide variety of ventilation modes and their relative advantages for improving patient-ventilator synchrony. Conversely, their descriptions may provide engineers with a means to better communicate to end users.",
"title": ""
},
{
"docid": "5eeb17964742e1bf1e517afcb1963b02",
"text": "Global navigation satellite system reflectometry is a multistatic radar using navigation signals as signals of opportunity. It provides wide-swath and improved spatiotemporal sampling over current space-borne missions. The lack of experimental datasets from space covering signals from multiple constellations (GPS, GLONASS, Galileo, and Beidou) at dual-band (L1 and L2) and dual-polarization (right- and left-hand circular polarization), over the ocean, land, and cryosphere remains a bottleneck to further develop these techniques. 3Cat-2 is a 6-unit (3 × 2 elementary blocks of 10 × 10 × 10 cm3) CubeSat mission designed and implemented at the Universitat Politècnica de Catalunya-BarcelonaTech to explore fundamental issues toward an improvement in the understanding of the bistatic scattering properties of different targets. Since geolocalization of the specific reflection points is determined by the geometry only, a moderate pointing accuracy is only required to correct the antenna pattern in scatterometry measurements. This paper describes the mission analysis and the current status of the assembly, integration, and verification activities of both the engineering model and the flight model performed at Universitat Politècnica de Catalunya NanoSatLab premises. 3Cat-2 launch is foreseen for the second quarter of 2016 into a Sun-Synchronous orbit of 510-km height.",
"title": ""
},
{
"docid": "eb271acef996a9ba0f84a50b5055953b",
"text": "Makeup is widely used to improve facial attractiveness and is well accepted by the public. However, different makeup styles will result in significant facial appearance changes. It remains a challenging problem to match makeup and non-makeup face images. This paper proposes a learning from generation approach for makeup-invariant face verification by introducing a bi-level adversarial network (BLAN). To alleviate the negative effects from makeup, we first generate non-makeup images from makeup ones, and then use the synthesized nonmakeup images for further verification. Two adversarial networks in BLAN are integrated in an end-to-end deep network, with the one on pixel level for reconstructing appealing facial images and the other on feature level for preserving identity information. These two networks jointly reduce the sensing gap between makeup and non-makeup images. Moreover, we make the generator well constrained by incorporating multiple perceptual losses. Experimental results on three benchmark makeup face datasets demonstrate that our method achieves state-of-the-art verification accuracy across makeup status and can produce photo-realistic non-makeup",
"title": ""
},
{
"docid": "1785135fa0a35fd59a6181ec5886ddc1",
"text": "We aimed to describe the surgical technique and clinical outcomes of paraspinal-approach reduction and fixation (PARF) in a group of patients with Denis type B thoracolumbar burst fracture (TLBF) with neurological deficiencies. A total of 62 patients with Denis B TLBF with neurological deficiencies were included in this study between January 2009 and December 2011. Clinical evaluations including the Frankel scale, pain visual analog scale (VAS) and radiological assessment (CT scans for fragment reduction and X-ray for the Cobb angle, adjacent superior and inferior intervertebral disc height, and vertebral canal diameter) were performed preoperatively and at 3 days, 6 months, and 1 and 2 years postoperatively. All patients underwent successful PARF, and were followed-up for at least 2 years. Average surgical time, blood loss and incision length were recorded. The sagittal vertebral canal diameter was significantly enlarged. The canal stenosis index was also improved. Kyphosis was corrected and remained at 8.6±1.4o (P>0.05) 1 year postoperatively. Adjacent disc heights remained constant. Average Frankel grades were significantly improved at the end of follow-up. All 62 patients were neurologically assessed. Pain scores decreased at 6 months postoperatively, compared to before surgery (P<0.05). PARF provided excellent reduction for traumatic segmental kyphosis, and resulted in significant spinal canal clearance, which restored and maintained the vertebral body height of patients with Denis B TLBF with neurological deficits.",
"title": ""
},
{
"docid": "2dc2e201bee0f963355d10572ad71955",
"text": "This paper presents Dynamoth, a dynamic, scalable, channel-based pub/sub middleware targeted at large scale, distributed and latency constrained systems. Our approach provides a software layer that balances the load generated by a high number of publishers, subscribers and messages across multiple, standard pub/sub servers that can be deployed in the Cloud. In order to optimize Cloud infrastructure usage, pub/sub servers can be added or removed as needed. Balancing takes into account the live characteristics of each channel and is done in an hierarchical manner across channels (macro) as well as within individual channels (micro) to maintain acceptable performance and low latencies despite highly varying conditions. Load monitoring is performed in an unintrusive way, and rebalancing employs a lazy approach in order to minimize its temporal impact on performance while ensuring successful and timely delivery of all messages. Extensive real-world experiments that illustrate the practicality of the approach within a massively multiplayer game setting are presented. Results indicate that with a given number of servers, Dynamoth was able to handle 60% more simultaneous clients than the consistent hashing approach, and that it was properly able to deal with highly varying conditions in the context of large workloads.",
"title": ""
},
{
"docid": "23ed8f887128cb1cd6ea2f386c099a43",
"text": "The capability to overcome terrain irregularities or obstacles, named terrainability, is mostly dependant on the suspension mechanism of the rover and its control. For a given wheeled robot, the terrainability can be improved by using a sophisticated control, and is somewhat related to minimizing wheel slip. The proposed control method, named torque control, improves the rover terrainability by taking into account the whole mechanical structure. The rover model is based on the Newton-Euler equations and knowing the complete state of the mechanical structures allows us to compute the force distribution in the structure, and especially between the wheels and the ground. Thus, a set of torques maximizing the traction can be used to drive the rover. The torque control algorithm is presented in this paper, as well as tests showing its impact and improvement in terms of terrainability. Using the CRAB rover platform, we show that the torque control not only increases the climbing performance but also limits odometric errors and reduces the overall power consumption.",
"title": ""
},
{
"docid": "134578862a01dc4729999e9076362ee0",
"text": "PURPOSE\nBasal-like breast cancer is associated with high grade, poor prognosis, and younger patient age. Clinically, a triple-negative phenotype definition [estrogen receptor, progesterone receptor, and human epidermal growth factor receptor (HER)-2, all negative] is commonly used to identify such cases. EGFR and cytokeratin 5/6 are readily available positive markers of basal-like breast cancer applicable to standard pathology specimens. This study directly compares the prognostic significance between three- and five-biomarker surrogate panels to define intrinsic breast cancer subtypes, using a large clinically annotated series of breast tumors.\n\n\nEXPERIMENTAL DESIGN\nFour thousand forty-six invasive breast cancers were assembled into tissue microarrays. All had staging, pathology, treatment, and outcome information; median follow-up was 12.5 years. Cox regression analyses and likelihood ratio tests compared the prognostic significance for breast cancer death-specific survival (BCSS) of the two immunohistochemical panels.\n\n\nRESULTS\nAmong 3,744 interpretable cases, 17% were basal using the triple-negative definition (10-year BCSS, 6 7%) and 9% were basal using the five-marker method (10-year BCSS, 62%). Likelihood ratio tests of multivariable Cox models including standard clinical variables show that the five-marker panel is significantly more prognostic than the three-marker panel. The poor prognosis of triple-negative phenotype is conferred almost entirely by those tumors positive for basal markers. Among triple-negative patients treated with adjuvant anthracycline-based chemotherapy, the additional positive basal markers identified a cohort of patients with significantly worse outcome.\n\n\nCONCLUSIONS\nThe expanded surrogate immunopanel of estrogen receptor, progesterone receptor, human HER-2, EGFR, and cytokeratin 5/6 provides a more specific definition of basal-like breast cancer that better predicts breast cancer survival.",
"title": ""
},
{
"docid": "4b69831f2736ae08049be81e05dd4046",
"text": "One of the most important aspects in playing the piano is using the appropriate fingers to facilitate movement and transitions. The fingering arrangement depends to a ce rtain extent on the size of the musician’s hand. We hav e developed an automatic fingering system that, given a sequence of pitches, suggests which fingers should be used. The output can be personalized to agree with t he limitations of the user’s hand. We also consider this system to be the base of a more complex future system: a score reduction system that will reduce orchestra scor e to piano scores. This paper describes: • “Vertical cost” model: the stretch induced by a given hand position. • “Horizontal cost” model: transition between two hand positions. • A system that computes low-cost fingering for a given piece of music. • A machine learning technique used to learn the appropriate parameters in the models.",
"title": ""
},
{
"docid": "65385cdaac98022605efd2fd82bb211b",
"text": "As electric vehicles (EVs) take a greater share in the personal automobile market, their penetration may bring higher peak demand at the distribution level. This may cause potential transformer overloads, feeder congestions, and undue circuit faults. This paper focuses on the impact of charging EVs on a residential distribution circuit. Different EV penetration levels, EV types, and charging profiles are considered. In order to minimize the impact of charging EVs on a distribution circuit, a demand response strategy is proposed in the context of a smart distribution network. In the proposed DR strategy, consumers will have their own choices to determine which load to control and when. Consumer comfort indices are introduced to measure the impact of demand response on consumers' lifestyle. The proposed indices can provide electric utilities a better estimation of the customer acceptance of a DR program, and the capability of a distribution circuit to accommodate EV penetration.",
"title": ""
},
{
"docid": "952d97cc8302a6a1ab584ae32bfb64ee",
"text": "1 Background and Objective of the Survey Compared with conventional centralized systems, blockchain technologies used for transactions of value records, such as bitcoins, structurally have the characteristics that (i) enable the creation of a system that substantially ensures no downtime (ii) make falsification extremely hard, and (iii) realize inexpensive system. Blockchain technologies are expected to be utilized in diverse fields including IoT. Japanese companies just started technology verification independently, and there is a risk that the initiative might be taken by foreign companies in blockchain technologies, which are highly likely to serve as the next-generation platform for all industrial fields in the future. From such point of view, this survey was conducted for the purpose of comparing and analyzing details of numbers of blockchains and advantages/challenges therein; ascertaining promising fields in which the technology should be utilized; ascertaining the impact of the technology on society and the economy; and developing policy guidelines for encouraging industries to utilize the technology in the future. This report compiles the results of interviews with domestic and overseas companies involving blockchain technology and experts. The content of this report is mostly based on data as of the end of February 2016. As specifications of blockchains and the status of services being provided change by the minute, it is recommended to check the latest conditions when intending to utilize any related technologies in business, etc. Terms and abbreviations used in this report are defined as follows. Terms Explanations BTC Abbreviation used as a currency unit of bitcoins FinTech A coined term combining Finance and Technology; Technologies and initiatives to create new services and businesses by utilizing ICT in the financial business Virtual currency / Cryptocurrency Bitcoins or other information whose value is recognized only on the Internet Exchange Services to exchange virtual currency, such as bitcoins, with another virtual currency or with legal currency, such as Japanese yen or US dollars; Some exchange offers services for contracts for difference, such as foreign exchange margin transactions (FX transactions) Consensus A series of procedures from approving a transaction as an official one and mutually confirming said results by using the following consensus algorithm Consensus algorithm Algorithm in general for mutually approving a distributed ledger using Proof of Work and Proof of Stake, etc. Token Virtual currency unique to blockchains; Virtual currency used for paying fees for asset management, etc. on blockchains is referred to …",
"title": ""
},
{
"docid": "69e86a1f6f4d7f1039a3448e06df3725",
"text": "In this paper, a low profile LLC resonant converter with two planar transformers is proposed for a slim SMPS (Switching Mode Power Supply). Design procedures and voltage gain characteristics on the proposed planar transformer and converter are described in detail. Two planar transformers applied to LLC resonant converter are connected in series at primary and in parallel by the center-tap winding at secondary. Based on the theoretical analysis and simulation results of the voltage gain characteristics, a 300W LLC resonant converter for LED TV power module is designed and tested.",
"title": ""
},
{
"docid": "f9d4b66f395ec6660da8cb22b96c436c",
"text": "The purpose of the study was to measure objectively the home use of the reciprocating gait orthosis (RGO) and the electrically augmented (hybrid) RGO. It was hypothesised that RGO use would increase following provision of functional electrical stimulation (FES). Five adult subjects participated in the study with spinal cord lesions ranging from C2 (incomplete) to T6. Selection criteria included active RGO use and suitability for electrical stimulation. Home RGO use was measured for up to 18 months by determining the mean number of steps taken per week. During this time patients were supplied with the hybrid system. Three alternatives for the measurement of steps taken were investigated: a commercial digital pedometer, a magnetically actuated counter and a heel contact switch linked to an electronic counter. The latter was found to be the most reliable system and was used for all measurements. Additional information on RGO use was acquired using three patient diaries administered throughout the study and before and after the provision of the hybrid system. Testing of the original hypothesis was complicated by problems in finding a reliable measurement tool and difficulties with data collection. However, the results showed that overall use of the RGO, whether with or without stimulation, is low. Statistical analysis of the step counter results was not realistic. No statistically significant change in RGO use was found between the patient diaries. The study suggests that the addition of electrical stimulation does not increase RGO use. The study highlights the problem of objectively measuring orthotic use in the home.",
"title": ""
}
] | scidocsrr |
e0301c813aa0aeaac7d4039bc9b5e5ae | The roles of brand community and community engagement in building brand trust on social media | [
{
"docid": "64e0a1345e5a181191c54f6f9524c96d",
"text": "Social media based brand communities are communities initiated on the platform of social media. In this article, we explore whether brand communities based on social media (a special type of online brand communities) have positive effects on the main community elements and value creation practices in the communities as well as on brand trust and brand loyalty. A survey based empirical study with 441 respondents was conducted. The results of structural equation modeling show that brand communities established on social media have positive effects on community markers (i.e., shared consciousness, shared rituals and traditions, and obligations to society), which have positive effects on value creation practices (i.e., social networking, community engagement, impressions management, and brand use). Such communities could enhance brand loyalty through brand use and impression management practices. We show that brand trust has a full mediating role in converting value creation practices into brand loyalty. Implications for practice and future research opportunities are discussed.",
"title": ""
}
] | [
{
"docid": "89652309022bc00c7fd76c4fe1c5d644",
"text": "In first encounters people quickly form impressions of each other’s personality and interpersonal attitude. We conducted a study to investigate how this transfers to first encounters between humans and virtual agents. In the study, subjects’ avatars approached greeting agents in a virtual museum rendered in both first and third person perspective. Each agent exclusively exhibited nonverbal immediacy cues (smile, gaze and proximity) during the approach. Afterwards subjects judged its personality (extraversion) and interpersonal attitude (hostility/friendliness). We found that within only 12.5 seconds of interaction subjects formed impressions of the agents based on observed behavior. In particular, proximity had impact on judgments of extraversion whereas smile and gaze on friendliness. These results held for the different camera perspectives. Insights on how the interpretations might change according to the user’s own personality are also provided.",
"title": ""
},
{
"docid": "c1906bcb735d0c77057441f13ea282fc",
"text": "It has long been known that storage of information in working memory suffers as a function of proactive interference. Here we review the results of experiments using approaches from cognitive neuroscience to reveal a pattern of brain activity that is a signature of proactive interference. Many of these results derive from a single paradigm that requires one to resolve interference from a previous experimental trial. The importance of activation in left inferior frontal cortex is shown repeatedly using this task and other tasks. We review a number of models that might account for the behavioral and imaging findings about proactive interference, raising questions about the adequacy of these models.",
"title": ""
},
{
"docid": "c4ecf2d867a84a94ad34a1d4943071df",
"text": "This paper introduces our submission to the 2nd Facial Landmark Localisation Competition. We present a deep architecture to directly detect facial landmarks without using face detection as an initialization. The architecture consists of two stages, a Basic Landmark Prediction Stage and a Whole Landmark Regression Stage. At the former stage, given an input image, the basic landmarks of all faces are detected by a sub-network of landmark heatmap and affinity field prediction. At the latter stage, the coarse canonical face and the pose can be generated by a Pose Splitting Layer based on the visible basic landmarks. According to its pose, each canonical state is distributed to the corresponding branch of the shape regression sub-networks for the whole landmark detection. Experimental results show that our method obtains promising results on the 300-W dataset, and achieves superior performances over the baselines of the semi-frontal and the profile categories in this competition.",
"title": ""
},
{
"docid": "c6d2371a165acc46029eb4ad42df3270",
"text": "Video game playing is a popular activity and its enjoyment among frequent players has been associated with absorption and immersion experiences. This paper examines how immersion in the video game environment can influence the player during the game and afterwards (including fantasies, thoughts, and actions). This is what is described as Game Transfer Phenomena (GTP). GTP occurs when video game elements are associated with real life elements triggering subsequent thoughts, sensations and/or player actions. To investigate this further, a total of 42 frequent video game players aged between 15 and 21 years old were interviewed. Thematic analysis showed that many players experienced GTP, where players appeared to integrate elements of video game playing into their real lives. These GTP were then classified as either intentional or automatic experiences. Results also showed that players used video games for interacting with others as a form of amusement, modeling or mimicking video game content, and daydreaming about video games. Furthermore, the findings demonstrate how video games triggered intrusive thoughts, sensations, impulses, reflexes, visual illusions, and dissociations. DOI: 10.4018/ijcbpl.2011070102 16 International Journal of Cyber Behavior, Psychology and Learning, 1(3), 15-33, July-September 2011 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. 24/7 activity (e.g., Ng & Weimer-Hastings, 2005; Chappell, Eatough, Davies, & Griffiths, 2006; Grüsser, Thalemann, & Griffiths, 2007). Today’s video games have evolved due to technological advance, resulting in high levels of realism and emotional design that include diversity, experimentation, and (perhaps in some cases) sensory overload. Furthermore, video games have been considered as fantasy triggers because they offer ‘what if’ scenarios (Baranowski, Buday, Thompson, & Baranowski, 2008). What if the player could become someone else? What if the player could inhabit an improbable world? What if the player could interact with fantasy characters or situations (Woolley, 1995)? Entertainment media content can be very effective in capturing the minds and eliciting emotions in the individual. Research about novels, films, fairy tales and television programs has shown that entertainment can generate emotions such as joy, awe, compassion, fear and anger (Oatley, 1999; Tan 1996; Valkenburg Cantor & Peeters, 2000, cited in Jansz et al., 2005). Video games also have the capacity to generate such emotions and have the capacity for players to become both immersed in, and dissociated from, the video game. Dissociation and Immersion It is clear that dissociation is a somewhat “fuzzy” concept as there is no clear accepted definition of what it actually constitutes (Griffiths, Wood, Parke, & Parke, 2006). Most would agree that dissociation is a form of altered state of consciousness. However, dissociative behaviours lie on a continuum and range from individuals losing track of time, feeling like they are someone else, blacking out, not recalling how they got somewhere or what they did, and being in a trance like state (Griffiths et al., 2006). Studies have found that dissociation is related to an extensive involvement in fantasizing, and daydreaming (Giesbrecht, Geraerts, & Merckelbach, 2007). 
Dissociative phenomena of the non-pathological type include absorption and imaginative involvement (Griffith et al., 2006) and are psychological phenomena that can occur during video game playing. Anyone can, to some degree, experience dissociative states in their daily lives (Giesbrecht et al., 2007). Furthermore, these states can happen episodically and can be situationally triggered (Griffiths et al., 2006). When people become engaged in games they may experience psychological absorption. More commonly known as ‘immersion’, this refers to when individual logical integration of thoughts, feelings and experiences is suspended (Funk, Chan, Brouwer, & Curtiss, 2006; Wood, Griffiths, & Parke, 2007). This can incur an altered state of consciousness such as altered time perception and change in degree of control over cognitive functioning (Griffiths et al., 2006). Video game enjoyment has been associated with absorption and immersion experiences (IJsselsteijn, Kort, de Poels, Jurgelionis, & Belotti, 2007). How an individual can get immersed in video games has been explained by the phenomenon of ‘flow’ (Csikszentmihalyi, 1988). Flow refers to the optimum experience a person achieves when performing an activity (e.g., video game playing) and may be induced, in part, by the structural characteristics of the activity itself. Structural characteristics of video games (i.e., the game elements that are incorporated into the game by the games designers) are usually based on a balance between skill and challenge (Wood et al., 2004; King, Delfabbro, & Griffiths, 2010), and help make playing video games an intrinsically rewarding activity (Csikszentmihalyi, 1988; King, et al. 2010). Studying Video Game Playing Studying the effects of video game playing requires taking in consideration four independent dimensions suggested by Gentile and Stone (2005); amount, content, form, and mechanism. The amount is understood as the time spent playing and gaming habits. Content refers to the message and topic delivered by the video game. Form focuses on the types of activity necessary to perform in the video game. The mechanism refers to the input-output devices used, which 17 more pages are available in the full version of this document, which may be purchased using the \"Add to Cart\" button on the product's webpage: www.igi-global.com/article/game-transfer-phenomena-videogame/58041?camid=4v1 This title is available in InfoSci-Journals, InfoSci-Journal Disciplines Communications and Social Science, InfoSciCommunications, Online Engagement, and Media eJournal Collection, InfoSci-Educational Leadership, Administration, and Technologies eJournal Collection, InfoSci-Healthcare Administration, Clinical Practice, and Bioinformatics eJournal Collection, InfoSci-Select, InfoSci-Journal Disciplines Library Science, Information Studies, and Education, InfoSci-Journal Disciplines Medicine, Healthcare, and Life Science. Recommend this product to your librarian: www.igi-global.com/e-resources/libraryrecommendation/?id=2",
"title": ""
},
{
"docid": "2390d3d6c51c4a6857c517eb2c2cb3c0",
"text": "It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. In general, each of these approaches extends a conventional process modeling language with constructs to capture customizable process models. A customizable process model represents a family of process variants in a way that a model of each variant can be derived by adding or deleting fragments according to customization options or according to a domain model. This survey draws up a systematic inventory of approaches to customizable process modeling and provides a comparative evaluation with the aim of identifying common and differentiating modeling features, providing criteria for selecting among multiple approaches, and identifying gaps in the state of the art. The survey puts into evidence an abundance of customizable process-modeling languages, which contrasts with a relative scarcity of available tool support and empirical comparative evaluations.",
"title": ""
},
{
"docid": "9676c561df01b794aba095dc66b684f8",
"text": "The differentiation of B lymphocytes in the bone marrow is guided by the surrounding microenvironment determined by cytokines, adhesion molecules, and the extracellular matrix. These microenvironmental factors are mainly provided by stromal cells. In this paper, we report the identification of a VCAM-1-positive stromal cell population by flow cytometry. This population showed the expression of cell surface markers known to be present on stromal cells (CD10, CD13, CD90, CD105) and had a fibroblastoid phenotype in vitro. Single cell RT-PCR analysis of its cytokine expression pattern revealed transcripts for haematopoietic cytokines important for either the early B lymphopoiesis like flt3L or the survival of long-lived plasma cells like BAFF or both processes like SDF-1. Whereas SDF-1 transcripts were detectable in all VCAM-1-positive cells, flt3L and BAFF were only expressed by some cells suggesting the putative existence of different subpopulations with distinct functional properties. In summary, the VCAM-1-positive cell population seems to be a candidate stromal cell population supporting either developing B cells and/or long-lived plasma cells in human bone marrow.",
"title": ""
},
{
"docid": "9c28badf1e53e69452c1d7aad2a87fab",
"text": "While an al dente character of 5G is yet to emerge, network densification, miscellany of node types, split of control and data plane, network virtualization, heavy and localized cache, infrastructure sharing, concurrent operation at multiple frequency bands, simultaneous use of different medium access control and physical layers, and flexible spectrum allocations can be envisioned as some of the potential ingredients of 5G. It is not difficult to prognosticate that with such a conglomeration of technologies, the complexity of operation and OPEX can become the biggest challenge in 5G. To cope with similar challenges in the context of 3G and 4G networks, recently, self-organizing networks, or SONs, have been researched extensively. However, the ambitious quality of experience requirements and emerging multifarious vision of 5G, and the associated scale of complexity and cost, demand a significantly different, if not totally new, approach toward SONs in order to make 5G technically as well as financially feasible. In this article we first identify what challenges hinder the current self-optimizing networking paradigm from meeting the requirements of 5G. We then propose a comprehensive framework for empowering SONs with big data to address the requirements of 5G. Under this framework we first characterize big data in the context of future mobile networks, identifying its sources and future utilities. We then explicate the specific machine learning and data analytics tools that can be exploited to transform big data into the right data that provides a readily useable knowledge base to create end-to-end intelligence of the network. We then explain how a SON engine can build on the dynamic models extractable from the right data. The resultant dynamicity of a big data empowered SON (BSON) makes it more agile and can essentially transform the SON from being a reactive to proactive paradigm and hence act as a key enabler for 5G's extremely low latency requirements. Finally, we demonstrate the key concepts of our proposed BSON framework through a case study of a problem that the classic 3G/4G SON fails to solve.",
"title": ""
},
{
"docid": "12af7a639f885a173950304cf44b5a42",
"text": "Objective:To compare fracture rates in four diet groups (meat eaters, fish eaters, vegetarians and vegans) in the Oxford cohort of the European Prospective Investigation into Cancer and Nutrition (EPIC-Oxford).Design:Prospective cohort study of self-reported fracture risk at follow-up.Setting:The United Kingdom.Subjects:A total of 7947 men and 26 749 women aged 20–89 years, including 19 249 meat eaters, 4901 fish eaters, 9420 vegetarians and 1126 vegans, recruited by postal methods and through general practice surgeries.Methods:Cox regression.Results:Over an average of 5.2 years of follow-up, 343 men and 1555 women reported one or more fractures. Compared with meat eaters, fracture incidence rate ratios in men and women combined adjusted for sex, age and non-dietary factors were 1.01 (95% CI 0.88–1.17) for fish eaters, 1.00 (0.89–1.13) for vegetarians and 1.30 (1.02–1.66) for vegans. After further adjustment for dietary energy and calcium intake the incidence rate ratio among vegans compared with meat eaters was 1.15 (0.89–1.49). Among subjects consuming at least 525 mg/day calcium the corresponding incidence rate ratios were 1.05 (0.90–1.21) for fish eaters, 1.02 (0.90–1.15) for vegetarians and 1.00 (0.69–1.44) for vegans.Conclusions:In this population, fracture risk was similar for meat eaters, fish eaters and vegetarians. The higher fracture risk in the vegans appeared to be a consequence of their considerably lower mean calcium intake. An adequate calcium intake is essential for bone health, irrespective of dietary preferences.Sponsorship:The EPIC-Oxford study is supported by The Medical Research Council and Cancer Research UK.",
"title": ""
},
{
"docid": "b1e039673d60defd9b8699074235cf1b",
"text": "Sentiment classification has undergone significant development in recent years. However, most existing studies assume the balance between negative and positive samples, which may not be true in reality. In this paper, we investigate imbalanced sentiment classification instead. In particular, a novel clustering-based stratified under-sampling framework and a centroid-directed smoothing strategy are proposed to address the imbalanced class and feature distribution problems respectively. Evaluation across different datasets shows the effectiveness of both the under-sampling framework and the smoothing strategy in handling the imbalanced problems in real sentiment classification applications.",
"title": ""
},
{
"docid": "8aacdb790ddec13f396a0591c0cd227a",
"text": "This paper reports on a qualitative study of journal entries written by students in six health professions participating in the Interprofessional Health Mentors program at the University of British Columbia, Canada. The study examined (1) what health professions students learn about professional language and communication when given the opportunity, in an interprofessional group with a patient or client, to explore the uses, meanings, and effects of common health care terms, and (2) how health professional students write about their experience of discussing common health care terms, and what this reveals about how students see their development of professional discourse and participation in a professional discourse community. Using qualitative thematic analysis to address the first question, the study found that discussion of these health care terms provoked learning and reflection on how words commonly used in one health profession can be understood quite differently in other health professions, as well as on how health professionals' language choices may be perceived by patients and clients. Using discourse analysis to address the second question, the study further found that many of the students emphasized accuracy and certainty in language through clear definitions and intersubjective agreement. However, when prompted by the discussion they were willing to consider other functions and effects of language.",
"title": ""
},
{
"docid": "26feac05cc1827728cbcb6be3b4bf6d1",
"text": "This paper presents a Linux kernel module, DigSig, which helps system administrators control Executable and Linkable Format (ELF) binary execution and library loading based on the presence of a valid digital signature. By preventing attackers from replacing libraries and sensitive, privileged system daemons with malicious code, DigSig increases the difficulty of hiding illicit activities such as access to compromised systems. DigSig provides system administrators with an efficient tool which mitigates the risk of running malicious code at run time. This tool adds extra functionality previously unavailable for the Linux operating system: kernel level RSA signature verification with caching and revocation of signatures.",
"title": ""
},
{
"docid": "a134fe9ffdf7d99593ad9cdfd109b89d",
"text": "A hybrid particle swarm optimization (PSO) for the job shop problem (JSP) is proposed in this paper. In previous research, PSO particles search solutions in a continuous solution space. Since the solution space of the JSP is discrete, we modified the particle position representation, particle movement, and particle velocity to better suit PSO for the JSP. We modified the particle position based on preference list-based representation, particle movement based on swap operator, and particle velocity based on the tabu list concept in our algorithm. Giffler and Thompson’s heuristic is used to decode a particle position into a schedule. Furthermore, we applied tabu search to improve the solution quality. The computational results show that the modified PSO performs better than the original design, and that the hybrid PSO is better than other traditional metaheuristics. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "76f033087b24fdb7494dd7271adbb346",
"text": "Navigation research is attracting renewed interest with the advent of learning-based methods. However, this new line of work is largely disconnected from well-established classic navigation approaches. In this paper, we take a step towards coordinating these two directions of research. We set up classic and learning-based navigation systems in common simulated environments and thoroughly evaluate them in indoor spaces of varying complexity, with access to different sensory modalities. Additionally, we measure human performance in the same environments. We find that a classic pipeline, when properly tuned, can perform very well in complex cluttered environments. On the other hand, learned systems can operate more robustly with a limited sensor suite. Both approaches are still far from human-level performance.",
"title": ""
},
{
"docid": "21d84bd9ea7896892a3e69a707b03a6a",
"text": "Tahoe is a system for secure, distributed storage. It uses capabilities for access control, cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It has been deployed in a commercial backup service and is currently operational. The implementation is Open Source.",
"title": ""
},
{
"docid": "3230fba68358a08ab9112887bdd73bb9",
"text": "The local field potential (LFP) reflects activity of many neurons in the vicinity of the recording electrode and is therefore useful for studying local network dynamics. Much of the nature of the LFP is, however, still unknown. There are, for instance, contradicting reports on the spatial extent of the region generating the LFP. Here, we use a detailed biophysical modeling approach to investigate the size of the contributing region by simulating the LFP from a large number of neurons around the electrode. We find that the size of the generating region depends on the neuron morphology, the synapse distribution, and the correlation in synaptic activity. For uncorrelated activity, the LFP represents cells in a small region (within a radius of a few hundred micrometers). If the LFP contributions from different cells are correlated, the size of the generating region is determined by the spatial extent of the correlated activity.",
"title": ""
},
{
"docid": "e00295dc86476d1d350d11068439fe87",
"text": "A 10-bit LCD column driver, consisting of piecewise linear digital to analog converters (DACs), is proposed. Piecewise linear compensation is utilized to reduce the die area and to increase the effective color depth. The data conversion is carried out by a resistor string type DAC (R-DAC) and a charge sharing DAC, which are used for the most significant bit and least significant bit data conversions, respectively. Gamma correction voltages are applied to the R-DAC to lit the inverse of the liquid crystal trans-mittance-voltage characteristic. The gamma correction can also be digitally fine-tuned in the timing controller or column drivers. A prototype 10-bit LCD column driver implemented in a 0.35-mum CMOS technology demonstrates that the settling time is within 3 mus and the average die size per channel is 0.063 mm2, smaller than those of column drivers based exclusively on R-DACs.",
"title": ""
},
{
"docid": "4c261e2b54a12270f158299733942a5f",
"text": "Applying Data Mining (DM) in education is an emerging interdisciplinary research field also known as Educational Data Mining (EDM). Ensemble techniques have been successfully applied in the context of supervised learning to increase the accuracy and stability of prediction. In this paper, we present a hybrid procedure based on ensemble classification and clustering that enables academicians to firstly predict students’ academic performance and then place each student in a well-defined cluster for further advising. Additionally, it endows instructors an anticipated estimation of their students’ capabilities during team forming and in-class participation. For ensemble classification, we use multiple classifiers (Decision Trees-J48, Naïve Bayes and Random Forest) to improve the quality of student data by eliminating noisy instances, and hence improving predictive accuracy. We then use the approach of bootstrap (sampling with replacement) averaging, which consists of running k-means clustering algorithm to convergence of the training data and averaging similar cluster centroids to obtain a single model. We empirically compare our technique with other ensemble techniques on real world education datasets.",
"title": ""
},
{
"docid": "2a7b7d9fab496be18f6bf50add2f7b1e",
"text": "BACKROUND\nSuperior Mesenteric Artery Syndrome (SMAS) is a rare disorder caused by compression of the third portion of the duodenum by the SMA. Once a conservative approach fails, usual surgical strategies include Duodenojejunostomy and Strong's procedure. The latter avoids potential anastomotic risks and complications. Robotic Strong's procedure (RSP) combines both the benefits of a minimal invasive approach and also enchased robotic accuracy and efficacy.\n\n\nMETHODS\nFor a young girl who was unsuccessfully treated conservatively, the paper describes the RSP surgical technique. To the authors' knowledge, this is the first report in the literature.\n\n\nRESULTS\nMinimal blood loss, short operative time, short hospital stay and early recovery were the short-term benefits. Significant weight gain was achieved three months after the surgery.\n\n\nCONCLUSION\nBased on primary experience, it is suggested that RSP is a very effective alternative in treating SMAS.",
"title": ""
},
{
"docid": "d18c53be23600c9b0ae2efa215c7c4af",
"text": "The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed-up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach.",
"title": ""
},
{
"docid": "c32c1c16aec9bc6dcfb5fa8fb4f25140",
"text": "Logo detection is a challenging task with many practical applications in our daily life and intellectual property protection. The two main obstacles here are lack of public logo datasets and effective design of logo detection structure. In this paper, we first manually collected and annotated 6,400 images and mix them with FlickrLogo-32 dataset, forming a larger dataset. Secondly, we constructed Faster R-CNN frameworks with several widely used classification models for logo detection. Furthermore, the transfer learning method was introduced in the training process. Finally, clustering was used to guarantee suitable hyper-parameters and more precise anchors of RPN. Experimental results show that the proposed framework outper-forms the state of-the-art methods with a noticeable margin.",
"title": ""
}
] | scidocsrr |
b0772812a9182f6354e8b447ff0558a0 | Maximum Power Point Tracking for PV system under partial shading condition via particle swarm optimization | [
{
"docid": "470093535d4128efa9839905ab2904a5",
"text": "Photovolatic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load when variations in the insolation and temperature occur. It overcomes the problem of mismatch between the solar arrays and the given load. A simple method of tracking the maximum power points (MPP’s) and forcing the system to operate close to these points is presented. The principle of energy conservation is used to derive the largeand small-signal model and transfer function. By using the proposed model, the drawbacks of the state-space-averaging method can be overcome. The TI320C25 digital signal processor (DSP) was used to implement the proposed MPPT controller, which controls the dc/dc converter in the photovoltaic system. Simulations and experimental results show excellent performance.",
"title": ""
}
] | [
{
"docid": "e4132ac9af863c2c17489817898dbd1c",
"text": "This paper presents automatic parallel parking for car-like vehicle, with highlights on a path planning algorithm for arbitrary initial angle using two tangential arcs of different radii. The algorithm is divided into three parts. Firstly, a simple kinematic model of the vehicle is established based on Ackerman steering geometry; secondly, not only a minimal size of the parking space is analyzed based on the size and the performance of the vehicle but also an appropriate target point is chosen based on the size of the parking space and the vehicle; Finally, a path is generated based on two tangential arcs of different radii. The simulation results show that the feasibility of the proposed algorithm.",
"title": ""
},
{
"docid": "26095dbc82b68c32881ad9316256bc42",
"text": "BACKGROUND\nSchizophrenia causes great suffering for patients and families. Today, patients are treated with medications, but unfortunately many still have persistent symptoms and an impaired quality of life. During the last 20 years of research in cognitive behavioral therapy (CBT) for schizophrenia, evidence has been found that the treatment is good for patients but it is not satisfactory enough, and more studies are being carried out hopefully to achieve further improvement.\n\n\nPURPOSE\nClinical trials and meta-analyses are being used to try to prove the efficacy of CBT. In this article, we summarize recent research using the cognitive model for people with schizophrenia.\n\n\nMETHODS\nA systematic search was carried out in PubMed (Medline). Relevant articles were selected if they contained a description of cognitive models for schizophrenia or psychotic disorders.\n\n\nRESULTS\nThere is now evidence that positive and negative symptoms exist in a continuum, from normality (mild form and few symptoms) to fully developed disease (intensive form with many symptoms). Delusional patients have reasoning bias such as jumping to conclusions, and those with hallucination have impaired self-monitoring and experience their own thoughts as voices. Patients with negative symptoms have negative beliefs such as low expectations regarding pleasure and success. In the entire patient group, it is common to have low self-esteem.\n\n\nCONCLUSIONS\nThe cognitive model integrates very well with the aberrant salience model. It takes into account neurobiology, cognitive, emotional and social processes. The therapist uses this knowledge when he or she chooses techniques for treatment of patients.",
"title": ""
},
{
"docid": "49ff105e4bd35d88e2cbf988e22a7a3a",
"text": "Personality testing is a popular method that used to be commonly employed in selection decisions in organizational settings. However, it is also a controversial practice according to a number researcher who claims that especially explicit measures of personality may be prone to the negative effects of faking and response distortion. The first aim of the present paper is to summarize Morgeson, Morgeson, Campion, Dipboye, Hollenbeck, Murphy and Schmitt’s paper that discussed the limitations of personality testing for performance ratings in relation to its basic conclusions about faking and response distortion. Secondly, the results of Rosse, Stecher, Miller and Levin’s study that investigated the effects of faking in personality testing on selection decisions will be discussed in detail. Finally, recent research findings related to implicit personality measures will be introduced along with the examples of the results related to the implications of those measures for response distortion in personality research and the suggestions for future research.",
"title": ""
},
{
"docid": "1d1f14cb78693e56d014c89eacfcc3ef",
"text": "We undertook a meta-analysis of six Crohn's disease genome-wide association studies (GWAS) comprising 6,333 affected individuals (cases) and 15,056 controls and followed up the top association signals in 15,694 cases, 14,026 controls and 414 parent-offspring trios. We identified 30 new susceptibility loci meeting genome-wide significance (P < 5 × 10−8). A series of in silico analyses highlighted particular genes within these loci and, together with manual curation, implicated functionally interesting candidate genes including SMAD3, ERAP2, IL10, IL2RA, TYK2, FUT2, DNMT3A, DENND1B, BACH2 and TAGAP. Combined with previously confirmed loci, these results identify 71 distinct loci with genome-wide significant evidence for association with Crohn's disease.",
"title": ""
},
{
"docid": "9a7016a02eda7fcae628197b0625832b",
"text": "We present a vertical-silicon-nanowire-based p-type tunneling field-effect transistor (TFET) using CMOS-compatible process flow. Following our recently reported n-TFET , a low-temperature dopant segregation technique was employed on the source side to achieve steep dopant gradient, leading to excellent tunneling performance. The fabricated p-TFET devices demonstrate a subthreshold swing (SS) of 30 mV/decade averaged over a decade of drain current and an Ion/Ioff ratio of >; 105. Moreover, an SS of 50 mV/decade is maintained for three orders of drain current. This demonstration completes the complementary pair of TFETs to implement CMOS-like circuits.",
"title": ""
},
{
"docid": "c4fe9fd7e506e18f1a38bc71b7434b99",
"text": "We introduce Evenly Cascaded convolutional Network (ECN), a neural network taking inspiration from the cascade algorithm of wavelet analysis. ECN employs two feature streams - a low-level and high-level steam. At each layer these streams interact, such that low-level features are modulated using advanced perspectives from the high-level stream. ECN is evenly structured through resizing feature map dimensions by a consistent ratio, which removes the burden of ad-hoc specification of feature map dimensions. ECN produces easily interpretable features maps, a result whose intuition can be understood in the context of scale-space theory. We demonstrate that ECN’s design facilitates the training process through providing easily trainable shortcuts. We report new state-of-the-art results for small networks, without the need for additional treatment such as pruning or compression - a consequence of ECN’s simple structure and direct training. A 6-layered ECN design with under 500k parameters achieves 95.24% and 78.99% accuracy on CIFAR-10 and CIFAR-100 datasets, respectively, outperforming the current state-of-the-art on small parameter networks, and a 3 million parameter ECN produces results competitive to the state-of-the-art.",
"title": ""
},
{
"docid": "2d0cc4c7ca6272200bb1ed1c9bba45f0",
"text": "Advanced Driver Assistance Systems (ADAS) based on video camera tends to be generalized in today's automotive. However, if most of these systems perform nicely in good weather conditions, they perform very poorly under adverse weather particularly under rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing technics to highlight raindrops. Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.",
"title": ""
},
{
"docid": "65a990303d1d6efd3aea5307e7db9248",
"text": "The presentation of news articles to meet research needs has traditionally been a document-centric process. Yet users often want to monitor developing news stories based on an event, rather than by examining an exhaustive list of retrieved documents. In this work, we illustrate a news retrieval system, eventNews, and an underlying algorithm which is event-centric. Through this system, news articles are clustered around a single news event or an event and its sub-events. The algorithm presented can leverage the creation of new Reuters stories and their compact labels as seed documents for the clustering process. The system is configured to generate top-level clusters for news events based on an editorially supplied topical label, known as a ‘slugline,’ and to generate sub-topic-focused clusters based on the algorithm. The system uses an agglomerative clustering algorithm to gather and structure documents into distinct result sets. Decisions on whether to merge related documents or clusters are made according to the similarity of evidence derived from two distinct sources, one, relying on a digital signature based on the unstructured text in the document, the other based on the presence of named entity tags that have been assigned to the document by a named entity tagger, in this case Thomson Reuters’ Calais engine. Copyright c © 2016 for the individual papers by the paper’s authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors. In: M. Martinez, U. Kruschwitz, G. Kazai, D. Corney, F. Hopfgartner, R. Campos and D. Albakour (eds.): Proceedings of the NewsIR’16 Workshop at ECIR, Padua, Italy, 20-March-2016, published at http://ceur-ws.org",
"title": ""
},
{
"docid": "6e8cf6a53e1a9d571d5e5d1644c56e57",
"text": "Previous research on relation classification has verified the effectiveness of using dependency shortest paths or subtrees. In this paper, we further explore how to make full use of the combination of these dependency information. We first propose a new structure, termed augmented dependency path (ADP), which is composed of the shortest dependency path between two entities and the subtrees attached to the shortest path. To exploit the semantic representation behind the ADP structure, we develop dependency-based neural networks (DepNN): a recursive neural network designed to model the subtrees, and a convolutional neural network to capture the most important features on the shortest path. Experiments on the SemEval-2010 dataset show that our proposed method achieves state-of-art results.",
"title": ""
},
{
"docid": "9814af3a2c855717806ad7496d21f40e",
"text": "This chapter gives an extended introduction to the lightweight profiles OWL EL, OWL QL, and OWL RL of the Web Ontology Language OWL. The three ontology language standards are sublanguages of OWL DL that are restricted in ways that significantly simplify ontological reasoning. Compared to OWL DL as a whole, reasoning algorithms for the OWL profiles show higher performance, are easier to implement, and can scale to larger amounts of data. Since ontological reasoning is of great importance for designing and deploying OWL ontologies, the profiles are highly attractive for many applications. These advantages come at a price: various modelling features of OWL are not available in all or some of the OWL profiles. Moreover, the profiles are mutually incomparable in the sense that each of them offers a combination of features that is available in none of the others. This chapter provides an overview of these differences and explains why some of them are essential to retain the desired properties. To this end, we recall the relationship between OWL and description logics (DLs), and show how each of the profiles is typically treated in reasoning algorithms.",
"title": ""
},
{
"docid": "1f93c117c048be827d0261f419c9cce3",
"text": "Due to increasing number of internet users, popularity of Broadband Internet also increasing. Hence the connection cost should be decrease due to Wi Fi connectivity and built-in sensors in devices as well the maximum number of devices should be connected through a common medium. To meet all these requirements, the technology so called Internet of Things is evolved. Internet of Things (IoT) can be considered as a connection of computing devices like smart phones, coffee maker, washing machines, wearable device with an internet. IoT create network and connect \"things\" and people together by creating relationship between either people-people, people-things or things-things. As the number of device connection is increased, it increases the Security risk. Security is the biggest issue for IoT at any companies across the globe. Furthermore, privacy and data sharing can again be considered as a security concern for IoT. Companies, those who use IoT technique, need to find a way to store, track, analyze and make sense of the large amounts of data that will be generated. Few security techniques of IoT are necessary to implement to protect your confidential and important data as well for device protection through some internet security threats.",
"title": ""
},
{
"docid": "e62e09ce3f4f135b12df4d643df02de6",
"text": "Septic arthritis/tenosynovitis in the horse can have life-threatening consequences. The purpose of this cross-sectional retrospective study was to describe ultrasound characteristics of septic arthritis/tenosynovitis in a group of horses. Diagnosis of septic arthritis/tenosynovitis was based on historical and clinical findings as well as the results of the synovial fluid analysis and/or positive synovial culture. Ultrasonographic findings recorded were degree of joint/sheath effusion, degree of synovial membrane thickening, echogenicity of the synovial fluid, and presence of hyperechogenic spots and fibrinous loculations. Ultrasonographic findings were tested for dependence on the cause of sepsis, time between admission and beginning of clinical signs, and the white blood cell counts in the synovial fluid. Thirty-eight horses with confirmed septic arthritis/tenosynovitis of 43 joints/sheaths were included. Degree of effusion was marked in 81.4% of cases, mild in 16.3%, and absent in 2.3%. Synovial thickening was mild in 30.9% of cases and moderate/severe in 69.1%. Synovial fluid was anechogenic in 45.2% of cases and echogenic in 54.8%. Hyperechogenic spots were identified in 32.5% of structures and fibrinous loculations in 64.3%. Relationships between the degree of synovial effusion, degree of the synovial thickening, presence of fibrinous loculations, and the time between admission and beginning of clinical signs were identified, as well as between the presence of fibrinous loculations and the cause of sepsis (P ≤ 0.05). Findings indicated that ultrasonographic findings of septic arthritis/tenosynovitis may vary in horses, and may be influenced by time between admission and beginning of clinical signs.",
"title": ""
},
{
"docid": "41d97d98a524e5f1e45ae724017819d9",
"text": "Dynamically changing (reconfiguring) the membership of a replicated distributed system while preserving data consistency and system availability is a challenging problem. In this paper, we show that reconfiguration can be simplified by taking advantage of certain properties commonly provided by Primary/Backup systems. We describe a new reconfiguration protocol, recently implemented in Apache Zookeeper. It fully automates configuration changes and minimizes any interruption in service to clients while maintaining data consistency. By leveraging the properties already provided by Zookeeper our protocol is considerably simpler than state of the art.",
"title": ""
},
{
"docid": "9d75520f138bcf7c529488f29d01efbb",
"text": "High utilization of cargo volume is an essential factor in the success of modern enterprises in the market. Although mathematical models have been presented for container loading problems in the literature, there is still a lack of studies that consider practical constraints. In this paper, a Mixed Integer Linear Programming is developed for the problem of packing a subset of rectangular boxes inside a container such that the total value of the packed boxes is maximized while some realistic constraints, such as vertical stability, are considered. The packing is orthogonal, and the boxes can be freely rotated into any of the six orientations. Moreover, a sequence triple-based solution methodology is proposed, simulated annealing is used as modeling technique, and the situation where some boxes are preplaced in the container is investigated. These preplaced boxes represent potential obstacles. Numerical experiments are conducted for containers with and without obstacles. The results show that the simulated annealing approach is successful and can handle large number of packing instances.",
"title": ""
},
{
"docid": "d5907911dfa7340b786f85618702ac12",
"text": "In recent years many popular data visualizations have emerged that are created largely by designers whose main area of expertise is not computer science. Designers generate these visualizations using a handful of design tools and environments. To better inform the development of tools intended for designers working with data, we set out to understand designers' challenges and perspectives. We interviewed professional designers, conducted observations of designers working with data in the lab, and observed designers working with data in team settings in the wild. A set of patterns emerged from these observations from which we extract a number of themes that provide a new perspective on design considerations for visualization tool creators, as well as on known engineering problems.",
"title": ""
},
{
"docid": "baad4c23994bafbdfba2a3d566c83558",
"text": "Memories today expose an all-or-nothing correctness model that incurs significant costs in performance, energy, area, and design complexity. But not all applications need high-precision storage for all of their data structures all of the time. This article proposes mechanisms that enable applications to store data approximately and shows that doing so can improve the performance, lifetime, or density of solid-state memories. We propose two mechanisms. The first allows errors in multilevel cells by reducing the number of programming pulses used to write them. The second mechanism mitigates wear-out failures and extends memory endurance by mapping approximate data onto blocks that have exhausted their hardware error correction resources. Simulations show that reduced-precision writes in multilevel phase-change memory cells can be 1.7 × faster on average and using failed blocks can improve array lifetime by 23% on average with quality loss under 10%.",
"title": ""
},
{
"docid": "a31652c0236fb5da569ffbf326eb29e5",
"text": "Since 2012, citizens in Alaska, Colorado, Oregon, and Washington have voted to legalize the recreational use of marijuana by adults. Advocates of legalization have argued that prohibition wastes scarce law enforcement resources by selectively arresting minority users of a drug that has fewer adverse health effects than alcohol.1,2 It would be better, they argue, to legalize, regulate, and tax marijuana, like alcohol.3 Opponents of legalization argue that it will increase marijuana use among youth because it will make marijuana more available at a cheaper price and reduce the perceived risks of its use.4 Cerdá et al5 have assessed these concerns by examining the effects of marijuana legalization in Colorado and Washington on attitudes toward marijuana and reported marijuana use among young people. They used surveys from Monitoring the Future between 2010 and 2015 to examine changes in the perceived risks of occasional marijuana use and self-reported marijuana use in the last 30 days among students in eighth, 10th, and 12th grades in Colorado and Washington before and after legalization. They compared these changes with changes among students in states in the contiguous United States that had not legalized marijuana (excluding Oregon, which legalized in 2014). The perceived risks of using marijuana declined in all states, but there was a larger decline in perceived risks and a larger increase in marijuana use in the past 30 days among eighth and 10th graders from Washington than among students from other states. They did not find any such differences between students in Colorado and students in other US states that had not legalized, nor did they find any of these changes in 12th graders in Colorado or Washington. If the changes observed in Washington are attributable to legalization, why were there no changes found in Colorado? The authors suggest that this may have been because Colorado’s medical marijuana laws were much more liberal before legalization than those in Washington. After 2009, Colorado permitted medical marijuana to be supplied through for-profit dispensaries and allowed advertising of medical marijuana products. This hypothesisissupportedbyotherevidencethattheperceivedrisks of marijuana use decreased and marijuana use increased among young people in Colorado after these changes in 2009.6",
"title": ""
},
{
"docid": "42d3f666325c3c9e2d61fcbad3c6659a",
"text": "Supernumerary or accessory nostrils are a very rare type of congenital nasal anomaly, with only a few cases reported in the literature. They can be associated with such malformations as facial clefts and they can be unilateral or bilateral, with most cases reported being unilateral. The accessory nostril may or may not communicate with the ipsilateral nasal cavity, probably depending on the degree of embryological progression of the anomaly. A case of simple supernumerary left nostril with no nasal cavity communication and with a normally developed nose is presented. The surgical treatment is described and the different speculative theories related to the embryogenesis of supernumerary nostrils are also reviewed.",
"title": ""
},
{
"docid": "468cdc4decf3871314ce04d6e49f6fad",
"text": "Documents come naturally with structure: a section contains paragraphs which itself contains sentences; a blog page contains a sequence of comments and links to related blogs. Structure, of course, implies something about shared topics. In this paper we take the simplest form of structure, a document consisting of multiple segments, as the basis for a new form of topic model. To make this computationally feasible, and to allow the form of collapsed Gibbs sampling that has worked well to date with topic models, we use the marginalized posterior of a two-parameter Poisson-Dirichlet process (or Pitman-Yor process) to handle the hierarchical modelling. Experiments using either paragraphs or sentences as segments show the method significantly outperforms standard topic models on either whole document or segment, and previous segmented models, based on the held-out perplexity measure.",
"title": ""
},
{
"docid": "578130d8ef9d18041c84ed226af8c84a",
"text": "Ranking and scoring are ubiquitous. We consider the setting in which an institution, called a ranker, evaluates a set of individuals based on demographic, behavioral or other characteristics. The final output is a ranking that represents the relative quality of the individuals. While automatic and therefore seemingly objective, rankers can, and often do, discriminate against individuals and systematically disadvantage members of protected groups. This warrants a careful study of the fairness of a ranking scheme, to enable data science for social good applications, among others.\n In this paper we propose fairness measures for ranked outputs. We develop a data generation procedure that allows us to systematically control the degree of unfairness in the output, and study the behavior of our measures on these datasets. We then apply our proposed measures to several real datasets, and detect cases of bias. Finally, we show preliminary results of incorporating our ranked fairness measures into an optimization framework, and show potential for improving fairness of ranked outputs while maintaining accuracy.\n The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/FairRank.",
"title": ""
}
] | scidocsrr |
6b5140a6b1b2d1da1a1552aa0b4eeeb2 | Deep Q-learning From Demonstrations | [
{
"docid": "a3bce6c544a08e48a566a189f66d0131",
"text": "Model-free episodic reinforcement learning problems define the environment reward with functions that often provide only sparse information throughout the task. Consequently, agents are not given enough feedback about the fitness of their actions until the task ends with success or failure. Previous work addresses this problem with reward shaping. In this paper we introduce a novel approach to improve modelfree reinforcement learning agents’ performance with a three step approach. Specifically, we collect demonstration data, use the data to recover a linear function using inverse reinforcement learning and we use the recovered function for potential-based reward shaping. Our approach is model-free and scalable to high dimensional domains. To show the scalability of our approach we present two sets of experiments in a two dimensional Maze domain, and the 27 dimensional Mario AI domain. We compare the performance of our algorithm to previously introduced reinforcement learning from demonstration algorithms. Our experiments show that our approach outperforms the state-of-the-art in cumulative reward, learning rate and asymptotic performance.",
"title": ""
}
] | [
{
"docid": "8e6d17b6d7919d76cebbcefcc854573e",
"text": "Vincent Larivière École de bibliothéconomie et des sciences de l’information, Université de Montréal, C.P. 6128, Succ. CentreVille, Montréal, QC H3C 3J7, Canada, and Observatoire des Sciences et des Technologies (OST), Centre Interuniversitaire de Recherche sur la Science et la Technologie (CIRST), Université du Québec à Montréal, CP 8888, Succ. Centre-Ville, Montréal, QC H3C 3P8, Canada. E-mail: [email protected]",
"title": ""
},
{
"docid": "c1d5f28d264756303fded5faa65587a2",
"text": "English vocabulary learning and ubiquitous learning have separately received considerable attention in recent years. However, research on English vocabulary learning in ubiquitous learning contexts has been less studied. In this study, we develop a ubiquitous English vocabulary learning (UEVL) system to assist students in experiencing a systematic vocabulary learning process in which ubiquitous technology is used to develop the system, and video clips are used as the material. Afterward, the technology acceptance model and partial least squares approach are used to explore students’ perspectives on the UEVL system. The results indicate that (1) both the system characteristics and the material characteristics of the UEVL system positively and significantly influence the perspectives of all students on the system; (2) the active students are interested in perceived usefulness; (3) the passive students are interested in perceived ease of use. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0df26f2f40e052cde72048b7538548c3",
"text": "Keshif is an open-source, web-based data exploration environment that enables data analytics novices to create effective visual and interactive dashboards and explore relations with minimal learning time, and data analytics experts to explore tabular data in multiple perspectives rapidly with minimal setup time. In this paper, we present a high-level overview of the exploratory features and design characteristics of Keshif, as well as its API and a selection of its implementation specifics. We conclude with a discussion of its use as an open-source project.",
"title": ""
},
{
"docid": "b0a37782d653fa03843ecdc118a56034",
"text": "Non-frontal lip views contain useful information which can be used to enhance the performance of frontal view lipreading. However, the vast majority of recent lipreading works, including the deep learning approaches which significantly outperform traditional approaches, have focused on frontal mouth images. As a consequence, research on joint learning of visual features and speech classification from multiple views is limited. In this work, we present an end-to-end multi-view lipreading system based on Bidirectional Long-Short Memory (BLSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and performs visual speech classification from multiple views and also achieves state-of-the-art performance. The model consists of multiple identical streams, one for each view, which extract features directly from different poses of mouth images. The temporal dynamics in each stream/view are modelled by a BLSTM and the fusion of multiple streams/views takes place via another BLSTM. An absolute average improvement of 3% and 3.8% over the frontal view performance is reported on the OuluVS2 database when the best two (frontal and profile) and three views (frontal, profile, 45◦) are combined, respectively. The best three-view model results in a 10.5% absolute improvement over the current multi-view state-of-the-art performance on OuluVS2, without using external databases for training, achieving a maximum classification accuracy of 96.9%.",
"title": ""
},
{
"docid": "c02697087e8efd4c1ba9f9a26fa1115b",
"text": "OBJECTIVE\nTo estimate the current prevalence of limb loss in the United States and project the future prevalence to the year 2050.\n\n\nDESIGN\nEstimates were constructed using age-, sex-, and race-specific incidence rates for amputation combined with age-, sex-, and race-specific assumptions about mortality. Incidence rates were derived from the 1988 to 1999 Nationwide Inpatient Sample of the Healthcare Cost and Utilization Project, corrected for the likelihood of reamputation among those undergoing amputation for vascular disease. Incidence rates were assumed to remain constant over time and applied to historic mortality and population data along with the best available estimates of relative risk, future mortality, and future population projections. To investigate the sensitivity of our projections to increasing or decreasing incidence, we developed alternative sets of estimates of limb loss related to dysvascular conditions based on assumptions of a 10% or 25% increase or decrease in incidence of amputations for these conditions.\n\n\nSETTING\nCommunity, nonfederal, short-term hospitals in the United States.\n\n\nPARTICIPANTS\nPersons who were discharged from a hospital with a procedure code for upper-limb or lower-limb amputation or diagnosis code of traumatic amputation.\n\n\nINTERVENTIONS\nNot applicable.\n\n\nMAIN OUTCOME MEASURES\nPrevalence of limb loss by age, sex, race, etiology, and level in 2005 and projections to the year 2050.\n\n\nRESULTS\nIn the year 2005, 1.6 million persons were living with the loss of a limb. Of these subjects, 42% were nonwhite and 38% had an amputation secondary to dysvascular disease with a comorbid diagnosis of diabetes mellitus. It is projected that the number of people living with the loss of a limb will more than double by the year 2050 to 3.6 million. If incidence rates secondary to dysvascular disease can be reduced by 10%, this number would be lowered by 225,000.\n\n\nCONCLUSIONS\nOne in 190 Americans is currently living with the loss of a limb. Unchecked, this number may double by the year 2050.",
"title": ""
},
{
"docid": "031562142f7a2ffc64156f9d09865604",
"text": "The demand for video content is continuously increasing as video sharing on the Internet is becoming enormously popular recently. This demand, with its high bandwidth requirements, has a considerable impact on the load of the network infrastructure. As more users access videos from their mobile devices, the load on the current wireless infrastructure (which has limited capacity) will be even more significant. Based on observations from many local video sharing scenarios, in this paper, we study the tradeoffs of using Wi-Fi ad-hoc mode versus infrastructure mode for video streaming between adjacent devices. We thus show the potential of direct device-to-device communication as a way to reduce the load on the wireless infrastructure and to improve user experiences. Setting up experiments for WiFi devices connected in ad-hoc mode, we collect measurements for various video streaming scenarios and compare them to the case where the devices are connected through access points. The results show the improvements in latency, jitter and loss rate. More importantly, the results show that the performance in direct device-to-device streaming is much more stable in contrast to the access point case, where different factors affect the performance causing widely unpredictable qualities.",
"title": ""
},
{
"docid": "9bfba29f44c585df56062582d4e35ba5",
"text": "We address the problem of optimizing recommender systems for multiple relevance objectives that are not necessarily aligned. Specifically, given a recommender system that optimizes for one aspect of relevance, semantic matching (as defined by any notion of similarity between source and target of recommendation; usually trained on CTR), we want to enhance the system with additional relevance signals that will increase the utility of the recommender system, but that may simultaneously sacrifice the quality of the semantic match. The issue is that semantic matching is only one relevance aspect of the utility function that drives the recommender system, albeit a significant aspect. In talent recommendation systems, job posters want candidates who are a good match to the job posted, but also prefer those candidates to be open to new opportunities. Recommender systems that recommend discussion groups must ensure that the groups are relevant to the users' interests, but also need to favor active groups over inactive ones. We refer to these additional relevance signals (job-seeking intent and group activity) as extraneous features, and they account for aspects of the utility function that are not captured by the semantic match (i.e. post-CTR down-stream utilities that reflect engagement: time spent reading, sharing, commenting, etc). We want to include these extraneous features into the recommendations, but we want to do so while satisfying the following requirements: 1) we do not want to drastically sacrifice the quality of the semantic match, and 2) we want to quantify exactly how the semantic match would be affected as we control the different aspects of the utility function. In this paper, we present an approach that satisfies these requirements.\n We frame our approach as a general constrained optimization problem and suggest ways in which it can be solved efficiently by drawing from recent research on optimizing non-smooth rank metrics for information retrieval. Our approach features the following characteristics: 1) it is model and feature agnostic, 2) it does not require additional labeled training data to be collected, and 3) it can be easily incorporated into an existing model as an additional stage in the computation pipeline. We validate our approach in a revenue-generating recommender system that ranks billions of candidate recommendations on a daily basis and show that a significant improvement in the utility of the recommender system can be achieved with an acceptable and predictable degradation in the semantic match quality of the recommendations.",
"title": ""
},
{
"docid": "a3a29e4f0c25c5f1e09b590048a4a1c0",
"text": "We present DeepPicar, a low-cost deep neural network based autonomous car platform. DeepPicar is a small scale replication of a real self-driving car called DAVE-2 by NVIDIA. DAVE-2 uses a deep convolutional neural network (CNN), which takes images from a front-facing camera as input and produces car steering angles as output. DeepPicar uses the same network architecture—9 layers, 27 million connections and 250K parameters—and can drive itself in real-time using a web camera and a Raspberry Pi 3 quad-core platform. Using DeepPicar, we analyze the Pi 3's computing capabilities to support end-to-end deep learning based real-time control of autonomous vehicles. We also systematically compare other contemporary embedded computing platforms using the DeepPicar's CNN-based real-time control workload. We find that all tested platforms, including the Pi 3, are capable of supporting the CNN-based real-time control, from 20 Hz up to 100 Hz, depending on hardware platform. However, we find that shared resource contention remains an important issue that must be considered in applying CNN models on shared memory based embedded computing platforms; we observe up to 11.6X execution time increase in the CNN based control loop due to shared resource contention. To protect the CNN workload, we also evaluate state-of-the-art cache partitioning and memory bandwidth throttling techniques on the Pi 3. We find that cache partitioning is ineffective, while memory bandwidth throttling is an effective solution.",
"title": ""
},
{
"docid": "bfe76736623dfc3271be4856f5dc2eef",
"text": "Fact-related information contained in fictional narratives may induce substantial changes in readers’ real-world beliefs. Current models of persuasion through fiction assume that these effects occur because readers are psychologically transported into the fictional world of the narrative. Contrary to general dual-process models of persuasion, models of persuasion through fiction also imply that persuasive effects of fictional narratives are persistent and even increase over time (absolute sleeper effect). In an experiment designed to test this prediction, 81 participants read either a fictional story that contained true as well as false assertions about realworld topics or a control story. There were large short-term persuasive effects of false information, and these effects were even larger for a group with a two-week assessment delay. Belief certainty was weakened immediately after reading but returned to baseline level after two weeks, indicating that beliefs acquired by reading fictional narratives are integrated into realworld knowledge.",
"title": ""
},
{
"docid": "9688efb8845895d49029c07d397a336b",
"text": "Familial hypercholesterolaemia (FH) leads to elevated plasma levels of LDL-cholesterol and increased risk of premature atherosclerosis. Dietary treatment is recommended to all patients with FH in combination with lipid-lowering drug therapy. Little is known about how children with FH and their parents respond to dietary advice. The aim of the present study was to characterise the dietary habits in children with FH. A total of 112 children and young adults with FH and a non-FH group of children (n 36) were included. The children with FH had previously received dietary counselling. The FH subjects were grouped as: 12-14 years (FH (12-14)) and 18-28 years (FH (18-28)). Dietary data were collected by SmartDiet, a short self-instructing questionnaire on diet and lifestyle where the total score forms the basis for an overall assessment of the diet. Clinical and biochemical data were retrieved from medical records. The SmartDiet scores were significantly improved in the FH (12-14) subjects compared with the non-FH subjects (SmartDiet score of 31 v. 28, respectively). More FH (12-14) subjects compared with non-FH children consumed low-fat milk (64 v. 18 %, respectively), low-fat cheese (29 v. 3%, respectively), used margarine with highly unsaturated fat (74 v. 14 %, respectively). In all, 68 % of the FH (12-14) subjects and 55 % of the non-FH children had fish for dinner twice or more per week. The FH (18-28) subjects showed the same pattern in dietary choices as the FH (12-14) children. In contrast to the choices of low-fat dietary items, 50 % of the FH (12-14) subjects consumed sweet spreads or sweet drinks twice or more per week compared with only 21 % in the non-FH group. In conclusion, ordinary out-patient dietary counselling of children with FH seems to have a long-lasting effect, as the diet of children and young adults with FH consisted of more products that are favourable with regard to the fatty acid composition of the diet.",
"title": ""
},
{
"docid": "136278bd47962b54b644a77bbdaf77e3",
"text": "In this paper, we consider the grayscale template-matching problem, invariant to rotation, scale, translation, brightness and contrast, without previous operations that discard grayscale information, like detection of edges, detection of interest points or segmentation/binarization of the images. The obvious “brute force” solution performs a series of conventional template matchings between the image to analyze and the template query shape rotated by every angle, translated to every position and scaled by every factor (within some specified range of scale factors). Clearly, this takes too long and thus is not practical. We propose a technique that substantially accelerates this searching, while obtaining the same result as the original brute force algorithm. In some experiments, our algorithm was 400 times faster than the brute force algorithm. Our algorithm consists of three cascaded filters. These filters successively exclude pixels that have no chance of matching the template from further processing.",
"title": ""
},
{
"docid": "d6b87f5b6627f1a1ac5cc951c7fe0f28",
"text": "Despite a strong nonlinear behavior and a complex design, the interior permanent-magnet (IPM) machine is proposed as a good candidate among the PM machines owing to its interesting peculiarities, i.e., higher torque in flux-weakening operation, higher fault tolerance, and ability to adopt low-cost PMs. A second trend in designing PM machines concerns the adoption of fractional-slot (FS) nonoverlapped coil windings, which reduce the end winding length and consequently the Joule losses and the cost. Therefore, the adoption of an IPM machine with an FS winding aims to combine both advantages: high torque and efficiency in a wide operating region. However, the combination of an anisotropic rotor and an FS winding stator causes some problems. The interaction between the magnetomotive force harmonics due to the stator current and the rotor anisotropy causes a very high torque ripple. This paper illustrates a procedure in designing an IPM motor with the FS winding exhibiting a low torque ripple. The design strategy is based on two consecutive steps: at first, the winding is optimized by taking a multilayer structure, and then, the rotor geometry is optimized by adopting a nonsymmetric structure. As an example, a 12-slot 10-pole IPM machine is considered, achieving a torque ripple lower than 1.5% at full load.",
"title": ""
},
{
"docid": "ad091e4f66adb26d36abfc40377ee6ab",
"text": "This chapter provides a self-contained first introduction to description logics (DLs). The main concepts and features are explained with examples before syntax and semantics of the DL SROIQ are defined in detail. Additional sections review light-weight DL languages, discuss the relationship to the Web Ontology Language OWL and give pointers to further reading.",
"title": ""
},
{
"docid": "d38df66fe85b4d12093965e649a70fe1",
"text": "We describe the CoNLL-2002 shared task: language-independent named entity recognition. We give background information on the data sets and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance.",
"title": ""
},
{
"docid": "f783860e569d9f179466977db544bd01",
"text": "In medical research, continuous variables are often converted into categorical variables by grouping values into two or more categories. We consider in detail issues pertaining to creating just two groups, a common approach in clinical research. We argue that the simplicity achieved is gained at a cost; dichotomization may create rather than avoid problems, notably a considerable loss of power and residual confounding. In addition, the use of a data-derived 'optimal' cutpoint leads to serious bias. We illustrate the impact of dichotomization of continuous predictor variables using as a detailed case study a randomized trial in primary biliary cirrhosis. Dichotomization of continuous data is unnecessary for statistical analysis and in particular should not be applied to explanatory variables in regression models.",
"title": ""
},
{
"docid": "b83a0341f2ead9c72eda4217e0f31ea2",
"text": "Time-series classification has attracted considerable research attention due to the various domains where time-series data are observed, ranging from medicine to econometrics. Traditionally, the focus of time-series classification has been on short time-series data composed of a few patterns exhibiting variabilities, while recently there have been attempts to focus on longer series composed of multiple local patrepeating with an arbitrary irregularity. The primary contribution of this paper relies on presenting a method which can detect local patterns in repetitive time-series via fitting local polynomial functions of a specified degree. We capture the repetitiveness degrees of time-series datasets via a new measure. Furthermore, our method approximates local polynomials in linear time and ensures an overall linear running time complexity. The coefficients of the polynomial functions are converted to symbolic words via equi-area discretizations of the coefficients' distributions. The symbolic polynomial words enable the detection of similar local patterns by assigning the same word to similar polynomials. Moreover, a histogram of the frequencies of the words is constructed from each time-series' bag of words. Each row of the histogram enables a new representation for the series and symbolizes the occurrence of local patterns and their frequencies. In an experimental comparison against state-of-the-art baselines on repetitive datasets, our method demonstrates significant improvements in terms of prediction accuracy.",
"title": ""
},
{
"docid": "baa3d41ba1970125301b0fdd9380a966",
"text": "This article provides an alternative perspective for measuring author impact by applying PageRank algorithm to a coauthorship network. A weighted PageRank algorithm considering citation and coauthorship network topology is proposed. We test this algorithm under different damping factors by evaluating author impact in the informetrics research community. In addition, we also compare this weighted PageRank with the h-index, citation, and program committee (PC) membership of the International Society for Scientometrics and Informetrics (ISSI) conferences. Findings show that this weighted PageRank algorithm provides reliable results in measuring author impact.",
"title": ""
},
{
"docid": "c410b6cd3f343fc8b8c21e23e58013cd",
"text": "Virtualization is increasingly being used to address server management and administration issues like flexible resource allocation, service isolation and workload migration. In a virtualized environment, the virtual machine monitor (VMM) is the primary resource manager and is an attractive target for implementing system features like scheduling, caching, and monitoring. However, the lackof runtime information within the VMM about guest operating systems, sometimes called the semantic gap, is a significant obstacle to efficiently implementing some kinds of services.In this paper we explore techniques that can be used by a VMM to passively infer useful information about a guest operating system's unified buffer cache and virtual memory system. We have created a prototype implementation of these techniques inside the Xen VMM called Geiger and show that it can accurately infer when pages are inserted into and evicted from a system's buffer cache. We explore several nuances involved in passively implementing eviction detection that have not previously been addressed, such as the importance of tracking disk block liveness, the effect of file system journaling, and the importance of accounting for the unified caches found in modern operating systems.Using case studies we show that the information provided by Geiger enables a VMM to implement useful VMM-level services. We implement a novel working set size estimator which allows the VMM to make more informed memory allocation decisions. We also show that a VMM can be used to drastically improve the hit rate in remote storage caches by using eviction-based cache placement without modifying the application or operating system storage interface. Both case studies hint at a future where inference techniques enable a broad new class of VMM-level functionality.",
"title": ""
},
{
"docid": "2a56b6e6dcab0817e6ab4dfa8826fc49",
"text": "Considerable data and analysis support the detection of one or more supernovae (SNe) at a distance of about 50 pc, ∼2.6 million years ago. This is possibly related to the extinction event around that time and is a member of a series of explosions that formed the Local Bubble in the interstellar medium. We build on previous work, and propagate the muon flux from SN-initiated cosmic rays from the surface to the depths of the ocean. We find that the radiation dose from the muons will exceed the total present surface dose from all sources at depths up to 1 km and will persist for at least the lifetime of marine megafauna. It is reasonable to hypothesize that this increase in radiation load may have contributed to a newly documented marine megafaunal extinction at that time.",
"title": ""
},
{
"docid": "764a1d2571ed45dd56aea44efd4f5091",
"text": "BACKGROUND\nThere exists some ambiguity regarding the exact anatomical limits of the orbicularis retaining ligament, particularly its medial boundary in both the superior and inferior orbits. Precise understanding of this anatomy is necessary during periorbital rejuvenation.\n\n\nMETHODS\nSixteen fresh hemifacial cadaver dissections were performed in the anatomy laboratory to evaluate the anatomy of the orbicularis retaining ligament. Dissection was assisted by magnification with loupes and the operating microscope.\n\n\nRESULTS\nA ligamentous system was found that arises from the inferior and superior orbital rim that is truly periorbital. This ligament spans the entire circumference of the orbit from the medial to the lateral canthus. There exists a fusion line between the orbital septum and the orbicularis retaining ligament in the superior orbit, indistinguishable from the arcus marginalis of the inferior orbital rim. Laterally, the orbicularis retaining ligament contributes to the lateral canthal ligament, consistent with previous studies. No contribution to the medial canthus was identified in this study.\n\n\nCONCLUSIONS\nThe orbicularis retaining ligament is a true, circumferential \"periorbital\" structure. This ligament may serve two purposes: (1) to act as a fixation point for the orbicularis muscle of the upper and lower eyelids and (2) to protect the ocular globe. With techniques of periorbital injection with fillers and botulinum toxin becoming ever more popular, understanding the orbicularis retaining ligament's function as a partitioning membrane is mandatory for avoiding ocular complications. As a support structure, examples are shown of how manipulation of this ligament may benefit canthopexy, septal reset, and brow-lift procedures as described by Hoxworth.",
"title": ""
}
] | scidocsrr |
8a69f2cdc23badb693bf45b084f5a6b8 | Forecasting time series with complex seasonal patterns using exponential smoothing | [
{
"docid": "ca29fee64e9271e8fce675e970932af1",
"text": "This paper considers univariate online electricity demand forecasting for lead times from a half-hour-ahead to a day-ahead. A time series of demand recorded at half-hourly intervals contains more than one seasonal pattern. A within-day seasonal cycle is apparent from the similarity of the demand profile from one day to the next, and a within-week seasonal cycle is evident when one compares the demand on the corresponding day of adjacent weeks. There is strong appeal in using a forecasting method that is able to capture both seasonalities. The multiplicative seasonal ARIMA model has been adapted for this purpose. In this paper, we adapt the Holt-Winters exponential smoothing formulation so that it can accommodate two seasonalities. We correct for residual autocorrelation using a simple autoregressive model. The forecasts produced by the new double seasonal Holt-Winters method outperform those from traditional Holt-Winters and from a well-specified multiplicative double seasonal ARIMA model.",
"title": ""
}
] | [
{
"docid": "b1d2def5ce60ff9e787eb32a3b0431a6",
"text": "OSHA Region VIII office and the HBA of Metropolitan Denver who made this research possible and the Centers for Disease Control and Prevention, the National Institute for Occupational Safety and Health (NIOSH) for their support and funding via the awards 1 R03 OH04199-0: Occupational Low Back Pain in Residential Carpentry: Ergonomic Elements of Posture and Strain within the HomeSafe Pilot Program sponsored by OSHA and the HBA. Correspondence and requests for offprints should be sent to David P. Gilkey, Department of Environmental and Radiological Health Sciences, Colorado State University, Ft. Collins, CO 80523-1681, USA. E-mail: <[email protected]>. Low Back Pain Among Residential Carpenters: Ergonomic Evaluation Using OWAS and 2D Compression Estimation",
"title": ""
},
{
"docid": "cfd3548d7cf15b411b49eb77543d7903",
"text": "INTRODUCTION\nLiquid injectable silicone (LIS) has been used for soft tissue augmentation in excess of 50 years. Until recently, all literature on penile augmentation with LIS consisted of case reports or small cases series, most involving surgical intervention to correct the complications of LIS. New formulations of LIS and new methodologies for injection have renewed interest in this procedure.\n\n\nAIM\nWe reported a case of penile augmentation with LIS and reviewed the pertinent literature.\n\n\nMETHODS\nComprehensive literature review was performed using PubMed. We performed additional searches based on references from relevant review articles.\n\n\nRESULTS\nInjection of medical grade silicone for soft tissue augmentation has a role in carefully controlled study settings. Historically, the use of LIS for penile augmentation has had poor outcomes and required surgical intervention to correct complications resulting from LIS.\n\n\nCONCLUSIONS\nWe currently discourage the use of LIS for penile augmentation until carefully designed and evaluated trials have been completed.",
"title": ""
},
{
"docid": "e33129014269c9cf1579c5912f091916",
"text": "Cloud service brokerage has been identified as a key concern for future cloud technology development and research. We compare service brokerage solutions. A range of specific concerns like architecture, programming and quality will be looked at. We apply a 2-pronged classification and comparison framework. We will identify challenges and wider research objectives based on an identification of cloud broker architecture concerns and technical requirements for service brokerage solutions. We will discuss complex cloud architecture concerns such as commoditisation and federation of integrated, vertical cloud stacks.",
"title": ""
},
{
"docid": "4f42f1a6a9804f292b81313d9e8e04bf",
"text": "An integrated high performance, highly reliable, scalable, and secure communications network is critical for the successful deployment and operation of next-generation electricity generation, transmission, and distribution systems — known as “smart grids.” Much of the work done to date to define a smart grid communications architecture has focused on high-level service requirements with little attention to implementation challenges. This paper investigates in detail a smart grid communication network architecture that supports today's grid applications (such as supervisory control and data acquisition [SCADA], mobile workforce communication, and other voice and data communication) and new applications necessitated by the introduction of smart metering and home area networking, support of demand response applications, and incorporation of renewable energy sources in the grid. We present design principles for satisfying the diverse quality of service (QoS) and reliability requirements of smart grids.",
"title": ""
},
{
"docid": "c724224060408a1e13b135cb7c2bb9e4",
"text": "Large datasets are increasingly common and are often difficult to interpret. Principal component analysis (PCA) is a technique for reducing the dimensionality of such datasets, increasing interpretability but at the same time minimizing information loss. It does so by creating new uncorrelated variables that successively maximize variance. Finding such new variables, the principal components, reduces to solving an eigenvalue/eigenvector problem, and the new variables are defined by the dataset at hand, not a priori, hence making PCA an adaptive data analysis technique. It is adaptive in another sense too, since variants of the technique have been developed that are tailored to various different data types and structures. This article will begin by introducing the basic ideas of PCA, discussing what it can and cannot do. It will then describe some variants of PCA and their application.",
"title": ""
},
{
"docid": "f296b374b635de4f4c6fc9c6f415bf3e",
"text": "People increasingly use the Internet for obtaining information regarding diseases, diagnoses and available treatments. Currently, many online health portals already provide non-personalized health information in the form of articles. However, it can be challenging to find information relevant to one's condition, interpret this in context, and understand the medical terms and relationships. Recommender Systems (RS) already help these systems perform precise information filtering. In this short paper, we look one step ahead and show the progress made towards RS helping users find personalized, complex medical interventions or support them with preventive healthcare measures. We identify key challenges that need to be addressed for RS to offer the kind of decision support needed in high-risk domains like healthcare.",
"title": ""
},
{
"docid": "8c51c464d9137eec4600a5df5c6b451a",
"text": "An increasing number of disasters (natural and man-made) with a large number of victims and significant social and economical losses are observed in the past few years. Although particular events can always be attributed to fate, it is improving the disaster management that have to contribute to decreasing damages and ensuring proper care for citizens in affected areas. Some of the lessons learned in the last several years give clear indications that the availability, management and presentation of geo-information play a critical role in disaster management. However, all the management techniques that are being developed are understood by, and confined to the intellectual community and hence lack mass participation. Awareness of the disasters is the only effective way in which one can bring about mass participation. Hence, any disaster management is successful only when the general public has some awareness about the disaster. In the design of such awareness program, intelligent mapping through analysis and data sharing also plays a very vital role. The analytical capabilities of GIS support all aspects of disaster management: planning, response and recovery, and records management. The proposed GIS based awareness program in this paper would improve the currently practiced disaster management programs and if implemented, would result in a proper dosage of awareness and caution to the general public, which in turn would help to cope with the dangerous activities of disasters in future.",
"title": ""
},
{
"docid": "c2e0b234898df278ee57ae5827faadeb",
"text": "In this paper, we consider the problem of single image super-resolution and propose a novel algorithm that outperforms state-of-the-art methods without the need of learning patches pairs from external data sets. We achieve this by modeling images and, more precisely, lines of images as piecewise smooth functions and propose a resolution enhancement method for this type of functions. The method makes use of the theory of sampling signals with finite rate of innovation (FRI) and combines it with traditional linear reconstruction methods. We combine the two reconstructions by leveraging from the multi-resolution analysis in wavelet theory and show how an FRI reconstruction and a linear reconstruction can be fused using filter banks. We then apply this method along vertical, horizontal, and diagonal directions in an image to obtain a single-image super-resolution algorithm. We also propose a further improvement of the method based on learning from the errors of our super-resolution result at lower resolution levels. Simulation results show that our method outperforms state-of-the-art algorithms under different blurring kernels.",
"title": ""
},
{
"docid": "d612aeb7f7572345bab8609571f4030d",
"text": "In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples could vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function.1 When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%–5.4% absolute accuracy gains over the non-meta-learning counterparts.",
"title": ""
},
{
"docid": "f8d256bf6fea179847bfb4cc8acd986d",
"text": "We present a logic for stating properties such as, “after a request for service there is at least a 98% probability that the service will be carried out within 2 seconds”. The logic extends the temporal logic CTL by Emerson, Clarke and Sistla with time and probabilities. Formulas are interpreted over discrete time Markov chains. We give algorithms for checking that a given Markov chain satisfies a formula in the logic. The algorithms require a polynomial number of arithmetic operations, in size of both the formula and the Markov chain. A simple example is included to illustrate the algorithms.",
"title": ""
},
{
"docid": "cccecb08c92f8bcec4a359373a20afcb",
"text": "To solve the problem of the false matching and low robustness in detecting copy-move forgeries, a new method was proposed in this study. It involves the following steps: first, establish a Gaussian scale space; second, extract the orientated FAST key points and the ORB features in each scale space; thirdly, revert the coordinates of the orientated FAST key points to the original image and match the ORB features between every two different key points using the hamming distance; finally, remove the false matched key points using the RANSAC algorithm and then detect the resulting copy-move regions. The experimental results indicate that the new algorithm is effective for geometric transformation, such as scaling and rotation, and exhibits high robustness even when an image is distorted by Gaussian blur, Gaussian white noise and JPEG recompression; the new algorithm even has great detection on the type of hiding object forgery.",
"title": ""
},
{
"docid": "65b2d6ea5e1089c52378b4fd6386224c",
"text": "In traffic environment, conventional FMCW radar with triangular transmit waveform may bring out many false targets in multi-target situations and result in a high false alarm rate. An improved FMCW waveform and multi-target detection algorithm for vehicular applications is presented. The designed waveform in each small cycle is composed of two-segment: LFM section and constant frequency section. They have the same duration, yet in two adjacent small cycles the two LFM slopes are opposite sign and different size. Then the two adjacent LFM bandwidths are unequal. Within a determinate frequency range, the constant frequencies are modulated by a unique PN code sequence for different automotive radar in a big period. Corresponding to the improved waveform, which combines the advantages of both FSK and FMCW formats, a judgment algorithm is used in the continuous small cycle to further eliminate the false targets. The combination of unambiguous ranges and relative velocities can confirm and cancel most false targets in two adjacent small cycles.",
"title": ""
},
{
"docid": "9abd7aedf336f32abed7640dd3f4d619",
"text": "BACKGROUND\nAlthough evidence-based and effective treatments are available for people with depression, a substantial number does not seek or receive help. Therefore, it is important to gain a better understanding of the reasons why people do or do not seek help. This study examined what predisposing and need factors are associated with help-seeking among people with major depression.\n\n\nMETHODS\nA cross-sectional study was conducted in 102 subjects with major depression. Respondents were recruited from the general population in collaboration with three Municipal Health Services (GGD) across different regions in the Netherlands. Inclusion criteria were: being aged 18 years or older, a high score on a screening instrument for depression (K10 > 20), and a diagnosis of major depression established through the Composite International Diagnostic Interview (CIDI 2.1).\n\n\nRESULTS\nOf the total sample, 65 % (n = 66) had received help in the past six months. Results showed that respondents with a longer duration of symptoms and those with lower personal stigma were more likely to seek help. Other determinants were not significantly related to help-seeking.\n\n\nCONCLUSIONS\nLonger duration of symptoms was found to be an important determinant of help-seeking among people with depression. It is concerning that stigma was related to less help-seeking. Knowledge and understanding of depression should be promoted in society, hopefully leading to reduced stigma and increased help-seeking.",
"title": ""
},
{
"docid": "dc75c32aceb78acd8267e7af442b992c",
"text": "While pulmonary embolism (PE) causes approximately 100 000-180 000 deaths per year in the United States, mortality is restricted to patients who have massive or submassive PEs. This state of the art review familiarizes the reader with these categories of PE. The review discusses the following topics: pathophysiology, clinical presentation, rationale for stratification, imaging, massive PE management and outcomes, submassive PE management and outcomes, and future directions. It summarizes the most up-to-date literature on imaging, systemic thrombolysis, surgical embolectomy, and catheter-directed therapy for submassive and massive PE and gives representative examples that reflect modern practice. © RSNA, 2017.",
"title": ""
},
{
"docid": "25d913188ee5790d5b3a9f5fb8b68dda",
"text": "RPL, the routing protocol proposed by IETF for IPv6/6LoWPAN Low Power and Lossy Networks has significant complexity. Another protocol called LOADng, a lightweight variant of AODV, emerges as an alternative solution. In this paper, we compare the performance of the two protocols in a Home Automation scenario with heterogenous traffic patterns including a mix of multipoint-to-point and point-to-multipoint routes in realistic dense non-uniform network topologies. We use Contiki OS and Cooja simulator to evaluate the behavior of the ContikiRPL implementation and a basic non-optimized implementation of LOADng. Unlike previous studies, our results show that RPL provides shorter delays, less control overhead, and requires less memory than LOADng. Nevertheless, enhancing LOADng with more efficient flooding and a better route storage algorithm may improve its performance.",
"title": ""
},
{
"docid": "5124bfe94345f2abe6f91fe717731945",
"text": "Recently, IT trends such as big data, cloud computing, internet of things (IoT), 3D visualization, network, and so on demand terabyte/s bandwidth computer performance in a graphics card. In order to meet these performance, terabyte/s bandwidth graphics module using 2.5D-IC with high bandwidth memory (HBM) technology has been emerged. Due to the difference in scale of interconnect pitch between GPU or HBM and package substrate, the HBM interposer is certainly required for terabyte/s bandwidth graphics module. In this paper, the electrical performance of the HBM interposer channel in consideration of the manufacturing capabilities is analyzed by simulation both the frequency- and time-domain. Furthermore, although the silicon substrate is most widely employed for the HBM interposer fabrication, the organic and glass substrate are also proposed to replace the high cost and high loss silicon substrate. Therefore, comparison and analysis of the electrical performance of the HBM interposer channel using silicon, organic, and glass substrate are conducted.",
"title": ""
},
{
"docid": "342b57da0f0fcf190f926dfe0744977d",
"text": "Spike timing-dependent plasticity (STDP) as a Hebbian synaptic learning rule has been demonstrated in various neural circuits over a wide spectrum of species, from insects to humans. The dependence of synaptic modification on the order of pre- and postsynaptic spiking within a critical window of tens of milliseconds has profound functional implications. Over the past decade, significant progress has been made in understanding the cellular mechanisms of STDP at both excitatory and inhibitory synapses and of the associated changes in neuronal excitability and synaptic integration. Beyond the basic asymmetric window, recent studies have also revealed several layers of complexity in STDP, including its dependence on dendritic location, the nonlinear integration of synaptic modification induced by complex spike trains, and the modulation of STDP by inhibitory and neuromodulatory inputs. Finally, the functional consequences of STDP have been examined directly in an increasing number of neural circuits in vivo.",
"title": ""
},
{
"docid": "58fffa67053a82875177f32e126c2e43",
"text": "Cracking-resistant password vaults have been recently proposed with the goal of thwarting offline attacks. This requires the generation of synthetic password vaults that are statistically indistinguishable from real ones. In this work, we establish a conceptual link between this problem and steganography, where the stego objects must be undetectable among cover objects. We compare the two frameworks and highlight parallels and differences. Moreover, we transfer results obtained in the steganography literature into the context of decoy generation. Our results include the infeasibility of perfectly secure decoy vaults and the conjecture that secure decoy vaults are at least as hard to construct as secure steganography.",
"title": ""
},
{
"docid": "49a54c57984c3feaef32b708ae328109",
"text": "While it has a long history, the last 30 years have brought considerable advances to the discipline of forensic anthropology worldwide. Every so often it is essential that these advances are noticed and trends assessed. It is also important to identify those research areas that are needed for the forthcoming years. The purpose of this special issue is to examine some of the examples of research that might identify the trends in the 21st century. Of the 14 papers 5 dealt with facial features and identification such as facial profile determination and skull-photo superimposition. Age (fetus and cranial thickness), sex (supranasal region, arm and leg bones) and stature (from the arm bones) estimation were represented by five articles. Others discussed the estimation of time since death, skull color and diabetes, and a case study dealing with a mummy and skeletal analysis in comparison with DNA identification. These papers show that age, sex, and stature are still important issues of the discipline. Research on the human face is moving from hit and miss case studies to a more scientifically sound direction. A lack of studies on trauma and taphonomy is very clear. Anthropologists with other scientists can develop research areas to make the identification process more reliable. Research should include the assessment of animal attacks on human remains, factors affecting decomposition rates, and aging of the human face. Lastly anthropologists should be involved in the education of forensic pathologists about osteological techniques and investigators regarding archaeology of crime scenes.",
"title": ""
}
] | scidocsrr |
c46ef737772868f2a42597ffa10ec0c8 | Crowdsourcing and language studies: the new generation of linguistic data | [
{
"docid": "c6ad70b8b213239b0dd424854af194e2",
"text": "The neural mechanisms underlying the processing of conventional and novel conceptual metaphorical sentences were examined with event-related potentials (ERPs). Conventional metaphors were created based on the Contemporary Theory of Metaphor and were operationally defined as familiar and readily interpretable. Novel metaphors were unfamiliar and harder to interpret. Using a sensicality judgment task, we compared ERPs elicited by the same target word when it was used to end anomalous, novel metaphorical, conventional metaphorical and literal sentences. Amplitudes of the N400 ERP component (320-440 ms) were more negative for anomalous sentences, novel metaphors, and conventional metaphors compared with literal sentences. Within a later window (440-560 ms), ERPs associated with conventional metaphors converged to the same level as literal sentences while the novel metaphors stayed anomalous throughout. The reported results were compatible with models assuming an initial stage for metaphor mappings from one concept to another and that these mappings are cognitively taxing.",
"title": ""
},
{
"docid": "37913e0bfe44ab63c0c229c20b53c779",
"text": "The authors present several versions of a general model, titled the E-Z Reader model, of eye movement control in reading. The major goal of the modeling is to relate cognitive processing (specifically aspects of lexical access) to eye movements in reading. The earliest and simplest versions of the model (E-Z Readers 1 and 2) merely attempt to explain the total time spent on a word before moving forward (the gaze duration) and the probability of fixating a word; later versions (E-Z Readers 3-5) also attempt to explain the durations of individual fixations on individual words and the number of fixations on individual words. The final version (E-Z Reader 5) appears to be psychologically plausible and gives a good account of many phenomena in reading. It is also a good tool for analyzing eye movement data in reading. Limitations of the model and directions for future research are also discussed.",
"title": ""
},
{
"docid": "f66854fd8e3f29ae8de75fc83d6e41f5",
"text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.",
"title": ""
}
] | [
{
"docid": "6c93139f503e8a88fcc8292d64d5b5fb",
"text": "Chatbots use a database of responses often culled from a corpus of text generated for a different purpose, for example film scripts or interviews. One consequence of this approach is a mismatch between the data and the inputs generated by participants. We describe an approach that while starting from an existing corpus (of interviews) makes use of crowdsourced data to augment the response database, focusing on responses that people judge as inappropriate. The long term goal is to create a data set of more appropriate chat responses; the short term consequence appears to be the identification and replacement of particularly inappropriate responses. We found the version with the expanded database was rated significantly better in terms of the response level appropriateness and the overall ability to engage users. We also describe strategies we developed that target certain breakdowns discovered during data collection. Both the source code of the chatbot, TickTock, and the data collected are publicly available.",
"title": ""
},
{
"docid": "5c6401477feb7336d9e9eaf491fd5549",
"text": "Responses to domestic violence have focused, to date, primarily on intervention after the problem has already been identified and harm has occurred. There are, however, new domestic violence prevention strategies emerging, and prevention approaches from the public health field can serve as models for further development of these strategies. This article describes two such models. The first involves public health campaigns that identify and address the underlying causes of a problem. Although identifying the underlying causes of domestic violence is difficult--experts do not agree on causation, and several different theories exist--these theories share some common beliefs that can serve as a foundation for prevention strategies. The second public health model can be used to identify opportunities for domestic violence prevention along a continuum of possible harm: (1) primary prevention to reduce the incidence of the problem before it occurs; (2) secondary prevention to decrease the prevalence after early signs of the problem; and (3) tertiary prevention to intervene once the problem is already clearly evident and causing harm. Examples of primary prevention include school-based programs that teach students about domestic violence and alternative conflict-resolution skills, and public education campaigns to increase awareness of the harms of domestic violence and of services available to victims. Secondary prevention programs could include home visiting for high-risk families and community-based programs on dating violence for adolescents referred through child protective services (CPS). Tertiary prevention includes the many targeted intervention programs already in place (and described in other articles in this journal issue). Early evaluations of existing prevention programs show promise, but results are still preliminary and programs remain small, locally based, and scattered throughout the United States and Canada. What is needed is a broadly based, comprehensive prevention strategy that is supported by sound research and evaluation, receives adequate public backing, and is based on a policy of zero tolerance for domestic violence.",
"title": ""
},
{
"docid": "0b5ca91480dfff52de5c1d65c3b32f3d",
"text": "Spotting anomalies in large multi-dimensional databases is a crucial task with many applications in finance, health care, security, etc. We introduce COMPREX, a new approach for identifying anomalies using pattern-based compression. Informally, our method finds a collection of dictionaries that describe the norm of a database succinctly, and subsequently flags those points dissimilar to the norm---with high compression cost---as anomalies.\n Our approach exhibits four key features: 1) it is parameter-free; it builds dictionaries directly from data, and requires no user-specified parameters such as distance functions or density and similarity thresholds, 2) it is general; we show it works for a broad range of complex databases, including graph, image and relational databases that may contain both categorical and numerical features, 3) it is scalable; its running time grows linearly with respect to both database size as well as number of dimensions, and 4) it is effective; experiments on a broad range of datasets show large improvements in both compression, as well as precision in anomaly detection, outperforming its state-of-the-art competitors.",
"title": ""
},
{
"docid": "9bbc279974aaa899d12fee26948ce029",
"text": "Data-flow testing (DFT) is a family of testing strategies designed to verify the interactions between each program variable’s definition and its uses. Such a test objective of interest is referred to as a def-use pair. DFT selects test data with respect to various test adequacy criteria (i.e., data-flow coverage criteria) to exercise each pair. The original conception of DFT was introduced by Herman in 1976. Since then, a number of studies have been conducted, both theoretically and empirically, to analyze DFT’s complexity and effectiveness. In the past four decades, DFT has been continuously concerned, and various approaches from different aspects are proposed to pursue automatic and efficient data-flow testing. This survey presents a detailed overview of data-flow testing, including challenges and approaches in enforcing and automating it: (1) it introduces the data-flow analysis techniques that are used to identify def-use pairs; (2) it classifies and discusses techniques for data-flow-based test data generation, such as search-based testing, random testing, collateral-coverage-based testing, symbolic-execution-based testing, and model-checking-based testing; (3) it discusses techniques for tracking data-flow coverage; (4) it presents several DFT applications, including software fault localization, web security testing, and specification consistency checking; and (5) it summarizes recent advances and discusses future research directions toward more practical data-flow testing.",
"title": ""
},
{
"docid": "f9cf436f8b5c40598b2c24930c735c1b",
"text": "We present a joint theoretical and experimental investigation of the absorption spectra of silver clusters Ag(n) (4<or=n<or=22). The experimental spectra of clusters isolated in an Ar matrix are compared with the calculated ones in the framework of the time-dependent density functional theory. The analysis of the molecular transitions indicates that the s-electrons are responsible for the optical response of small clusters (n<or=8) while the d-electrons play a crucial role in the optical excitations for larger n values.",
"title": ""
},
{
"docid": "1014a33211c9ca3448fa02cf734a5775",
"text": "We propose a general method called truncated gradient to induce sparsity in the weights of online learning algorithms with convex loss functions. This method has several essential properties: 1. The degree of sparsity is continuous a parameter controls the rate of sparsi cation from no sparsi cation to total sparsi cation. 2. The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular L1-regularization method in the batch setting. We prove that small rates of sparsi cation result in only small additional regret with respect to typical online learning guarantees. 3. The approach works well empirically. We apply the approach to several datasets and nd that for datasets with large numbers of features, substantial sparsity is discoverable.",
"title": ""
},
{
"docid": "264338f11dbd4d883e791af8c15aeb0d",
"text": "With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learningbased 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose occupancy networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.",
"title": ""
},
{
"docid": "9c82588d5e82df20e2156ca1bda91f09",
"text": "Lean and simulation analysis are driven by the same objective, how to better design and improve processes making the companies more competitive. The adoption of lean has been widely spread in companies from public to private sectors and simulation is nowadays becoming more and more popular. Several authors have pointed out the benefits of combining simulation and lean, however, they are still rarely used together in practice. Optimization as an additional technique to this combination is even a more powerful approach especially when designing and improving complex processes with multiple conflicting objectives. This paper presents the mutual benefits that are gained when combining lean, simulation and optimization and how they overcome each other's limitations. A framework including the three concepts, some of the barriers for its implementation and a real-world industrial example are also described.",
"title": ""
},
{
"docid": "6cd317113158241a98517ad5a8247174",
"text": "Feature Oriented Programming (FOP) is an emerging paradigmfor application synthesis, analysis, and optimization. Atarget application is specified declaratively as a set of features,like many consumer products (e.g., personal computers,automobiles). FOP technology translates suchdeclarative specifications into efficient programs.",
"title": ""
},
{
"docid": "31338a16eca7c0f60b789c38f2774816",
"text": "As a promising area in artificial intelligence, a new learning paradigm, called Small Sample Learning (SSL), has been attracting prominent research attention in the recent years. In this paper, we aim to present a survey to comprehensively introduce the current techniques proposed on this topic. Specifically, current SSL techniques can be mainly divided into two categories. The first category of SSL approaches can be called “concept learning”, which emphasizes learning new concepts from only few related observations. The purpose is mainly to simulate human learning behaviors like recognition, generation, imagination, synthesis and analysis. The second category is called “experience learning”, which usually co-exists with the large sample learning manner of conventional machine learning. This category mainly focuses on learning with insufficient samples, and can also be called small data learning in some literatures. More extensive surveys on both categories of SSL techniques are introduced and some neuroscience evidences are provided to clarify the rationality of the entire SSL regime, and the relationship with human learning process. Some discussions on the main challenges and possible future research directions along this line are also presented.",
"title": ""
},
{
"docid": "7f5d032cc176ae27a5bcd9c601e3b9bd",
"text": "The grand challenge of neuromorphic computation is to develop a flexible brain-inspired architecture capable of a wide array of real-time applications, while striving towards the ultra-low power consumption and compact size of biological neural systems. Toward this end, we fabricated a building block of a modular neuromorphic architecture, a neurosynaptic core. Our implementation consists of 256 integrate-and-fire neurons and a 1,024×256 SRAM crossbar memory for synapses that fits in 4.2mm2 using a 45nm SOI process and consumes just 45pJ per spike. The core is fully configurable in terms of neuron parameters, axon types, and synapse states and its fully digital implementation achieves one-to-one correspondence with software simulation models. One-to-one correspondence allows us to introduce an abstract neural programming model for our chip, a contract guaranteeing that any application developed in software functions identically in hardware. This contract allows us to rapidly test and map applications from control, machine vision, and classification. To demonstrate, we present four test cases (i) a robot driving in a virtual environment, (ii) the classic game of pong, (iii) visual digit recognition and (iv) an autoassociative memory.",
"title": ""
},
{
"docid": "bafdfa2ecaeb18890ab8207ef1bc4f82",
"text": "This content analytic study investigated the approaches of two mainstream newspapers—The New York Times and the Chicago Tribune—to cover the gay marriage issue. The study used the Massachusetts legitimization of gay marriage as a dividing point to look at what kinds of specific political or social topics related to gay marriage were highlighted in the news media. The study examined how news sources were framed in the coverage of gay marriage, based upon the newspapers’ perspectives and ideologies. The results indicated that The New York Times was inclined to emphasize the topic of human equality related to the legitimization of gay marriage. After the legitimization, The New York Times became an activist for gay marriage. Alternatively, the Chicago Tribune highlighted the importance of human morality associated with the gay marriage debate. The perspective of the Chicago Tribune was not dramatically influenced by the legitimization. It reported on gay marriage in terms of defending American traditions and family values both before and after the gay marriage legitimization. Published by Elsevier Inc on behalf of Western Social Science Association. Gay marriage has been a controversial issue in the United States, especially since the Massachusetts Supreme Judicial Court officially authorized it. Although the practice has been widely discussed for several years, the acceptance of gay marriage does not seem to be concordant with mainstream American values. This is in part because gay marriage challenges the traditional value of the family institution. In the United States, people’s perspectives of and attitudes toward gay marriage have been mostly polarized. Many people optimistically ∗ Corresponding author. E-mail addresses: [email protected], [email protected] (P.-L. Pan). 0362-3319/$ – see front matter. Published by Elsevier Inc on behalf of Western Social Science Association. doi:10.1016/j.soscij.2010.02.002 P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 631 support gay legal rights and attempt to legalize it in as many states as possible, while others believe legalizing homosexuality may endanger American society and moral values. A number of forces and factors may expand this divergence between the two polarized perspectives, including family, religion and social influences. Mass media have a significant influence on socialization that cultivates individual’s belief about the world as well as affects individual’s values on social issues (Comstock & Paik, 1991). Moreover, news media outlets become a strong factor in influencing people’s perceptions of and attitudes toward gay men and lesbians because the news is one of the most powerful media to influence people’s attitudes toward gay marriage (Anderson, Fakhfakh, & Kondylis, 1999). Some mainstream newspapers are considered as media elites (Lichter, Rothman, & Lichter, 1986). Furthermore, numerous studies have demonstrated that mainstream newspapers would produce more powerful influences on people’s perceptions of public policies and political issues than television news (e.g., Brians & Wattenberg, 1996; Druckman, 2005; Eveland, Seo, & Marton, 2002) Gay marriage legitimization, a specific, divisive issue in the political and social dimensions, is concerned with several political and social issues that have raised fundamental questions about Constitutional amendments, equal rights, and American family values. 
The role of news media becomes relatively important while reporting these public debates over gay marriage, because not only do the news media affect people’s attitudes toward gays and lesbians by positively or negatively reporting the gay and lesbian issue, but also shape people’s perspectives of the same-sex marriage policy by framing the recognition of gay marriage in the news coverage. The purpose of this study is designed to examine how gay marriage news is described in the news coverage of The New York Times and the Chicago Tribune based upon their divisive ideological framings. 1. Literature review 1.1. Homosexual news coverage over time Until the 1940s, news media basically ignored the homosexual issue in the United States (Alwood, 1996; Bennett, 1998). According to Bennett (1998), of the 356 news stories about gays and lesbians that appeared in Time and Newsweek from 1947 to 1997, the Kinsey report on male sexuality published in 1948 was the first to draw reporters to the subject of homosexuality. From the 1940s to 1950s, the homosexual issue was reported as a social problem. Approximately 60% of the articles described homosexuals as a direct threat to the strength of the U.S. military, the security of the U.S. government, and the safety of ordinary Americans during this period. By the 1960s, the gay and lesbian issue began to be discussed openly in the news media. However, these portrayals were covered in the context of crime stories and brief items that ridiculed effeminate men or masculine women (Miller, 1991; Streitmatter, 1993). In 1963, a cover story, “Let’s Push Homophile Marriage,” was the first to treat gay marriage as a matter of winning legal recognition (Stewart-Winter, 2006). However, this cover story did not cause people to pay positive attention to gay marriage, but raised national debates between punishment and pity of homosexuals. Specifically speaking, although numerous arti632 P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 cles reported before the 1960s provided growing visibility for homosexuals, they were still highly critical of them (Bennett, 1998). In September 1967, the first hard-hitting gay newspaper—the Los Angeles Advocate—began publication. Different from other earlier gay and lesbian publications, its editorial mix consisted entirely of non-fiction materials, including news stories, editorials, and columns (Cruikshank, 1992; Streitmatter, 1993). The Advocate was the first gay publication to operate as an independent business financed entirely by advertising and circulation, rather than by subsidies from a membership organization (Streitmatter, 1995a, 1995b). After the Stonewall Rebellion in June 1969 in New York City ignited the modern phase of the gay and lesbian liberation movement, the number and circulation of the gay and lesbian press exploded (Streitmatter, 1998). Therefore, gay rights were discussed in the news media during the early 1970s. Homosexuals began to organize a series of political actions associated with gay rights, which was widely covered by the news media, while a backlash also appeared against the gay-rights movements, particularly among fundamentalist Christians (Alwood, 1996; Bennett, 1998). Later in the 1970s, the genre entered a less political phrase by exploring the dimensions of the developing culture of gay and lesbian. 
The news media plumbed the breadth and depth of topics ranging from the gay and lesbian sensibility in art and literature to sex, spirituality, personal appearance, dyke separatism, lesbian mothers, drag queen, leather men, and gay bathhouses (Streitmatter, 1995b). In the 1980s, the gay and lesbian issue confronted a most formidable enemy when AIDS/HIV, one of the most devastating diseases in the history of medicine, began killing gay men at an alarming rate. Accordingly, AIDS/HIV became the biggest gay story reported by the news media. Numerous news media outlets linked the AIDS/HIV epidemic with homosexuals, which implied the notion of the promiscuous gay and lesbian lifestyle. The gays and lesbians, therefore, were described as a dangerous minority in the news media during the 1980s (Altman, 1986; Cassidy, 2000). In the 1990s, issues about the growing visibility of gays and lesbians and their campaign for equal rights were frequently covered in the news media, primarily because of AIDS and the debate over whether the ban on gays in the military should be lifted. The increasing visibility of gay people resulted in the emergence of lifestyle magazines (Bennett, 1998; Streitmatter, 1998). The Out, a lifestyle magazine based in New York City but circulated nationally, led the new phase, since its upscale design and fashion helped attract mainstream advertisers. This magazine, which devalued news in favor of stories on entertainment and fashions, became the first gay and lesbian publication sold in mainstream bookstores and featured on the front page of The New York Times (Streitmatter, 1998). From the late 1990s to the first few years of the 2000s, homosexuals were described as a threat to children’s development as well as a danger to family values in the news media. The legitimacy of same-sex marriage began to be discussed, because news coverage dominated the issue of same-sex marriage more frequently than before (Bennett, 1998). According to Gibson (2004), The New York Times first announced in August 2002 that its Sunday Styles section would begin publishing reports of same-sex commitment ceremonies along with the traditional heterosexual wedding announcements. Moreover, many newspapers joined this trend. Gibson (2004) found that not only the national newspapers, such as The New York Times, but also other regional newspapers, such as the Houston Chronicle and the Seattle Times, reported surprisingly large P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 633 number of news stories about the everyday lives of gays and lesbians, especially since the Massachusetts Supreme Judicial Court ruled in November 2003 that same-sex couples had the same right to marry as heterosexuals. Previous studies investigated the increased amount of news coverage of gay and lesbian issues in the past six decades, but they did not analyze how homosexuals are framed in the news media in terms of public debates on the gay marriage issue. These studies failed to examine how newspapers report this national debate on gay marriage as well as what kinds of news frames are used in reporting this controversial issue. 1.2. Framing gay and lesbian partnersh",
"title": ""
},
{
"docid": "fa3587a9f152db21ec7fe5e935ebf8ba",
"text": "Person re-identification has been usually solved as either the matching of single-image representation (SIR) or the classification of cross-image representation (CIR). In this work, we exploit the connection between these two categories of methods, and propose a joint learning frame-work to unify SIR and CIR using convolutional neural network (CNN). Specifically, our deep architecture contains one shared sub-network together with two sub-networks that extract the SIRs of given images and the CIRs of given image pairs, respectively. The SIR sub-network is required to be computed once for each image (in both the probe and gallery sets), and the depth of the CIR sub-network is required to be minimal to reduce computational burden. Therefore, the two types of representation can be jointly optimized for pursuing better matching accuracy with moderate computational cost. Furthermore, the representations learned with pairwise comparison and triplet comparison objectives can be combined to improve matching performance. Experiments on the CUHK03, CUHK01 and VIPeR datasets show that the proposed method can achieve favorable accuracy while compared with state-of-the-arts.",
"title": ""
},
{
"docid": "70038e828b49a4093f4375084c248fd6",
"text": "Use of reporter genes provides a convenient way to study the activity and regulation of promoters and examine the rate and control of gene transcription. Many reporter genes and transfection methods can be efficiently used for this purpose. To investigate gene regulation and signaling pathway interactions during ovarian follicle development, we have examined promoter activities of several key follicle-regulating genes in the mouse ovary. In this chapter, we describe use of luciferase and beta-galactosidase genes as reporters and a cationic liposome mediated cell transfection method for studying regulation of activin subunit- and estrogen receptor alpha (ERalpha)-promoter activities. We have demonstrated that estrogen suppresses activin subunit gene promoter activity while activin increases ERalpha promoter activity and increases functional ER activity, suggesting a reciprocal regulation between activin and estrogen signaling in the ovary. We also discuss more broadly some key considerations in the use of reporter genes and cell-based transfection assays in endocrine research.",
"title": ""
},
{
"docid": "56a490b515dc9be979a54f62db5d5bca",
"text": "We searched for quantitative trait loci (QTL) associated with the palm oil fatty acid composition of mature fruits of the oil palm E. guineensis Jacq. in comparison with its wild relative E. oleifera (H.B.K) Cortés. The oil palm cross LM2T x DA10D between two heterozygous parents was considered in our experiment as an intraspecific representative of E. guineensis. Its QTLs were compared to QTLs published for the same traits in an interspecific Elaeis pseudo-backcross used as an indirect representative of E. oleifera. Few correlations were found in E. guineensis between pulp fatty acid proportions and yield traits, allowing for the rather independent selection of both types of traits. Sixteen QTLs affecting palm oil fatty acid proportions and iodine value were identified in oil palm. The phenotypic variation explained by the detected QTLs was low to medium in E. guineensis, ranging between 10% and 36%. The explained cumulative variation was 29% for palmitic acid C16:0 (one QTL), 68% for stearic acid C18:0 (two QTLs), 50% for oleic acid C18:1 (three QTLs), 25% for linoleic acid C18:2 (one QTL), and 40% (two QTLs) for the iodine value. Good marker co-linearity was observed between the intraspecific and interspecific Simple Sequence Repeat (SSR) linkage maps. Specific QTL regions for several traits were found in each mapping population. Our comparative QTL results in both E. guineensis and interspecific materials strongly suggest that, apart from two common QTL zones, there are two specific QTL regions with major effects, which might be one in E. guineensis, the other in E. oleifera, which are independent of each other and harbor QTLs for several traits, indicating either pleiotropic effects or linkage. Using QTL maps connected by highly transferable SSR markers, our study established a good basis to decipher in the future such hypothesis at the Elaeis genus level.",
"title": ""
},
{
"docid": "1ce09062b1ced2cd643c04f7c075c4f1",
"text": "We propose a new approach to the task of fine grained entity type classifications based on label embeddings that allows for information sharing among related labels. Specifically, we learn an embedding for each label and each feature such that labels which frequently co-occur are close in the embedded space. We show that it outperforms state-of-the-art methods on two fine grained entity-classification benchmarks and that the model can exploit the finer-grained labels to improve classification of standard coarse types.",
"title": ""
},
{
"docid": "cba787c228bba0a0b94faa52e94ec3dc",
"text": "Purpose. The aim of the present prospective study was to investigate correlations between 3D facial soft tissue scan and lateral cephalometric radiography measurements. Materials and Methods. The study sample comprised 312 subjects of Caucasian ethnic origin. Exclusion criteria were all the craniofacial anomalies, noticeable asymmetries, and previous or current orthodontic treatment. A cephalometric analysis was developed employing 11 soft tissue landmarks and 14 sagittal and 14 vertical angular measurements corresponding to skeletal cephalometric variables. Cephalometric analyses on lateral cephalometric radiographies were performed for all subjects. The measurements were analysed in terms of their reliability and gender-age specific differences. Then, the soft tissue values were analysed for any correlations with lateral cephalometric radiography variables using Pearson correlation coefficient analysis. Results. Low, medium, and high correlations were found for sagittal and vertical measurements. Sagittal measurements seemed to be more reliable in providing a soft tissue diagnosis than vertical measurements. Conclusions. Sagittal parameters seemed to be more reliable in providing a soft tissue diagnosis similar to lateral cephalometric radiography. Vertical soft tissue measurements meanwhile showed a little less correlation with the corresponding cephalometric values perhaps due to the low reproducibility of cranial base and mandibular landmarks.",
"title": ""
},
{
"docid": "f5519eff0c13e0ee42245fdf2627b8ae",
"text": "An efficient vehicle tracking system is designed and implemented for tracking the movement of any equipped vehicle from any location at any time. The proposed system made good use of a popular technology that combines a Smartphone application with a microcontroller. This will be easy to make and inexpensive compared to others. The designed in-vehicle device works using Global Positioning System (GPS) and Global system for mobile communication / General Packet Radio Service (GSM/GPRS) technology that is one of the most common ways for vehicle tracking. The device is embedded inside a vehicle whose position is to be determined and tracked in real-time. A microcontroller is used to control the GPS and GSM/GPRS modules. The vehicle tracking system uses the GPS module to get geographic coordinates at regular time intervals. The GSM/GPRS module is used to transmit and update the vehicle location to a database. A Smartphone application is also developed for continuously monitoring the vehicle location. The Google Maps API is used to display the vehicle on the map in the Smartphone application. Thus, users will be able to continuously monitor a moving vehicle on demand using the Smartphone application and determine the estimated distance and time for the vehicle to arrive at a given destination. In order to show the feasibility and effectiveness of the system, this paper presents experimental results of the vehicle tracking system and some experiences on practical implementations.",
"title": ""
},
{
"docid": "95ff1a86eedad42b0d869cca0d7d6e33",
"text": "360° videos give viewers a spherical view and immersive experience of surroundings. However, one challenge of watching 360° videos is continuously focusing and re-focusing intended targets. To address this challenge, we developed two Focus Assistance techniques: Auto Pilot (directly bringing viewers to the target), and Visual Guidance (indicating the direction of the target). We conducted an experiment to measure viewers' video-watching experience and discomfort using these techniques and obtained their qualitative feedback. We showed that: 1) Focus Assistance improved ease of focus. 2) Focus Assistance techniques have specificity to video content. 3) Participants' preference of and experience with Focus Assistance depended not only on individual difference but also on their goal of watching the video. 4) Factors such as view-moving-distance, salience of the intended target and guidance, and language comprehension affected participants' video-watching experience. Based on these findings, we provide design implications for better 360° video focus assistance.",
"title": ""
}
] | scidocsrr |
08e7c93152438f6295877905b1ca7584 | Predicting Bike Usage for New York City's Bike Sharing System | [
{
"docid": "db422d1fcb99b941a43e524f5f2897c2",
"text": "AN INDIVIDUAL CORRELATION is a correlation in which the statistical object or thing described is indivisible. The correlation between color and illiteracy for persons in the United States, shown later in Table I, is an individual correlation, because the kind of thing described is an indivisible unit, a person. In an individual correlation the variables are descriptive properties of individuals, such as height, income, eye color, or race, and not descriptive statistical constants such as rates or means. In an ecological correlation the statistical object is a group of persons. The correlation between the percentage of the population which is Negro and the percentage of the population which is illiterate for the 48 states, shown later as Figure 2, is an ecological correlation. The thing described is the population of a state, and not a single individual. The variables are percentages, descriptive properties of groups, and not descriptive properties of individuals. Ecological correlations are used in an impressive number of quantitative sociological studies, some of which by now have attained the status of classics: Cowles’ ‘‘Statistical Study of Climate in Relation to Pulmonary Tuberculosis’’; Gosnell’s ‘‘Analysis of the 1932 Presidential Vote in Chicago,’’ Factorial and Correlational Analysis of the 1934 Vote in Chicago,’’ and the more elaborate factor analysis in Machine Politics; Ogburn’s ‘‘How women vote,’’ ‘‘Measurement of the Factors in the Presidential Election of 1928,’’ ‘‘Factors in the Variation of Crime Among Cities,’’ and Groves and Ogburn’s correlation analyses in American Marriage and Family Relationships; Ross’ study of school attendance in Texas; Shaw’s Delinquency Areas study of the correlates of delinquency, as well as The more recent analyses in Juvenile Delinquency in Urban Areas; Thompson’s ‘‘Some Factors Influencing the Ratios of Children to Women in American Cities, 1930’’; Whelpton’s study of the correlates of birth rates, in ‘‘Geographic and Economic Differentials in Fertility;’’ and White’s ‘‘The Relation of Felonies to Environmental Factors in Indianapolis.’’ Although these studies and scores like them depend upon ecological correlations, it is not because their authors are interested in correlations between the properties of areas as such. Even out-and-out ecologists, in studying delinquency, for example, rely primarily upon data describing individuals, not areas. In each study which uses ecological correlations, the obvious purpose is to discover something about the behavior of individuals. Ecological correlations are used simply because correlations between the properties of individuals are not available. In each instance, however, the substitution is made tacitly rather than explicitly. The purpose of this paper is to clarify the ecological correlation problem by stating, mathematically, the exact relation between ecological and individual correlations, and by showing the bearing of that relation upon the practice of using ecological correlations as substitutes for individual correlations.",
"title": ""
}
] | [
{
"docid": "3925371ff139ca9cd23222db78f8694a",
"text": "In this paper, we investigate how the Gauss–Newton Hessian matrix affects the basin of convergence in Newton-type methods. Although the Newton algorithm is theoretically superior to the Gauss–Newton algorithm and the Levenberg–Marquardt (LM) method as far as their asymptotic convergence rate is concerned, the LM method is often preferred in nonlinear least squares problems in practice. This paper presents a theoretical analysis of the advantage of the Gauss–Newton Hessian matrix. It is proved that the Gauss–Newton approximation function is the only nonnegative convex quadratic approximation that retains a critical property of the original objective function: taking the minimal value of zero on an (n − 1)-dimensional manifold (or affine subspace). Due to this property, the Gauss–Newton approximation does not change the zero-on-(n − 1)-D “structure” of the original problem, explaining the reason why the Gauss–Newton Hessian matrix is preferred for nonlinear least squares problems, especially when the initial point is far from the solution.",
"title": ""
},
{
"docid": "b917ec2f16939a819625b6750597c40c",
"text": "In an increasing number of scientific disciplines, large data collections are emerging as important community resources. In domains as diverse as global climate change, high energy physics, and computational genomics, the volume of interesting data is already measured in terabytes and will soon total petabytes. The communities of researchers that need to access and analyze this data (often using sophisticated and computationally expensive techniques) are often large and are almost always geographically distributed, as are the computing and storage resources that these communities rely upon to store and analyze their data [17]. This combination of large dataset size, geographic distribution of users and resources, and computationally intensive analysis results in complex and stringent performance demands that are not satisfied by any existing data management infrastructure. A large scientific collaboration may generate many queries, each involving access to—or supercomputer-class computations on—gigabytes or terabytes of data. Efficient and reliable execution of these queries may require careful management of terabyte caches, gigabit/s data transfer over wide area networks, coscheduling of data transfers and supercomputer computation, accurate performance estimations to guide the selection of dataset replicas, and other advanced techniques that collectively maximize use of scarce storage, networking, and computing resources. The literature offers numerous point solutions that address these issues (e.g., see [17, 14, 19, 3]). But no integrating architecture exists that allows us to identify requirements and components common to different systems and hence apply different technologies in a coordinated fashion to a range of dataintensive petabyte-scale application domains. Motivated by these considerations, we have launched a collaborative effort to design and produce such an integrating architecture. We call this architecture the data grid, to emphasize its role as a specialization and extension of the “Grid” that has emerged recently as an integrating infrastructure for distributed computation [10, 20, 15]. Our goal in this effort is to define the requirements that a data grid must satisfy and the components and APIs that will be required in its implementation. We hope that the definition of such an architecture will accelerate progress on petascale data-intensive computing by enabling the integration of currently disjoint approaches, encouraging the deployment of basic enabling technologies, and revealing technology gaps that require further research and development. In addition, we plan to construct a reference implementation for this architecture so as to enable large-scale experimentation.",
"title": ""
},
{
"docid": "14a45e3e7aadee56b7d2e28c692aba9f",
"text": "Radiation therapy as a mode of cancer treatment is well-established. Telecobalt and telecaesium units were used extensively during the early days. Now, medical linacs offer more options for treatment delivery. However, such systems are prohibitively expensive and beyond the reach of majority of the worlds population living in developing and under-developed countries. In India, there is shortage of cancer treatment facilities, mainly due to the high cost of imported machines. Realizing the need of technology for affordable radiation therapy machines, Bhabha Atomic Research Centre (BARC), the premier nuclear research institute of Government of India, started working towards a sophisticated telecobalt machine. The Bhabhatron is the outcome of the concerted efforts of BARC and Panacea Medical Technologies Pvt. Ltd., India. It is not only less expensive, but also has a number of advanced features. It incorporates many safety and automation features hitherto unavailable in the most advanced telecobalt machine presently available. This paper describes various features available in Bhabhatron-II. The authors hope that this machine has the potential to make safe and affordable radiation therapy accessible to the common people in India as well as many other countries.",
"title": ""
},
{
"docid": "15f75935c0a17f52790be930d656d171",
"text": "It is a well-known issue that attack primitives which exploit memory corruption vulnerabilities can abuse the ability of processes to automatically restart upon termination. For example, network services like FTP and HTTP servers are typically restarted in case a crash happens and this can be used to defeat Address Space Layout Randomization (ASLR). Furthermore, recently several techniques evolved that enable complete process memory scanning or code-reuse attacks against diversified and unknown binaries based on automated restarts of server applications. Until now, it is believed that client applications are immune against exploit primitives utilizing crashes. Due to their hard crash policy, such applications do not restart after memory corruption faults, making it impossible to touch memory more than once with wrong permissions. In this paper, we show that certain client application can actually survive crashes and are able to tolerate faults, which are normally critical and force program termination. To this end, we introduce a crash-resistance primitive and develop a novel memory scanning method with memory oracles without the need for control-flow hijacking. We show the practicability of our methods for 32-bit Internet Explorer 11 on Windows 8.1, and Mozilla Firefox 64-bit (Windows 8.1 and Linux 3.17.1). Furthermore, we demonstrate the advantages an attacker gains to overcome recent code-reuse defenses. Latest advances propose fine-grained re-randomization of the address space and code layout, or hide sensitive information such as code pointers to thwart tampering or misuse. We show that these defenses need improvements since crash-resistance weakens their security assumptions. To this end, we introduce the concept of CrashResistant Oriented Programming (CROP). We believe that our results and the implications of memory oracles will contribute to future research on defensive schemes against code-reuse attacks.",
"title": ""
},
{
"docid": "c56d4eff5b23f804834c698e77f3d806",
"text": " In many applications within the engineering world, an isolated generator is needed (e.g. in ships). Diesel units (diesel engine and synchronous generator) are the most common solution. However, the diesel engine can be eliminated if the energy from another source (e.g. the prime mover in a ship) is used to move the generator. This is the case for the Shaft Coupled Generator, where the coupling between the mover and the generator is made via a hydrostatic transmission. So that the mover can have different speeds and the generator is able to keep a constant frequency. The main problem of this system is the design of a speed governor that make possible the desired behaviour. In this paper a simulation model is presented in order to analyse the behaviour of this kind of systems and to help in the speed governor design. The model is achieved with an parameter identification process also depicted in the paper. A comparison between simulation results and measurements is made to shown the model validity. KeywordsModelling, Identification, Hydrostatic Transmission.",
"title": ""
},
{
"docid": "83da776714bf49c3bbb64976d20e26a2",
"text": "Orthogonal frequency division multiplexing (OFDM) has been widely adopted in modern wireless communication systems due to its robustness against the frequency selectivity of wireless channels. For coherent detection, channel estimation is essential for receiver design. Channel estimation is also necessary for diversity combining or interference suppression where there are multiple receive antennas. In this paper, we will present a survey on channel estimation for OFDM. This survey will first review traditional channel estimation approaches based on channel frequency response (CFR). Parametric model (PM)-based channel estimation, which is particularly suitable for sparse channels, will be also investigated in this survey. Following the success of turbo codes and low-density parity check (LDPC) codes, iterative processing has been widely adopted in the design of receivers, and iterative channel estimation has received a lot of attention since that time. Iterative channel estimation will be emphasized in this survey as the emerging iterative receiver improves system performance significantly. The combination of multiple-input multiple-output (MIMO) and OFDM has been widely accepted in modern communication systems, and channel estimation in MIMO-OFDM systems will also be addressed in this survey. Open issues and future work are discussed at the end of this paper.",
"title": ""
},
{
"docid": "9e11005f60aa3f53481ac3543a18f32f",
"text": "Deep residual networks (ResNets) have significantly pushed forward the state-ofthe-art on image classification, increasing in performance as networks grow both deeper and wider. However, memory consumption becomes a bottleneck, as one needs to store the activations in order to calculate gradients using backpropagation. We present the Reversible Residual Network (RevNet), a variant of ResNets where each layer’s activations can be reconstructed exactly from the next layer’s. Therefore, the activations for most layers need not be stored in memory during backpropagation. We demonstrate the effectiveness of RevNets on CIFAR-10, CIFAR-100, and ImageNet, establishing nearly identical classification accuracy to equally-sized ResNets, even though the activation storage requirements are independent of depth.",
"title": ""
},
{
"docid": "9dfef5bc76b78e7577b9eb377b830a9e",
"text": "Patients with Parkinson's disease may have difficulties in speaking because of the reduced coordination of the muscles that control breathing, phonation, articulation and prosody. Symptoms that may occur because of changes are weakening of the volume of the voice, voice monotony, changes in the quality of the voice, speed of speech, uncontrolled repetition of words. The evaluation of some of the disorders mentioned can be achieved through measuring the variation of parameters in an objective manner. It may be done to evaluate the response to the treatments with intra-daily frequency pre / post-treatment, as well as in the long term. Software systems allow these measurements also by recording the patient's voice. This allows to carry out a large number of tests by means of a larger number of patients and a higher frequency of the measurements. The main goal of our work was to design and realize Voxtester, an effective and simple to use software system useful to measure whether changes in voice emission are sensitive to pharmacologic treatments. Doctors and speech therapists can easily use it without going into the technical details, and we think that this goal is reached only by Voxtester, up to date.",
"title": ""
},
{
"docid": "e62ad0c67fa924247f05385bda313a38",
"text": "Artificial neural networks have been recognized as a powerful tool for pattern classification problems, but a number of researchers have also suggested that straightforward neural-network approaches to pattern recognition are largely inadequate for difficult problems such as handwritten numeral recognition. In this paper, we present three sophisticated neural-network classifiers to solve complex pattern recognition problems: multiple multilayer perceptron (MLP) classifier, hidden Markov model (HMM)/MLP hybrid classifier, and structure-adaptive self-organizing map (SOM) classifier. In order to verify the superiority of the proposed classifiers, experiments were performed with the unconstrained handwritten numeral database of Concordia University, Montreal, Canada. The three methods have produced 97.35%, 96.55%, and 96.05% of the recognition rates, respectively, which are better than those of several previous methods reported in the literature on the same database.",
"title": ""
},
{
"docid": "b7bf7d430e4132a4d320df3a155ee74c",
"text": "We present Wave menus, a variant of multi-stroke marking menus designed for improving the novice mode of marking while preserving their efficiency in the expert mode of marking. Focusing on the novice mode, a criteria-based analysis of existing marking menus motivates the design of Wave menus. Moreover a user experiment is presented that compares four hierarchical marking menus in novice mode. Results show that Wave and compound-stroke menus are significantly faster and more accurate than multi-stroke menus in novice mode, while it has been shown that in expert mode the multi-stroke menus and therefore the Wave menus outperform the compound-stroke menus. Wave menus also require significantly less screen space than compound-stroke menus. As a conclusion, Wave menus offer the best performance for both novice and expert modes in comparison with existing multi-level marking menus, while requiring less screen space than compound-stroke menus.",
"title": ""
},
{
"docid": "b2a670d90d53825c53d8ce0082333db6",
"text": "Social media platforms facilitate the emergence of citizen communities that discuss real-world events. Their content reflects a variety of intent ranging from social good (e.g., volunteering to help) to commercial interest (e.g., criticizing product features). Hence, mining intent from social data can aid in filtering social media to support organizations, such as an emergency management unit for resource planning. However, effective intent mining is inherently challenging due to ambiguity in interpretation, and sparsity of relevant behaviors in social data. In this paper, we address the problem of multiclass classification of intent with a use-case of social data generated during crisis events. Our novel method exploits a hybrid feature representation created by combining top-down processing using knowledge-guided patterns with bottom-up processing using a bag-of-tokens model. We employ pattern-set creation from a variety of knowledge sources including psycholinguistics to tackle the ambiguity challenge, social behavior about conversations to enrich context, and contrast patterns to tackle the sparsity challenge. Our results show a significant absolute gain up to 7% in the F1 score relative to a baseline using bottom-up processing alone, within the popular multiclass frameworks of One-vs-One and One-vs-All. Intent mining can help design efficient cooperative information systems between citizens and organizations for serving organizational information needs.",
"title": ""
},
{
"docid": "0d78cb5ff93351db949ffc1c01c3d540",
"text": "Self-Organizing Map is an unsupervised neural network which combines vector quantization and vector projection. This makes it a powerful visualization tool. SOM Toolbox implements the SOM in the Matlab 5 computing environment. In this paper, computational complexity of SOM and the applicability of the Toolbox are investigated. It is seen that the Toolbox is easily applicable to small data sets (under 10000 records) but can also be applied in case of medium sized data sets. The prime limiting factor is map size: the Toolbox is mainly suitable for training maps with 1000 map units or less.",
"title": ""
},
{
"docid": "88b167a7eb0debcd5c5e0f5f5605a14b",
"text": "Understanding language requires both linguistic knowledge and knowledge about how the world works, also known as common-sense knowledge. We attempt to characterize the kinds of common-sense knowledge most often involved in recognizing textual entailments. We identify 20 categories of common-sense knowledge that are prevalent in textual entailment, many of which have received scarce attention from researchers building collections of knowledge.",
"title": ""
},
{
"docid": "dd9e3513c4be6100b5d3b3f25469f028",
"text": "Software testing is the process to uncover requirement, design and coding errors in the program. It is used to identify the correctness, completeness, security and quality of software products against a specification. Software testing is the process used to measure the quality of developed computer software. It exhibits all mistakes, errors and flaws in the developed software. There are many approaches to software testing, but effective testing of complex product is essentially a process of investigation, not merely a matter of creating and following route procedure. It is not possible to find out all the errors in the program. This fundamental problem in testing thus throws an open question, as to what would be the strategy we should adopt for testing. In our paper, we have described and compared the three most prevalent and commonly used software testing techniques for detecting errors, they are: white box testing, black box testing and grey box testing. KeywordsBlack Box; Grey Box; White Box.",
"title": ""
},
{
"docid": "a931f939e2e0c0f2f8940796ee23e957",
"text": "PURPOSE OF REVIEW\nMany patients requiring cardiac arrhythmia device surgery are on chronic oral anticoagulation therapy. The periprocedural management of their anticoagulation presents a dilemma to physicians, particularly in the subset of patients with moderate-to-high risk of arterial thromboembolic events. Physicians have responded by treating patients with bridging anticoagulation while oral anticoagulation is temporarily discontinued. However, there are a number of downsides to bridging anticoagulation around device surgery; there is a substantial risk of significant device pocket hematoma with important clinical sequelae; bridging anticoagulation may lead to more arterial thromboembolic events and bridging anticoagulation is expensive.\n\n\nRECENT FINDINGS\nIn response to these issues, a number of centers have explored the option of performing device surgery without cessation of oral anticoagulation. The observational data suggest a greatly reduced hematoma rate with this strategy. Despite these encouraging results, most physicians are reluctant to move to operating on continued Coumadin in the absence of confirmatory data from a randomized trial.\n\n\nSUMMARY\nWe have designed a prospective, single-blind, randomized, controlled trial to address this clinical question. In the conventional arm, patients will be bridged. In the experimental arm, patients will continue on oral anticoagulation and the primary outcome is clinically significant hematoma. Our study has clinical relevance to at least 70 000 patients per year in North America.",
"title": ""
},
{
"docid": "ecb93affc7c9b0e4bf86949d3f2006d4",
"text": "We present data-dependent learning bounds for the general scenario of non-stationary nonmixing stochastic processes. Our learning guarantees are expressed in terms of a datadependent measure of sequential complexity and a discrepancy measure that can be estimated from data under some mild assumptions. We also also provide novel analysis of stable time series forecasting algorithm using this new notion of discrepancy that we introduce. We use our learning bounds to devise new algorithms for non-stationary time series forecasting for which we report some preliminary experimental results. An extended abstract has appeared in (Kuznetsov and Mohri, 2015).",
"title": ""
},
{
"docid": "d5ca07ff7bf01edcebb81dad6bff3a22",
"text": "The goals of our work are twofold: gain insight into how humans interact with complex data and visualizations thereof in order to make discoveries; and use our findings to develop a dialogue system for exploring data visualizations. Crucial to both goals is understanding and modeling of multimodal referential expressions, in particular those that include deictic gestures. In this paper, we discuss how context information affects the interpretation of requests and their attendant referring expressions in our data. To this end, we have annotated our multimodal dialogue corpus for context and both utterance and gesture information; we have analyzed whether a gesture co-occurs with a specific request or with the context surrounding the request; we have started addressing multimodal co-reference resolution by using Kinect to detect deictic gestures; and we have started identifying themes found in the annotated context, especially in what follows the request.",
"title": ""
},
{
"docid": "8b08fbd7610e68e39026011fec7034ec",
"text": "Smart grid initiatives will produce a grid that is increasingly dependent on its cyber infrastructure in order to support the numerous power applications necessary to provide improved grid monitoring and control capabilities. However, recent findings documented in government reports and other literature, indicate the growing threat of cyber-based attacks in numbers and sophistication targeting the nation's electric grid and other critical infrastructures. Specifically, this paper discusses cyber-physical security of Wide-Area Monitoring, Protection and Control (WAMPAC) from a coordinated cyber attack perspective and introduces a game-theoretic approach to address the issue. Finally, the paper briefly describes how cyber-physical testbeds can be used to evaluate the security research and perform realistic attack-defense studies for smart grid type environments.",
"title": ""
},
{
"docid": "b19aab238e0eafef52974a87300750a3",
"text": "This paper introduces a method to detect a fault associated with critical components/subsystems of an engineered system. It is required, in this case, to detect the fault condition as early as possible, with specified degree of confidence and a prescribed false alarm rate. Innovative features of the enabling technologies include a Bayesian estimation algorithm called particle filtering, which employs features or condition indicators derived from sensor data in combination with simple models of the system's degrading state to detect a deviation or discrepancy between a baseline (no-fault) distribution and its current counterpart. The scheme requires a fault progression model describing the degrading state of the system in the operation. A generic model based on fatigue analysis is provided and its parameters adaptation is discussed in detail. The scheme provides the probability of abnormal condition and the presence of a fault is confirmed for a given confidence level. The efficacy of the proposed approach is illustrated with data acquired from bearings typically found on aircraft and monitored via a properly instrumented test rig.",
"title": ""
},
{
"docid": "ed1a3ca3e558eeb33e2841fa4b9c28d2",
"text": "© 2010 ETRI Journal, Volume 32, Number 4, August 2010 In this paper, we present a low-voltage low-dropout voltage regulator (LDO) for a system-on-chip (SoC) application which, exploiting the multiplication of the Miller effect through the use of a current amplifier, is frequency compensated up to 1-nF capacitive load. The topology and the strategy adopted to design the LDO and the related compensation frequency network are described in detail. The LDO works with a supply voltage as low as 1.2 V and provides a maximum load current of 50 mA with a drop-out voltage of 200 mV: the total integrated compensation capacitance is about 40 pF. Measurement results as well as comparison with other SoC LDOs demonstrate the advantage of the proposed topology.",
"title": ""
}
] | scidocsrr |
36f97bf0a09158177f72c49d2613db44 | Automatic Sentiment Analysis for Unstructured Data | [
{
"docid": "a178871cd82edaa05a0b0befacb7fc38",
"text": "The main applications and challenges of one of the hottest research areas in computer science.",
"title": ""
}
] | [
{
"docid": "7e7d4a3ab8fe57c6168835fa1ab3b413",
"text": "Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multicore CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics.",
"title": ""
},
{
"docid": "1186fa429d435d0e2009e8b155cf92cc",
"text": "Recommender Systems are software tools and techniques for suggesting items to users by considering their preferences in an automated fashion. The suggestions provided are aimed at support users in various decisionmaking processes. Technically, recommender system has their origins in different fields such as Information Retrieval (IR), text classification, machine learning and Decision Support Systems (DSS). Recommender systems are used to address the Information Overload (IO) problem by recommending potentially interesting or useful items to users. They have proven to be worthy tools for online users to deal with the IO and have become one of the most popular and powerful tools in E-commerce. Many existing recommender systems rely on the Collaborative Filtering (CF) and have been extensively used in E-commerce .They have proven to be very effective with powerful techniques in many famous E-commerce companies. This study presents an overview of the field of recommender systems with current generation of recommendation methods and examines comprehensively CF systems with its algorithms.",
"title": ""
},
{
"docid": "06597c7f7d76cb3749d13b597b903570",
"text": "2.1 Summary ............................................... 5 2.2 Definition .............................................. 6 2.3 History ................................................... 6 2.4 Overview of Currently Used Classification Systems and Terminology 7 2.5 Currently Used Terms in Classification of Osteomyelitis of the Jaws .................. 11 2.5.1 Acute/Subacute Osteomyelitis .............. 11 2.5.2 Chronic Osteomyelitis ........................... 11 2.5.3 Chronic Suppurative Osteomyelitis: Secondary Chronic Osteomyelitis .......... 11 2.5.4 Chronic Non-suppurative Osteomyelitis 11 2.5.5 Diffuse Sclerosing Osteomyelitis, Primary Chronic Osteomyelitis, Florid Osseous Dysplasia, Juvenile Chronic Osteomyelitis ............. 11 2.5.6 SAPHO Syndrome, Chronic Recurrent Multifocal Osteomyelitis (CRMO) ........... 13 2.5.7 Periostitis Ossificans, Garrès Osteomyelitis ............................. 13 2.5.8 Other Commonly Used Terms ................ 13 2.6 Osteomyelitis of the Jaws: The Zurich Classification System ........... 16 2.6.1 General Aspects of the Zurich Classification System ............................. 16 2.6.2 Acute Osteomyelitis and Secondary Chronic Osteomyelitis ........................... 17 2.6.3 Clinical Presentation ............................. 26 2.6.4 Primary Chronic Osteomyelitis .............. 34 2.7 Differential Diagnosis ............................ 48 2.7.1 General Considerations ......................... 48 2.7.2 Differential Diagnosis of Acute and Secondary Chronic Osteomyelitis ... 50 2.7.3 Differential Diagnosis of Primary Chronic Osteomyelitis ........................... 50 2.1 Summary",
"title": ""
},
{
"docid": "bb3295be91f0365d0d101e08ca4f5f5f",
"text": "Autonomous driving with high velocity is a research hotspot which challenges the scientists and engineers all over the world. This paper proposes a scheme of indoor autonomous car based on ROS which combines the method of Deep Learning using Convolutional Neural Network (CNN) with statistical approach using liDAR images and achieves a robust obstacle avoidance rate in cruise mode. In addition, the design and implementation of autonomous car are also presented in detail which involves the design of Software Framework, Hector Simultaneously Localization and Mapping (Hector SLAM) by Teleoperation, Autonomous Exploration, Path Plan, Pose Estimation, Command Processing, and Data Recording (Co- collection). what’s more, the schemes of outdoor autonomous car, communication, and security are also discussed. Finally, all functional modules are integrated in nVidia Jetson TX1.",
"title": ""
},
{
"docid": "74f95681ad04646bd5a221870948e43b",
"text": "Crimes will somehow influence organizations and institutions when occurred frequently in a society. Thus, it seems necessary to study reasons, factors and relations between occurrence of different crimes and finding the most appropriate ways to control and avoid more crimes. The main objective of this paper is to classify clustered crimes based on occurrence frequency during different years. Data mining is used extensively in terms of analysis, investigation and discovery of patterns for occurrence of different crimes. We applied a theoretical model based on data mining techniques such as clustering and classification to real crime dataset recorded by police in England and Wales within 1990 to 2011. We assigned weights to the features in order to improve the quality of the model and remove low value of them. The Genetic Algorithm (GA) is used for optimizing of Outlier Detection operator parameters using RapidMiner tool. Keywords—crime; clustering; classification; genetic algorithm; weighting; rapidminer",
"title": ""
},
{
"docid": "009f1283d0bd29d99a2de3695157ffd7",
"text": "Convolutional networks for image classification progressively reduce resolution until the image is represented by tiny feature maps in which the spatial structure of the scene is no longer discernible. Such loss of spatial acuity can limit image classification accuracy and complicate the transfer of the model to downstream applications that require detailed scene understanding. These problems can be alleviated by dilation, which increases the resolution of output feature maps without reducing the receptive field of individual neurons. We show that dilated residual networks (DRNs) outperform their non-dilated counterparts in image classification without increasing the models depth or complexity. We then study gridding artifacts introduced by dilation, develop an approach to removing these artifacts (degridding), and show that this further increases the performance of DRNs. In addition, we show that the accuracy advantage of DRNs is further magnified in downstream applications such as object localization and semantic segmentation.",
"title": ""
},
{
"docid": "099bd9e751b8c1e3a07ee06f1ba4b55b",
"text": "This paper presents a robust stereo-vision-based drivable road detection and tracking system that was designed to navigate an intelligent vehicle through challenging traffic scenarios and increment road safety in such scenarios with advanced driver-assistance systems (ADAS). This system is based on a formulation of stereo with homography as a maximum a posteriori (MAP) problem in a Markov random held (MRF). Under this formulation, we develop an alternating optimization algorithm that alternates between computing the binary labeling for road/nonroad classification and learning the optimal parameters from the current input stereo pair itself. Furthermore, online extrinsic camera parameter reestimation and automatic MRF parameter tuning are performed to enhance the robustness and accuracy of the proposed system. In the experiments, the system was tested on our experimental intelligent vehicles under various real challenging scenarios. The results have substantiated the effectiveness and the robustness of the proposed system with respect to various challenging road scenarios such as heterogeneous road materials/textures, heavy shadows, changing illumination and weather conditions, and dynamic vehicle movements.",
"title": ""
},
{
"docid": "5d1b66986357f2566ac503727a80bb87",
"text": "Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. We introduce Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space. We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and a denser interaction tensor contains richer semantic information. One instance of such architecture, Densely Interactive Inference Network (DIIN), demonstrates the state-of-the-art performance on large scale NLI copora and large-scale NLI alike corpus. It’s noteworthy that DIIN achieve a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI; Williams et al. 2017) dataset with respect to the strongest published system.",
"title": ""
},
{
"docid": "d7e53788cbe072bdf26ea71c0a91c2b3",
"text": "3D mesh segmentation has become a crucial part of many applications in 3D shape analysis. In this paper, a comprehensive survey on 3D mesh segmentation methods is presented. Analysis of the existing methodologies is addressed taking into account a new categorization along with the performance evaluation frameworks which aim to support meaningful benchmarks not only qualitatively but also in a quantitative manner. This survey aims to capture the essence of current trends in 3D mesh segmentation.",
"title": ""
},
{
"docid": "261a35dabf9129c6b9efc5f29540634c",
"text": "To date, the growth of electronic personal data leads to a trend that data owners prefer to remotely outsource their data to clouds for the enjoyment of the high-quality retrieval and storage service without worrying the burden of local data management and maintenance. However, secure share and search for the outsourced data is a formidable task, which may easily incur the leakage of sensitive personal information. Efficient data sharing and searching with security is of critical importance. This paper, for the first time, proposes a searchable attribute-based proxy reencryption system. When compared with the existing systems only supporting either searchable attribute-based functionality or attribute-based proxy reencryption, our new primitive supports both abilities and provides flexible keyword update service. In particular, the system enables a data owner to efficiently share his data to a specified group of users matching a sharing policy and meanwhile, the data will maintain its searchable property but also the corresponding search keyword(s) can be updated after the data sharing. The new mechanism is applicable to many real-world applications, such as electronic health record systems. It is also proved chosen ciphertext secure in the random oracle model.",
"title": ""
},
{
"docid": "18fcdcadc3290f9c8dd09f0aa1a27e8f",
"text": "The Industry 4.0 is a vision that includes connecting more intensively physical systems with their virtual counterparts in computers. This computerization of manufacturing will bring many advantages, including allowing data gathering, integration and analysis in the scale not seen earlier. In this paper we describe our Semantic Big Data Historian that is intended to handle large volumes of heterogeneous data gathered from distributed data sources. We describe the approach and implementation with a special focus on using Semantic Web technologies for integrating the data.",
"title": ""
},
{
"docid": "4b2510dfa7b0d9de17a9a1e43a362e85",
"text": "Stakeholder marketing has established foundational support for redefining and broadening the marketing discipline. An extensive literature review of 58 marketing articles that address six primary stakeholder groups (i.e., customers, suppliers, employees, shareholders, regulators, and the local community) provides evidence of the important role the groups play in stakeholder marketing. Based on this review and in conjunction with established marketing theory, we define stakeholder marketing as “activities and processes within a system of social institutions that facilitate and maintain value through exchange relationships with multiple stakeholders.” In an effort to focus on the stakeholder marketing field of study, we offer both a conceptual framework for understanding the pivotal role of stakeholder marketing and research questions for examining the linkages among stakeholder exchanges, value creation, and marketing outcomes.",
"title": ""
},
{
"docid": "753b167933f5dd92c4b8021f6b448350",
"text": "The advent of social media and microblogging platforms has radically changed the way we consume information and form opinions. In this paper, we explore the anatomy of the information space on Facebook by characterizing on a global scale the news consumption patterns of 376 million users over a time span of 6 y (January 2010 to December 2015). We find that users tend to focus on a limited set of pages, producing a sharp community structure among news outlets. We also find that the preferences of users and news providers differ. By tracking how Facebook pages \"like\" each other and examining their geolocation, we find that news providers are more geographically confined than users. We devise a simple model of selective exposure that reproduces the observed connectivity patterns.",
"title": ""
},
{
"docid": "147b207125fcda1dece25a6c5cd17318",
"text": "In this paper we present a neural network based system for automated e-mail filing into folders and antispam filtering. The experiments show that it is more accurate than several other techniques. We also investigate the effects of various feature selection, weighting and normalization methods, and also the portability of the anti-spam filter across different users.",
"title": ""
},
{
"docid": "8ca60b68f1516d63af36b7ead860686b",
"text": "The automatic patch-based exploit generation problem is: given a program P and a patched version of the program P', automatically generate an exploit for the potentially unknown vulnerability present in P but fixed in P'. In this paper, we propose techniques for automatic patch-based exploit generation, and show that our techniques can automatically generate exploits for 5 Microsoft programs based upon patches provided via Windows Update. Although our techniques may not work in all cases, a fundamental tenant of security is to conservatively estimate the capabilities of attackers. Thus, our results indicate that automatic patch-based exploit generation should be considered practical. One important security implication of our results is that current patch distribution schemes which stagger patch distribution over long time periods, such as Windows Update, may allow attackers who receive the patch first to compromise the significant fraction of vulnerable hosts who have not yet received the patch.",
"title": ""
},
{
"docid": "152182336e620ee94f24e3865b7b377f",
"text": "In Theory III we characterize with a mix of theory and experiments the generalization properties of Stochastic Gradient Descent in overparametrized deep convolutional networks. We show that Stochastic Gradient Descent (SGD) selects with high probability solutions that 1) have zero (or small) empirical error, 2) are degenerate as shown in Theory II and 3) have maximum generalization. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 123 1216. H.M. is supported in part by ARO Grant W911NF-15-10385.",
"title": ""
},
{
"docid": "9cc997e886bea0ac5006c9ee734b7906",
"text": "Additive manufacturing technology using inkjet offers several improvements to electronics manufacturing compared to current nonadditive masking technologies. Manufacturing processes can be made more efficient, straightforward and flexible compared to subtractive masking processes, several time-consuming and expensive steps can be omitted. Due to the additive process, material loss is minimal, because material is never removed as with etching processes. The amounts of used material and waste are smaller, which is advantageous in both productivity and environmental means. Furthermore, the additive inkjet manufacturing process is flexible allowing fast prototyping, easy design changes and personalization of products. Additive inkjet processing offers new possibilities to electronics integration, by enabling direct writing on various surfaces, and component interconnection without a specific substrate. The design and manufacturing of inkjet printed modules differs notably from the traditional way to manufacture electronics. In this study a multilayer inkjet interconnection process to integrate functional systems was demonstrated, and the issues regarding the design and manufacturing were considered. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "dd7ab988d8a40e6181cd37f8a1b1acfa",
"text": "In areas approaching malaria elimination, human mobility patterns are important in determining the proportion of malaria cases that are imported or the result of low-level, endemic transmission. A convenience sample of participants enrolled in a longitudinal cohort study in the catchment area of Macha Hospital in Choma District, Southern Province, Zambia, was selected to carry a GPS data logger for one month from October 2013 to August 2014. Density maps and activity space plots were created to evaluate seasonal movement patterns. Time spent outside the household compound during anopheline biting times, and time spent in malaria high- and low-risk areas, were calculated. There was evidence of seasonal movement patterns, with increased long-distance movement during the dry season. A median of 10.6% (interquartile range (IQR): 5.8-23.8) of time was spent away from the household, which decreased during anopheline biting times to 5.6% (IQR: 1.7-14.9). The per cent of time spent in malaria high-risk areas for participants residing in high-risk areas ranged from 83.2% to 100%, but ranged from only 0.0% to 36.7% for participants residing in low-risk areas. Interventions targeted at the household may be more effective because of restricted movement during the rainy season, with limited movement between high- and low-risk areas.",
"title": ""
},
{
"docid": "cfce53c88e07b9cd837c3182a24d9901",
"text": "The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "8b252e706868440162e50a2c23255cb3",
"text": "Currently, most top-performing text detection networks tend to employ fixed-size anchor boxes to guide the search for text instances. ey usually rely on a large amount of anchors with different scales to discover texts in scene images, thus leading to high computational cost. In this paper, we propose an end-to-end boxbased text detector with scale-adaptive anchors, which can dynamically adjust the scales of anchors according to the sizes of underlying texts by introducing an additional scale regression layer. e proposed scale-adaptive anchors allow us to use a few number of anchors to handle multi-scale texts and therefore significantly improve the computational efficiency. Moreover, compared to discrete scales used in previous methods, the learned continuous scales are more reliable, especially for small texts detection. Additionally, we propose Anchor convolution to beer exploit necessary feature information by dynamically adjusting the sizes of receptive fields according to the learned scales. Extensive experiments demonstrate that the proposed detector is fast, taking only 0.28 second per image, while outperforming most state-of-the-art methods in accuracy.",
"title": ""
}
] | scidocsrr |
43f590e6352178a6586387c6d88b28c4 | Knowledge sharing and social media: Altruism, perceived online attachment motivation, and perceived online relationship commitment | [
{
"docid": "cd0b28b896dd84ca70d42541b466d5ff",
"text": "a r t i c l e i n f o a b s t r a c t The success of knowledge management initiatives depends on knowledge sharing. This paper reviews qualitative and quantitative studies of individual-level knowledge sharing. Based on the literature review we developed a framework for understanding knowledge sharing research. The framework identifies five areas of emphasis of knowledge sharing research: organizational context, interpersonal and team characteristics, cultural characteristics, individual characteristics, and motivational factors. For each emphasis area the paper discusses the theoretical frameworks used and summarizes the empirical research results. The paper concludes with a discussion of emerging issues, new research directions, and practical implications of knowledge sharing research. Knowledge is a critical organizational resource that provides a sustainable competitive advantage in a competitive and dynamic economy (e. To gain a competitive advantage it is necessary but insufficient for organizations to rely on staffing and training systems that focus on selecting employees who have specific knowledge, skills, abilities, or competencies or helping employees acquire them (e.g., Brown & Duguid, 1991). Organizations must also consider how to transfer expertise and knowledge from experts who have it to novices who need to know (Hinds, Patterson, & Pfeffer, 2001). That is, organizations need to emphasize and more effectively exploit knowledge-based resources that already exist within the organization As one knowledge-centered activity, knowledge sharing is the fundamental means through which employees can contribute to knowledge application, innovation, and ultimately the competitive advantage of the organization (Jackson, Chuang, Harden, Jiang, & Joseph, 2006). Knowledge sharing between employees and within and across teams allows organizations to exploit and capitalize on knowledge-based resources Research has shown that knowledge sharing and combination is positively related to reductions in production costs, faster completion of new product development projects, team performance, firm innovation capabilities, and firm performance including sales growth and revenue from new products and services (e. Because of the potential benefits that can be realized from knowledge sharing, many organizations have invested considerable time and money into knowledge management (KM) initiatives including the development of knowledge management systems (KMS) which use state-of-the-art technology to facilitate the collection, storage, and distribution of knowledge. However, despite these investments it has been estimated that at least $31.5 billion are lost per year by Fortune 500",
"title": ""
},
{
"docid": "c3750965243aef6b2389a2dfc3afa1b0",
"text": "This study reports on an exploratory survey conducted to investigate the use of social media technologies for sharing information. This paper explores the issue of credibility of the information shared in the context of computer-mediated communication. Four categories of information were explored: sensitive, sensational, political and casual information, across five popular social media technologies: social networking sites, micro-blogging sites, wikis, online forums, and online blogs. One hundred and fourteen active users of social media technologies participated in the study. The exploratory analysis conducted in this study revealed that information producers use different cues to indicate credibility of the information they share on different social media sites. Organizations can leverage findings from this study to improve targeted engagement with their customers. The operationalization of how information credibility is codified by information producers contributes to knowledge in social media research. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "f24011de3d527f54be4dff329e3862e9",
"text": "Basic concepts of ANNs together with three most widely used ANN learning strategies (error back-propagation, Kohonen, and counterpropagation) are explained and discussed. In order to show how the explained methods can be applied to chemical problems, one simple example, the classification and the prediction of the origin of different olive oil samples, each represented by eigtht fatty acid concentrations, is worked out in detail.",
"title": ""
},
{
"docid": "c2bb03165910da0597b0dbdd8831666a",
"text": "In this paper, we propose a method for training neural networks when we have a large set of data with weak labels and a small amount of data with true labels. In our proposed model, we train two neural networks: a target network, the learner and a confidence network, the meta-learner. The target network is optimized to perform a given task and is trained using a large set of unlabeled data that are weakly annotated. We propose to control the magnitude of the gradient updates to the target network using the scores provided by the second confidence network, which is trained on a small amount of supervised data. Thus we avoid that the weight updates computed from noisy labels harm the quality of the target networkmodel.",
"title": ""
},
{
"docid": "d63591706309cf602404c34de547184f",
"text": "This paper presents an overview of the inaugural Amazon Picking Challenge along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning, and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned based on survey results and the authors’ personal experiences during the challenge.Note to Practitioners—Perception, motion planning, grasping, and robotic system engineering have reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semistructured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.",
"title": ""
},
{
"docid": "4d2be7aac363b77c6abd083947bc28c7",
"text": "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.",
"title": ""
},
{
"docid": "36460eda2098bdcf3810828f54ee7d2b",
"text": "[This corrects the article on p. 662 in vol. 60, PMID: 27729694.].",
"title": ""
},
{
"docid": "fc67a42a0c1d278994f0255e6cf3331a",
"text": "ibrant public securities markets rely on complex systems of supporting institutions that promote the governance of publicly traded companies. Corporate governance structures serve: 1) to ensure that minority shareholders receive reliable information about the value of firms and that a company’s managers and large shareholders do not cheat them out of the value of their investments, and 2) to motivate managers to maximize firm value instead of pursuing personal objectives.1 Institutions promoting the governance of firms include reputational intermediaries such as investment banks and audit firms, securities laws and regulators such as the Securities and Exchange Commission (SEC) in the United States, and disclosure regimes that produce credible firm-specific information about publicly traded firms. In this paper, we discuss economics-based research focused primarily on the governance role of publicly reported financial accounting information. Financial accounting information is the product of corporate accounting and external reporting systems that measure and routinely disclose audited, quantitative data concerning the financial position and performance of publicly held firms. Audited balance sheets, income statements, and cash-flow statements, along with supporting disclosures, form the foundation of the firm-specific information set available to investors and regulators. Developing and maintaining a sophisticated financial disclosure regime is not cheap. Countries with highly developed securities markets devote substantial resources to producing and regulating the use of extensive accounting and disclosure rules that publicly traded firms must follow. Resources expended are not only financial, but also include opportunity costs associated with deployment of highly educated human capital, including accountants, lawyers, academicians, and politicians. In the United States, the SEC, under the oversight of the U.S. Congress, is responsible for maintaining and regulating the required accounting and disclosure rules that firms must follow. These rules are produced both by the SEC itself and through SEC oversight of private standards-setting bodies such as the Financial Accounting Standards Board and the Emerging Issues Task Force, which in turn solicit input from business leaders, academic researchers, and regulators around the world. In addition to the accounting standards-setting investments undertaken by many individual countries and securities exchanges, there is currently a major, well-funded effort in progress, under the auspices of the International Accounting Standards Board (IASB), to produce a single set of accounting standards that will ultimately be acceptable to all countries as the basis for cross-border financing transactions.2 The premise behind governance research in accounting is that a significant portion of the return on investment in accounting regimes derives from enhanced governance of firms, which in turn facilitates the operation of securities Robert M. Bushman and Abbie J. Smith",
"title": ""
},
{
"docid": "12819e1ad6ca9b546e39ed286fe54d23",
"text": "This paper describes an efficient method to make individual faces for animation from several possible inputs. We present a method to reconstruct 3D facial model for animation from two orthogonal pictures taken from front and side views or from range data obtained from any available resources. It is based on extracting features on a face in a semiautomatic way and modifying a generic model with detected feature points. Then the fine modifications follow if range data is available. Automatic texture mapping is employed using a composed image from the two images. The reconstructed 3Dface can be animated immediately with given expression parameters. Several faces by one methodology applied to different input data to get a final animatable face are illustrated.",
"title": ""
},
{
"docid": "7f6edf82ddbe5b63ba5d36a7d8691dda",
"text": "This paper identifies the possibility of using electronic compasses and accelerometers in mobile phones, as a simple and scalable method of localization without war-driving. The idea is not fundamentally different from ship or air navigation systems, known for centuries. Nonetheless, directly applying the idea to human-scale environments is non-trivial. Noisy phone sensors and complicated human movements present practical research challenges. We cope with these challenges by recording a person's walking patterns, and matching it against possible path signatures generated from a local electronic map. Electronic maps enable greater coverage, while eliminating the reliance on WiFi infrastructure and expensive war-driving. Measurements on Nokia phones and evaluation with real users confirm the anticipated benefits. Results show a location accuracy of less than 11m in regions where today's localization services are unsatisfactory or unavailable.",
"title": ""
},
{
"docid": "50c961c8b229c7a4b31ca6a67e06112c",
"text": "The emerging three-dimensional (3D) chip architectures, with their intrinsic capability of reducing the wire length, is one of the promising solutions to mitigate the interconnect problem in modern microprocessor designs. 3D memory stacking also enables much higher memory bandwidth for future chip-multiprocessor design, mitigating the ``memory wall\" problem. In addition, heterogenous integration enabled by 3D technology can also result in innovation designs for future microprocessors. This paper serves as a survey of various approaches to design future 3D microprocessors, leveraging the benefits of fast latency, higher bandwidth, and heterogeneous integration capability that are offered by 3D technology.",
"title": ""
},
{
"docid": "d84f0baebe248608ae3c910adb39baea",
"text": "BACKGROUND\nSkin atrophy is a common manifestation of aging and is frequently accompanied by ulceration and delayed wound healing. With an increasingly aging patient population, management of skin atrophy is becoming a major challenge in the clinic, particularly in light of the fact that there are no effective therapeutic options at present.\n\n\nMETHODS AND FINDINGS\nAtrophic skin displays a decreased hyaluronate (HA) content and expression of the major cell-surface hyaluronate receptor, CD44. In an effort to develop a therapeutic strategy for skin atrophy, we addressed the effect of topical administration of defined-size HA fragments (HAF) on skin trophicity. Treatment of primary keratinocyte cultures with intermediate-size HAF (HAFi; 50,000-400,000 Da) but not with small-size HAF (HAFs; <50,000 Da) or large-size HAF (HAFl; >400,000 Da) induced wild-type (wt) but not CD44-deficient (CD44-/-) keratinocyte proliferation. Topical application of HAFi caused marked epidermal hyperplasia in wt but not in CD44-/- mice, and significant skin thickening in patients with age- or corticosteroid-related skin atrophy. The effect of HAFi on keratinocyte proliferation was abrogated by antibodies against heparin-binding epidermal growth factor (HB-EGF) and its receptor, erbB1, which form a complex with a particular isoform of CD44 (CD44v3), and by tissue inhibitor of metalloproteinase-3 (TIMP-3).\n\n\nCONCLUSIONS\nOur observations provide a novel CD44-dependent mechanism for HA oligosaccharide-induced keratinocyte proliferation and suggest that topical HAFi application may provide an attractive therapeutic option in human skin atrophy.",
"title": ""
},
{
"docid": "49e2963e84967100deee8fc810e053ba",
"text": "We have developed a method for rigidly aligning images of tubes. This paper presents an evaluation of the consistency of that method for three-dimensional images of human vasculature. Vascular images may contain alignment ambiguities, poorly corresponding vascular networks, and non-rigid deformations, yet the Monte Carlo experiments presented in this paper show that our method registers vascular images with sub-voxel consistency in a matter of seconds. Furthermore, we show that the method's insensitivity to non-rigid deformations enables the localization, quantification, and visualization of those deformations. Our method aligns a source image with a target image by registering a model of the tubes in the source image directly with the target image. Time can be spent to extract an accurate model of the tubes in the source image. Multiple target images can then be registered with that model without additional extractions. Our registration method builds upon the principles of our tubular object segmentation work that combines dynamic-scale central ridge traversal with radius estimation. In particular, our registration method's consistency stems from incorporating multi-scale ridge and radius measures into the model-image match metric. Additionally, the method's speed is due in part to the use of coarse-to-fine optimization strategies that are enabled by measures made during model extraction and by the parameters inherent to the model-image match metric.",
"title": ""
},
{
"docid": "7716409441fb8e34013d3e9f58d32476",
"text": "Decentralized partially observable Markov decision processes (Dec-POMDPs) are a powerful tool for modeling multi-agent planning and decision-making under uncertainty. Prevalent Dec-POMDP solution techniques require centralized computation given full knowledge of the underlying model. Multi-agent reinforcement learning (MARL) based approaches have been recently proposed for distributed solution of during learning and policy execution are identical. In some practical scenarios this may not be the case. We propose a novel MARL approach in which agents are allowed to rehearse with information that will not be available during policy execution. The key is for the agents to learn policies that do not explicitly rely on these rehearsal features. We also establish a weak convergence result for our algorithm, RLaR, demonstrating that RLaR converges in probability when certain conditions are met. We show experimentally that incorporating rehearsal features can enhance the learning rate compared to non-rehearsalbased learners, and demonstrate fast, (near) optimal performance on many existing benchmark DecPOMDP problems. We also compare RLaR against an existing approximate Dec-POMDP solver which, like RLaR, does not assume a priori knowledge of the model. While RLaR's policy representation is not as scalable, we show that RLaR produces higher quality policies for most problems and horizons studied. & 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e2b166491ccc69674d2a597282facf02",
"text": "With the advancement of radio access networks, more and more mobile data content needs to be transported by optical networks. Mobile fronthaul is an important network segment that connects centralized baseband units (BBUs) with remote radio units in cloud radio access networks (C-RANs). It enables advanced wireless technologies such as coordinated multipoint and massive multiple-input multiple-output. Mobile backhaul, on the other hand, connects BBUs with core networks to transport the baseband data streams to their respective destinations. Optical access networks are well positioned to meet the first optical communication demands of C-RANs. To better address the stringent requirements of future generations of wireless networks, such as the fifth-generation (5G) wireless, optical access networks need to be improved and enhanced. In this paper, we review emerging optical access network technologies that aim to support 5G wireless with high capacity, low latency, and low cost and power per bit. Advances in high-capacity passive optical networks (PONs), such as 100 Gbit/s PON, will be reviewed. Among the topics discussed are advanced modulation and detection techniques, digital signal processing tailored for optical access networks, and efficient mobile fronthaul techniques. We also discuss the need for coordination between RAN and PON to simplify the overall network, reduce the network latency, and improve the network cost efficiency and power efficiency.",
"title": ""
},
{
"docid": "12c2e384d9bbb33e6b2d1e15acde7984",
"text": "Traditional companies are increasingly turning towards platform strategies to gain speed in the development of digital value propositions and prepare for the challenges arising from digitalization. This paper reports on the digitalization journey of the LEGO Group to elaborate how brick-and-mortar companies can break away from a drifting information infrastructure and trigger its transformation into a digital platform. Conceptualizing information infrastructure evolution as path-dependent process, the case study explores how mindful deviations by Enterprise Architects guide installed base cultivation through collective action and trigger the creation of a new ‘platformization’ path. Additionally, the findings portrait Enterprise Architecture management as a process of socio-technical path constitution that is equally shaped by deliberate human interventions and emergent forces through path dependencies.",
"title": ""
},
{
"docid": "5dd790f34fec2f4adc52971c39e55d6b",
"text": "Although within SDN community, the notion of logically centralized network control is well understood and agreed upon, many different approaches exist on how one should deliver such a logically centralized view to multiple distributed controller instances. In this paper, we survey and investigate those approaches. We discover that we can classify the methods into several design choices that are trending among SDN adopters. Each design choice may influence several SDN issues such as scalability, robustness, consistency, and privacy. Thus, we further analyze the pros and cons of each model regarding these matters. We conclude that each design begets some characteristics. One may excel in resolving one issue but perform poor in another. We also present which design combinations one should pick to build distributed controller that is scalable, robust, consistent",
"title": ""
},
{
"docid": "599a142f342506c08476bdd353a6ef89",
"text": "This study was undertaken to report clinical outcomes after high tibial osteotomy (HTO) in patients with a discoid lateral meniscus and to determine (1) whether discoid lateral meniscus degeneration by magnetic resonance imaging (MRI) progresses after HTO and (2) whether this progression adversely affects clinical results. The records of 292 patients (292 knees) who underwent medial opening HTO were retrospectively reviewed, and discoid types and grades of lateral meniscus degeneration as determined by MRI were recorded preoperatively. Of the 292 patients, 17 (5.8 %) had a discoid lateral meniscus, and postoperative MR images were obtained at least 2 years after HTO for 15 of these 17 patients. American Knee Society (AKS) pain, knee and function scores significantly improved in the 15 patients after surgery (p < 0.001). Eight (53 %) had an incomplete and 7 (47 %) had a complete discoid lateral meniscus. By preoperative MRI, the distribution of meniscal degeneration was as follows: grade 1, 4 patients; grade 2, 7 patients; and grade 3, 4 patients. At the final follow-up, the distribution of degeneration was as follows: grade 1, 2 patients; grade 2, 5 patients; and grade 3, 8 patients. Two patients with grade 3 degeneration who did not undergo partial meniscectomy showed tear progression. Thus, 8 of the 15 patients (53 %) experienced progressive discoid meniscal degeneration after HTO. Median AKS pain score was significantly lower in the progression group than in the non-progression group (40 vs 45, respectively). The results of this study suggest that increased load on the lateral compartment after HTO can accelerate discoid lateral meniscus degeneration by MRI and caution that when a discoid lateral meniscus is found by preoperative MRI, progressive degeneration may occur after HTO and clinical outcome may be adversely affected. Therapeutic study, Level IV.",
"title": ""
},
{
"docid": "30d119e1c2777988aab652e34fb76846",
"text": "The relationship between games and story remains a divisive question among game fans, designers, and scholars alike. At a recent academic Games Studies conference, for example, a blood feud threatened to erupt between the self-proclaimed Ludologists, who wanted to see the focus shift onto the mechanics of game play, and the Narratologists, who were interested in studying games alongside other storytelling media.(1) Consider some recent statements made on this issue:",
"title": ""
},
{
"docid": "1acb7ca89eab0a0b4306aa2ebb844018",
"text": "This paper describes work in progress. Our research is focused on efficient construction of effective models for spam detection. Clustering messages allows for efficient labeling of a representative sample of messages for learning a spam detection model using a Random Forest for classification and active learning for refining the classification model. Results are illustrated for the 2007 TREC Public Spam Corpus. The area under the Receiver Operating Characteristic (ROC) curve is competitive with other solutions while requiring much fewer labeled training examples.",
"title": ""
},
{
"docid": "048d54f4997bfea726f69cf7f030543d",
"text": "In this article, we have reviewed the state of the art of IPT systems and have explored the suitability of the technology to wirelessly charge battery powered vehicles. the review shows that the IPT technology has merits for stationary charging (when the vehicle is parked), opportunity charging (when the vehicle is stopped for a short period of time, for example, at a bus stop), and dynamic charging (when the vehicle is moving along a dedicated lane equipped with an IPT system). Dynamic wireless charging holds promise to partially or completely eliminate the overnight charging through a compact network of dynamic chargers installed on the roads that would keep the vehicle batteries charged at all times, consequently reducing the range anxiety and increasing the reliability of EVs. Dynamic charging can help lower the price of EVs by reducing the size of the battery pack. Indeed, if the recharging energy is readily available, the batteries do not have to support the whole driving range but only supply power when the IPT system is not available. Depending on the power capability, the use of dynamic charging may increase driving range and reduce the size of the battery pack.",
"title": ""
},
{
"docid": "217c8e82b0131a1634a0e09967c388dc",
"text": "F anthropology plays a vital role in medicolegal investigations of death. Today, forensic anthropologists are intimately involved in many aspects of these investigations; they may participate in search and recovery efforts, develop a biological profile, identify and document trauma, determine postmortem interval, and offer expert witness courtroom testimony. However, few forensic anthropology textbooks include substantial discussions of our medicolegal and judicial systems. Forensic Anthropology: Contemporary Theory and Practice, by Debra A. Komar and Jane E. Buikstra, not only examines current forensic anthropology from a theoretical perspective, but it also includes an introduction to elements of our legal system. Further, the text integrates these important concepts with bioanthropological theories and methods. Komar and Buikstra begin with an introductory chapter that traces the history of forensic anthropology in the United States. The careers of several founding members of the American Board of Forensic Anthropology are recognized for their contribution to advancing the profession. We are reminded that the field has evolved through the years from biological anthropologists doing forensic anthropology to modern students, who need training in both the medical and physical sciences, as well as traditional foundations in biological anthropology. In Chapters Two and Three, the authors introduce the reader to the medicolegal and judicial systems respectively. They present the medicolegal system with interesting discussions of important topics such as jurisdiction, death investigations, cause and manner of death, elements of a crime (actus reus and mens rea), and postmortem examinations. The chapter on the judicial system begins with the different classifications and interpretations of evidence, followed by an overview. Key components of this chapter include the rules governing expert witness testimony and scientific evidence in the courtroom. The authors also review the United States Supreme Court landmark decision, Daubert v. Merrell Dow Pharmaceuticals 1993, which established more stringent criteria that federal judges must follow regarding the admissibility of scientific evidence in federal courtrooms. The authors note that in the Daubert decision, the Supreme Court modified the “Frye test”, removing the general acceptability criterion formerly required. In light of the Daubert ruling, the authors demonstrate the need for anthropologists to refine techniques and continue to develop biological profiling methods that will meet the rigorous Daubert standards. Anthropology is not alone among the forensic sciences that seek to refine methods and techniques. For example, forensic odontology has recently come under scrutiny in cases where defendants have been wrongfully convicted based on bite mark evidence (Saks and Koehler 2005). Additionally, Saks and Koehler also remark upon 86 DNA exoneration cases and note that 63% of these wrongful convictions are attributed to forensic science testing errors. Chapter Four takes a comprehensive look at the role of forensic anthropologists during death investigations. The authors note that “the participation of forensic anthropologists can be invaluable to the proper handling of the death scene” (p. 65). To this end, the chapter includes discussions of identifying remains of medicolegal and nonmedicolegal significance, jurisdiction issues, search strategies, and proper handling of evidence. 
Readers may find the detailed treatment of differentiating human from nonhuman material particularly useful. The following two chapters deal with developing a biological profile, and pathology and trauma. A detailed review of sex and age estimation for both juvenile and adult skeletal remains is provided, as well as an assessment of the estimation of ancestry and stature. A welcome discussion on scientific testing and the error rates of different methods is highlighted throughout their ‘reference’ packed discussion. In their critical review of biological profile development, Komar and Buikstra discuss the various estimation methods; they note that more recent techniques may need testing on additional skeletal samples to survive potential challenges under the Daubert ruling. We also are reminded that in forensic science, flawed methods may result in the false imprisonment of innocent persons, therefore an emphasis is placed on developing and refining techniques that improve both the accuracy and reliability of biological profile estimates. Students will find that the descriptions and discussions of the different categories of both pathology and trauma assessments are beneficial for understanding postmortem examinations. One also may find that the reviews of blunt and sharp force trauma, gunshot wounds, and fracture terminology are particularly useful. Komar and Buikstra continue their remarkable book with a chapter focusing on forensic taphonomy. They begin with an introduction and an outline of the goals of forensic taphonomy which includes time since death estimation, mechanisms of bone modification, and reconstructing perimortem events. The reader is drawn to the case studies that",
"title": ""
}
] | scidocsrr |
3fea56b608bb0d446944ca50b580c4b5 | Revisiting the role of language in spatial cognition: Categorical perception of spatial relations in English and Korean speakers. | [
{
"docid": "f5495554337d3996c2a63459c5f90ab7",
"text": "In this paper we examine how English and Mandarin speakers think about time, and we test how the patterns of thinking in the two groups relate to patterns in linguistic and cultural experience. In Mandarin, vertical spatial metaphors are used more frequently to talk about time than they are in English; English relies primarily on horizontal terms. We present results from two tasks comparing English and Mandarin speakers' temporal reasoning. The tasks measure how people spatialize time in three-dimensional space, including the sagittal (front/back), transverse (left/right), and vertical (up/down) axes. Results of Experiment 1 show that people automatically create spatial representations in the course of temporal reasoning, and these implicit spatializations differ in accordance with patterns in language, even in a non-linguistic task. Both groups showed evidence of a left-to-right representation of time, in accordance with writing direction, but only Mandarin speakers showed a vertical top-to-bottom pattern for time (congruent with vertical spatiotemporal metaphors in Mandarin). Results of Experiment 2 confirm and extend these findings, showing that bilinguals' representations of time depend on both long-term and proximal aspects of language experience. Participants who were more proficient in Mandarin were more likely to arrange time vertically (an effect of previous language experience). Further, bilinguals were more likely to arrange time vertically when they were tested in Mandarin than when they were tested in English (an effect of immediate linguistic context).",
"title": ""
}
] | [
{
"docid": "b96f5d52bc37bf3ad876699826cc5022",
"text": "According to psychological scientists, humans understand models that most match their own internal models, which they characterize as lists of \"heuristic\"s (i.e. lists of very succinct rules). One such heuristic rule generator is the Fast-and-Frugal Trees (FFT) preferred by psychological scientists. Despite their successful use in many applied domains, FFTs have not been applied in software analytics. Accordingly, this paper assesses FFTs for software analytics. \n We find that FFTs are remarkably effective in that their models are very succinct (5 lines or less describing a binary decision tree) while also outperforming result from very recent, top-level, conference papers. Also, when we restrict training data to operational attributes (i.e., those attributes that are frequently changed by developers), the performance of FFTs are not effected (while the performance of other learners can vary wildly). \n Our conclusions are two-fold. Firstly, there is much that software analytics community could learn from psychological science. Secondly, proponents of complex methods should always baseline those methods against simpler alternatives. For example, FFTs could be used as a standard baseline learner against which other software analytics tools are compared.",
"title": ""
},
{
"docid": "24c1b31bac3688c901c9b56ef9a331da",
"text": "Advanced Persistent Threats (APTs) are a new breed of internet based smart threats, which can go undetected with the existing state of-the-art internet traffic monitoring and protection systems. With the evolution of internet and cloud computing, a new generation of smart APT attacks has also evolved and signature based threat detection systems are proving to be futile and insufficient. One of the essential strategies in detecting APTs is to continuously monitor and analyze various features of a TCP/IP connection, such as the number of transferred packets, the total count of the bytes exchanged, the duration of the TCP/IP connections, and details of the number of packet flows. The current threat detection approaches make extensive use of machine learning algorithms that utilize statistical and behavioral knowledge of the traffic. However, the performance of these algorithms is far from satisfactory in terms of reducing false negatives and false positives simultaneously. Mostly, current algorithms focus on reducing false positives, only. This paper presents a fractal based anomaly classification mechanism, with the goal of reducing both false positives and false negatives, simultaneously. A comparison of the proposed fractal based method with a traditional Euclidean based machine learning algorithm (k-NN) shows that the proposed method significantly outperforms the traditional approach by reducing false positive and false negative rates, simultaneously, while improving the overall classification rates.",
"title": ""
},
{
"docid": "4507f495e401e9e67a0ff6396778ff06",
"text": "Deep generative adversarial networks (GANs) are the emerging technology in drug discovery and biomarker development. In our recent work, we demonstrated a proof-of-concept of implementing deep generative adversarial autoencoder (AAE) to identify new molecular fingerprints with predefined anticancer properties. Another popular generative model is the variational autoencoder (VAE), which is based on deep neural architectures. In this work, we developed an advanced AAE model for molecular feature extraction problems, and demonstrated its advantages compared to VAE in terms of (a) adjustability in generating molecular fingerprints; (b) capacity of processing very large molecular data sets; and (c) efficiency in unsupervised pretraining for regression model. Our results suggest that the proposed AAE model significantly enhances the capacity and efficiency of development of the new molecules with specific anticancer properties using the deep generative models.",
"title": ""
},
{
"docid": "a13a302e7e2fd5e09a054f1bf23f1702",
"text": "A number of machine learning (ML) techniques have recently been proposed to solve color constancy problem in computer vision. Neural networks (NNs) and support vector regression (SVR) in particular, have been shown to outperform many traditional color constancy algorithms. However, neither neural networks nor SVR were compared to simpler regression tools in those studies. In this article, we present results obtained with a linear technique known as ridge regression (RR) and show that it performs better than NNs, SVR, and gray world (GW) algorithm on the same dataset. We also perform uncertainty analysis for NNs, SVR, and RR using bootstrapping and show that ridge regression and SVR are more consistent than neural networks. The shorter training time and single parameter optimization of the proposed approach provides a potential scope for real time video tracking application.",
"title": ""
},
{
"docid": "775e0205ef85aa5d04af38748e63aded",
"text": "Monads are a de facto standard for the type-based analysis of impure aspects of programs, such as runtime cost [9, 5]. Recently, the logical dual of a monad, the comonad, has also been used for the cost analysis of programs, in conjunction with a linear type system [6, 8]. The logical duality of monads and comonads extends to cost analysis: In monadic type systems, costs are (side) effects, whereas in comonadic type systems, costs are coeffects. However, it is not clear whether these two methods of cost analysis are related and, if so, how. Are they equally expressive? Are they equally well-suited for cost analysis with all reduction strategies? Are there translations from type systems with effects to type systems with coeffects and viceversa? The goal of this work-in-progress paper is to explore some of these questions in a simple context — the simply typed lambda-calculus (STLC). As we show, even this simple context is already quite interesting technically and it suffices to bring out several key points.",
"title": ""
},
{
"docid": "a1d167f6c1c1d574e8e5c0c6cba2c775",
"text": "Hypothesis generation, a crucial initial step for making scientific discoveries, relies on prior knowledge, experience and intuition. Chance connections made between seemingly distinct subareas sometimes turn out to be fruitful. The goal in text mining is to assist in this process by automatically discovering a small set of interesting hypotheses from a suitable text collection. In this paper we present open and closed text mining algorithms that are built within the discovery framework established by Swanson and Smalheiser. Our algorithms represent topics using metadata profiles. When applied to MEDLINE these are MeSH based profiles. We present experiments that demonstrate the effectiveness of our algorithms. Specifically, our algorithms generate ranked term lists where the key terms representing novel relationships between topics are ranked high.",
"title": ""
},
{
"docid": "da3e12690fd5bfeb21be374e7aa3a111",
"text": "The most common specimens from immunocompromised patients that are analyzed for detection of herpes simplex virus (HSV) or varicella-zoster virus (VZV) are from skin lesions. Many types of assays are applicable to these samples, but some, such as virus isolation and direct fluorescent antibody testing, are useful only in the early phases of the lesions. In contrast, nucleic acid (NA) detection methods, which generally have superior sensitivity and specificity, can be applied to skin lesions at any stage of progression. NA methods are also the best choice, and sometimes the only choice, for detecting HSV or VZV in blood, cerebrospinal fluid, aqueous or vitreous humor, and from mucosal surfaces. NA methods provide the best performance when reliability and speed (within 24 hours) are considered together. They readily distinguish the type of HSV detected or the source of VZV detected (wild type or vaccine strain). Nucleic acid detection methods are constantly being improved with respect to speed and ease of performance. Broader applications are under study, such as the use of quantitative results of viral load for prognosis and to assess the efficacy of antiviral therapy.",
"title": ""
},
{
"docid": "641a51f9a5af9fc9dba4be3d12829fd5",
"text": "In this paper, we present a novel SpaTial Attention Residue Network (STAR-Net) for recognising scene texts. The overall architecture of our STAR-Net is illustrated in fig. 1. Our STARNet emphasises the importance of representative image-based feature extraction from text regions by the spatial attention mechanism and the residue learning strategy. It is by far the deepest neural network proposed for scene text recognition.",
"title": ""
},
{
"docid": "259972cd20a1f763b07bef4619dc7f70",
"text": "This paper proposes an Interactive Chinese Character Learning System (ICCLS) based on pictorial evolution as an edutainment concept in computer-based learning of language. The advantage of the language origination itself is taken as a learning platform due to the complexity in Chinese language as compared to other types of languages. Users especially children enjoy more by utilize this learning system because they are able to memories the Chinese Character easily and understand more of the origin of the Chinese character under pleasurable learning environment, compares to traditional approach which children need to rote learning Chinese Character under un-pleasurable environment. Skeletonization is used as the representation of Chinese character and object with an animated pictograph evolution to facilitate the learning of the language. Shortest skeleton path matching technique is employed for fast and accurate matching in our implementation. User is required to either write a word or draw a simple 2D object in the input panel and the matched word and object will be displayed as well as the pictograph evolution to instill learning. The target of computer-based learning system is for pre-school children between 4 to 6 years old to learn Chinese characters in a flexible and entertaining manner besides utilizing visual and mind mapping strategy as learning methodology.",
"title": ""
},
{
"docid": "c746704be981521aa38f7760a37d4b83",
"text": "Myoelectric or electromyogram (EMG) signals can be useful in intelligently recognizing intended limb motion of a person. This paper presents an attempt to develop a four-channel EMG signal acquisition system as part of an ongoing research in the development of an active prosthetic device. The acquired signals are used for identification and classification of six unique movements of hand and wrist, viz. hand open, hand close, wrist flexion, wrist extension, ulnar deviation and radial deviation. This information is used for actuation of prosthetic drive. The time domain features are extracted, and their dimension is reduced using principal component analysis. The reduced features are classified using two different techniques: k nearest neighbor and artificial neural networks, and the results are compared.",
"title": ""
},
{
"docid": "2a86c4904ef8059295f1f0a2efa546d8",
"text": "3D shape is a crucial but heavily underutilized cue in today’s computer vision system, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape model in the loop. Apart from object recognition on 2.5D depth maps, recovering these incomplete 3D shapes to full 3D is critical for analyzing shape variations. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses. It naturally supports joint object recognition and shape reconstruction from 2.5D depth maps, and further, as an additional application it allows active object recognition through view planning. We construct a largescale 3D CAD model dataset to train our model, and conduct extensive experiments to study our new representation.",
"title": ""
},
{
"docid": "c5ff79665033fd215411069cb860d641",
"text": "This paper presents a new geometry-based method to determine if a cable-driven robot operating in a d-degree-of-freedom workspace (2 ≤ d ≤ 6) with n ≥ d cables can generate a given set of wrenches in a given pose, considering acceptable minimum and maximum tensions in the cables. To this end, the fundamental nature of the Available Wrench Set is studied. The latter concept, defined here, is closely related to similar sets introduced in [23, 4]. It is shown that the Available Wrench Set can be represented mathematically by a zonotope, a special class of convex polytopes. Using the properties of zonotopes, two methods to construct the Available Wrench Set are discussed. From the representation of the Available Wrench Set, computationallyefficient and non-iterative tests are presented to verify if this set includes the Task Wrench Set, the set of wrenches needed for a given task. INTRODUCTION AND PROBLEM DEFINITION A cable-driven robot, or simply cable robot, is a parallel robot whose actuated limbs are cables. The length of the cables can be adjusted in a coordinated manner to control the pose (position and orientation) and/or wrench (force and torque) at the moving platform. Pioneer applications of such mechanisms are the NIST Robocrane [1], the Falcon high-speed manipulator [15] and the Skycam [7]. The fact that cables can only exert efforts in one direction impacts the capability of the mechanism to generate wrenches at the platform. Previous work already presented methods to test if a set of wrenches – ranging from one to all possible wrenches – could be generated by a cable robot in a given pose, considering that cables work only in tension. Some of the proposed methods focus on fully constrained cable robots while others apply to unconstrained robots. In all cases, minimum and/or maximum cable tensions is considered. A complete section of this paper is dedicated to the comparison of the proposed approach with previous methods. A general geometric approach that addresses all possible cases without using an iterative algorithm is presented here. It will be shown that the results obtained with this approach are consistent with the ones previously presented in the literature [4, 5, 14, 17, 18, 22, 23, 24, 26]. This paper does not address the workspace of cable robots. The latter challenging problem was addressed in several papers over the recent years [10, 11, 12, 19, 25]. Before looking globally at the workspace, all proposed methods must go through the intermediate step of assessing the capability of a mechanism to generate a given set of wrenches. The approach proposed here is also compared with the intermediate steps of the papers on the workspace determination of cable robots. The task that a robot has to achieve implies that it will have to be able to generate a given set of wrenches in a given pose x. This Task Wrench Set, T , depends on the various applications of the considered robot, which can be for example to move a camera or other sensors [7, 6, 9, 3], manipulate payloads [15, 1] or simulate walking sensations to a user immersed in virtual reality [21], just to name a few. The Available Wrench Set, A, is the set of wrenches that the mechanism can generate. This set depends on the architecture of the robot, i.e., where the cables are attached on the platform and where the fixed winches are located. It also depends on the configuration pose as well as on the minimum and maximum acceptable tension in the cables. 
All the wrenches that are possibly needed to accomplish a task can",
"title": ""
},
{
"docid": "85cb15ae35a6368c004fde646c486491",
"text": "OBJECTIVES\nThe purposes of this study were to identify age-related changes in objectively recorded sleep patterns across the human life span in healthy individuals and to clarify whether sleep latency and percentages of stage 1, stage 2, and rapid eye movement (REM) sleep significantly change with age.\n\n\nDESIGN\nReview of literature of articles published between 1960 and 2003 in peer-reviewed journals and meta-analysis.\n\n\nPARTICIPANTS\n65 studies representing 3,577 subjects aged 5 years to 102 years.\n\n\nMEASUREMENT\nThe research reports included in this meta-analysis met the following criteria: (1) included nonclinical participants aged 5 years or older; (2) included measures of sleep characteristics by \"all night\" polysomnography or actigraphy on sleep latency, sleep efficiency, total sleep time, stage 1 sleep, stage 2 sleep, slow-wave sleep, REM sleep, REM latency, or minutes awake after sleep onset; (3) included numeric presentation of the data; and (4) were published between 1960 and 2003 in peer-reviewed journals.\n\n\nRESULTS\nIn children and adolescents, total sleep time decreased with age only in studies performed on school days. Percentage of slow-wave sleep was significantly negatively correlated with age. Percentages of stage 2 and REM sleep significantly changed with age. In adults, total sleep time, sleep efficiency, percentage of slow-wave sleep, percentage of REM sleep, and REM latency all significantly decreased with age, while sleep latency, percentage of stage 1 sleep, percentage of stage 2 sleep, and wake after sleep onset significantly increased with age. However, only sleep efficiency continued to significantly decrease after 60 years of age. The magnitudes of the effect sizes noted changed depending on whether or not studied participants were screened for mental disorders, organic diseases, use of drug or alcohol, obstructive sleep apnea syndrome, or other sleep disorders.\n\n\nCONCLUSIONS\nIn adults, it appeared that sleep latency, percentages of stage 1 and stage 2 significantly increased with age while percentage of REM sleep decreased. However, effect sizes for the different sleep parameters were greatly modified by the quality of subject screening, diminishing or even masking age associations with different sleep parameters. The number of studies that examined the evolution of sleep parameters with age are scant among school-aged children, adolescents, and middle-aged adults. There are also very few studies that examined the effect of race on polysomnographic sleep parameters.",
"title": ""
},
{
"docid": "d2eff62f79d07e286a6418c2e2f90bd1",
"text": "It has been believed that stochastic feedforward neural networks (SFNN) have several advantages beyond deterministic deep neural networks (DNN): they have more expressive power allowing multi-modal mappings and regularize better due to their stochastic nature. However, training SFNN is notoriously harder. In this paper, we aim at developing efficient training methods for large-scale SFNN, in particular using known architectures and pre-trained parameters of DNN. To this end, we propose a new intermediate stochastic model, called Simplified-SFNN, which can be built upon any baseline DNN and approximates certain SFNN by simplifying its upper latent units above stochastic ones. The main novelty of our approach is in establishing the connection between three models, i.e., DNN → Simplified-SFNN → SFNN, which naturally leads to an efficient training procedure of the stochastic models utilizing pre-trained parameters of DNN. Using several popular DNNs, we show how they can be effectively transferred to the corresponding stochastic models for both multi-modal and classification tasks on MNIST, TFD, CIFAR-10, CIFAR-100 and SVHN datasets. In particular, our stochastic model built from the wide residual network has 28 layers and 36 million parameters, where the former consistently outperforms the latter for the classification tasks on CIFAR-10 and CIFAR-100 due to its stochastic regularizing effect.",
"title": ""
},
{
"docid": "49679549342ab91cc40e9f7280c3cc6f",
"text": "The prevalence of cardiovascular risk factors, insulin resistance/diabetes and/or uterine pathology appears to be increased in women with polycystic ovarian syndrome (PCOS), although more outcome studies are necessary to determine incidence. Data pertaining to some of the potential long-term health consequences associated with PCOS are summarized. Medline, Current Contents and PubMed were searched for studies from the time of our original interest in this issue in 1980 to the present. The review is limited to published human data. The current literature indicate that women with this syndrome cluster risk factors for premature morbidity and mortality. Large multi-site co-operative studies are necessary to evaluate the long-term health outcomes.",
"title": ""
},
{
"docid": "00962a9505eac94bd64a4e442bcc98dd",
"text": "In this paper is presented the implementation of a compact FPGA-based single-phase cascaded H-bridge multilevel inverter suitable for teaching and research activities. The softwares Matlab/Simulink and Quartus II were used to describe and simulate the PWM algorithm in hardware description language (VHDL), before experimental implementation. A Terasic DE0-Nano board with an Intel Cyclone IV EP4CE22F17C6N FPGA was used to generate the 2.4 kHz PWM switching control signals, which are fed to isolated gate drivers with the HCPL-3180 optocoupler, before being applied to the eight IRF640N MOSFETs in a developed low power prototype inverter board. To validate the proposed inverter, two amplitude modulation indexes were evaluated (0.4 and 0.99) using the phase opposition carriers disposition (POD) technique. Simulation and experimental results to synthesize a three- and a five-level PWM voltage waveform across a resistive and a resistive-inductive-capacitive load show that both are in close agreement, validating the proposed low-cost inverter.",
"title": ""
},
{
"docid": "4e8ab63a4b7fe9f78c89046628237d4d",
"text": "Modeling the structure of coherent texts is a key NLP problem. The task of coherently organizing a given set of sentences has been commonly used to build and evaluate models that understand such structure. We propose an end-to-end unsupervised deep learning approach based on the set-to-sequence framework to address this problem. Our model strongly outperforms prior methods in the order discrimination task and a novel task of ordering abstracts from scientific articles. Furthermore, our work shows that useful text representations can be obtained by learning to order sentences. Visualizing the learned sentence representations shows that the model captures high-level logical structure in paragraphs. Our representations perform comparably to state-of-the-art pre-training methods on sentence similarity and paraphrase detection tasks.",
"title": ""
},
{
"docid": "5d546a8d21859a057d36cdbd3fa7f887",
"text": "In 1984, a prospective cohort study, Coronary Artery Risk Development in Young Adults (CARDIA) was initiated to investigate life-style and other factors that influence, favorably and unfavorably, the evolution of coronary heart disease risk factors during young adulthood. After a year of planning and protocol development, 5,116 black and white women and men, age 18-30 years, were recruited and examined in four urban areas: Birmingham, Alabama; Chicago, Illinois; Minneapolis, Minnesota, and Oakland, California. The initial examination included carefully standardized measurements of major risk factors as well as assessments of psychosocial, dietary, and exercise-related characteristics that might influence them, or that might be independent risk factors. This report presents the recruitment and examination methods as well as the mean levels of blood pressure, total plasma cholesterol, height, weight and body mass index, and the prevalence of cigarette smoking by age, sex, race and educational level. Compared to recent national samples, smoking is less prevalent in CARDIA participants, and weight tends to be greater. Cholesterol levels are representative and somewhat lower blood pressures in CARDIA are probably, at least in part, due to differences in measurement methods. Especially noteworthy among several differences in risk factor levels by demographic subgroup, were a higher body mass index among black than white women and much higher prevalence of cigarette smoking among persons with no more than a high school education than among those with more education.",
"title": ""
},
{
"docid": "49cafb7a5a42b7a8f8260a398c390504",
"text": "With the availability of vast collection of research articles on internet, textual analysis is an increasingly important technique in scientometric analysis. While the context in which it is used and the specific algorithms implemented may vary, typically any textual analysis exercise involves intensive pre-processing of input text which includes removing topically uninteresting terms (stop words). In this paper we argue that corpus specific stop words, which take into account the specificities of a collection of texts, improve textual analysis in scientometrics. We describe two relatively simple techniques to generate corpus-specific stop words; stop words lists following a Poisson distribution and keyword adjacency stop words lists. In a case study to extract keywords from scientific abstracts of research project funded by the European Research Council in the domain of Life sciences, we show that a combination of those techniques gives better recall values than standard stop words or any of the two techniques alone. The method we propose can be implemented to obtain stop words lists in an automatic way by using author provided keywords for a set of abstracts. The stop words lists generated can be updated easily by adding new texts to the training corpus. Conference Topic Methods and techniques",
"title": ""
},
{
"docid": "61e75fb597438712098c2b6d4b948558",
"text": "Impact of occupational stress on employee performance has been recognized as an important area of concern for organizations. Negative stress affects the physical and mental health of the employees that in turn affects their performance on job. Research into the relationship between stress and job performance has been neglected in the occupational stress literature (Jex, 1998). It is therefore significant to understand different Occupational Stress Inducers (OSI) on one hand and their impact on different aspects of job performance on the other. This article reviews the available literature to understand the phenomenon so as to develop appropriate stress management strategies to not only save the employees from variety of health problems but to improve their performance and the performance of the organization. 35 Occupational Stress Inducers (OSI) were identified through a comprehensive review of articles and reports published in the literature of management and allied disciplines between 1990 and 2014. A conceptual model is proposed towards the end to study the impact of stress on employee job performance. The possible data analysis techniques are also suggested providing direction for future research.",
"title": ""
}
] | scidocsrr |
a72d8fd6002882fe7456554484225884 | Fine-Grained Control-Flow Integrity Through Binary Hardening | [
{
"docid": "094f784bfb5ad7cfeb52891242dfc38b",
"text": "Code diversification has been proposed as a technique to mitigate code reuse attacks, which have recently become the predominant way for attackers to exploit memory corruption vulnerabilities. As code reuse attacks require detailed knowledge of where code is in memory, diversification techniques attempt to mitigate these attacks by randomizing what instructions are executed and where code is located in memory. As an attacker cannot read the diversified code, it is assumed he cannot reliably exploit the code.\n In this paper, we show that the fundamental assumption behind code diversity can be broken, as executing the code reveals information about the code. Thus, we can leak information without needing to read the code. We demonstrate how an attacker can utilize a memory corruption vulnerability to create side channels that leak information in novel ways, removing the need for a memory disclosure vulnerability. We introduce seven new classes of attacks that involve fault analysis and timing side channels, where each allows a remote attacker to learn how code has been diversified.",
"title": ""
},
{
"docid": "82e6533bf92395a008a024e880ef61b1",
"text": "A new binary software randomization and ControlFlow Integrity (CFI) enforcement system is presented, which is the first to efficiently resist code-reuse attacks launched by informed adversaries who possess full knowledge of the inmemory code layout of victim programs. The defense mitigates a recent wave of implementation disclosure attacks, by which adversaries can exfiltrate in-memory code details in order to prepare code-reuse attacks (e.g., Return-Oriented Programming (ROP) attacks) that bypass fine-grained randomization defenses. Such implementation-aware attacks defeat traditional fine-grained randomization by undermining its assumption that the randomized locations of abusable code gadgets remain secret. Opaque CFI (O-CFI) overcomes this weakness through a novel combination of fine-grained code-randomization and coarsegrained control-flow integrity checking. It conceals the graph of hijackable control-flow edges even from attackers who can view the complete stack, heap, and binary code of the victim process. For maximal efficiency, the integrity checks are implemented using instructions that will soon be hardware-accelerated on commodity x86-x64 processors. The approach is highly practical since it does not require a modified compiler and can protect legacy binaries without access to source code. Experiments using our fully functional prototype implementation show that O-CFI provides significant probabilistic protection against ROP attacks launched by adversaries with complete code layout knowledge, and exhibits only 4.7% mean performance overhead on current hardware (with further overhead reductions to follow on forthcoming Intel processors). I. MOTIVATION Code-reuse attacks (cf., [5]) have become a mainstay of software exploitation over the past several years, due to the rise of data execution protections that nullify traditional codeinjection attacks. Rather than injecting malicious payload code directly onto the stack or heap, where modern data execution protections block it from being executed, attackers now ingeniously inject addresses of existing in-memory code fragments (gadgets) onto victim stacks, causing the victim process to execute its own binary code in an unanticipated order [38]. With a sufficiently large victim code section, the pool of exploitable gadgets becomes arbitrarily expressive (e.g., Turing-complete) [20], facilitating the construction of arbitrary attack payloads without the need for code-injection. Such payload construction has even been automated [34]. As a result, code-reuse has largely replaced code-injection as one of the top software security threats. Permission to freely reproduce all or part of this paper for noncommercial purposes is granted provided that copies bear this notice and the full citation on the first page. Reproduction for commercial purposes is strictly prohibited without the prior written consent of the Internet Society, the first-named author (for reproduction of an entire paper only), and the author’s employer if the paper was prepared within the scope of employment. NDSS ’15, 8–11 February 2015, San Diego, CA, USA Copyright 2015 Internet Society, ISBN 1-891562-38-X http://dx.doi.org/10.14722/ndss.2015.23271 This has motivated copious work on defenses against codereuse threats. Prior defenses can generally be categorized into: CFI [1] and artificial software diversity [8]. CFI restricts all of a program’s runtime control-flows to a graph of whitelisted control-flow edges. 
Usually the graph is derived from the semantics of the program source code or a conservative disassembly of its binary code. As a result, CFI-protected programs reject control-flow hijacks that attempt to traverse edges not supported by the original program’s semantics. Fine-grained CFI monitors indirect control-flows precisely; for example, function callees must return to their exact callers. Although such precision provides the highest security, it also tends to incur high performance overheads (e.g., 21% for precise caller-callee return-matching [1]). Because this overhead is often too high for industry adoption, researchers have proposed many optimized, coarser-grained variants of CFI. Coarse-grained CFI trades some security for better performance by reducing the precision of the checks. For example, functions must return to valid call sites (but not necessarily to the particular site that invoked the callee). Unfortunately, such relaxations have proved dangerous—a number of recent proof-of-concept exploits have shown how even minor relaxations of the control-flow policy can be exploited to effect attacks [6, 11, 18, 19]. Table I summarizes the impact of several of these recent exploits. Artificial software diversity offers a different but complementary approach that randomizes programs in such a way that attacks succeeding against one program instance have a very low probability of success against other (independently randomized) instances of the same program. Probabilistic defenses rely on memory secrecy—i.e., the effects of randomization must remain hidden from attackers. One of the simplest and most widely adopted forms of artificial diversity is Address Space Layout Randomization (ASLR), which randomizes the base addresses of program segments at load-time. Unfortunately, merely randomizing the base addresses does not yield sufficient entropy to preserve memory secrecy in many cases; there are numerous successful derandomization attacks against ASLR [13, 26, 36, 37, 39, 42]. Finer-grained diversity techniques obtain exponentially higher entropy by randomizing the relative distances between all code points. For example, binary-level Self-Transforming Instruction Relocation (STIR) [45] and compilers with randomized code-generation (e.g., [22]) have both realized fine-grained artificial diversity for production-level software at very low overheads. Recently, a new wave of implementation disclosure attacks [4, 10, 35, 40] have threatened to undermine fine-grained artificial diversity defenses. Implementation disclosure attacks exploit information leak vulnerabilities to read memory pages of victim processes at the discretion of the attacker. By reading the
This paper presents Opaque CFI (O-CFI): a new approach to coarse-grained CFI that strengthens fine-grained artificial diversity to withstand implementation disclosure attacks. The heart of O-CFI is a new form of control-flow check that conceals the graph of abusable control-flow edges even from attackers who have complete read-access to the randomized binary code, the stack, and the heap of victim processes. Such access only affords attackers knowledge of the intended (and therefore nonabusable) edges of the control-flow graph, not the edges left unprotected by the coarse-grained CFI implementation. Artificial diversification is employed to vary the set of unprotected edges between program instances, maintaining the probabilistic guarantees of fine-grained diversity. Experiments show that O-CFI enjoys performance overheads comparable to standard fine-grained diversity and non-opaque, coarse-grained CFI. Moreover, O-CFI’s control-flow checking logic is implemented using Intel x86/x64 memory-protection extensions (MPX) that are expected to be hardware-accelerated in commodity CPUs from 2015 onwards. We therefore expect even better performance for O-CFI in the near future. Our contributions are as follows: • We introduce O-CFI, the first low-overhead code-reuse defense that tolerates implementation disclosures. • We describe our implementation of a fully functional prototype that protects stripped, x86 legacy binaries without source code. • Analysis shows that O-CFI provides quantifiable security against state-of-the-art exploits—including JITROP [40] and Blind-ROP [4]. • Performance evaluation yields competitive overheads of just 4.7% for computation-intensive programs. II. THREAT MODEL Our work is motivated by the emergence of attacks against fine-grained diversity and coarse-grained control-flow integrity. We therefore introduce these attacks and distill them into a single, unified threat model. A. Bypassing Coarse-Grained CFI Ideally, CFI permits only programmer-intended control-flow transfers during a program’s execution. The typical approach is to assign a unique ID to each permissible indirect controlflow target, and check the IDs at runtime. Unfortunately, this introduces performance overhead proportional to the degree of the graph—the more overlaps between valid target sets of indirect branch instructions, the more IDs must be stored and checked at each branch. Moreover, perfect CFI cannot be realized with a purely static control-flow graph; for example, the permissible destinations of function returns depend on the calling context, which is only known at runtime. Fine-grained CFI therefore implements a dynamically computed shadow stack, incurring high overheads [1]. To avoid this, coarse-grained CFI implementations resort to a reduced-degree, static approximation of the control-flow graph, and merge identifiers at the cost of reduced security. For example, bin-CFI [49] and CCFIR [50] use at most three IDs per branch, and omit shadow stacks. Recent work has demonstrated that these optimizations open exploitable",
"title": ""
}
] | [
{
"docid": "4fc356024295824f6c68360bf2fcb860",
"text": "Detecting depression is a key public health challenge, as almost 12% of all disabilities can be attributed to depression. Computational models for depression detection must prove not only that can they detect depression, but that they can do it early enough for an intervention to be plausible. However, current evaluations of depression detection are poor at measuring model latency. We identify several issues with the currently popular ERDE metric, and propose a latency-weighted F1 metric that addresses these concerns. We then apply this evaluation to several models from the recent eRisk 2017 shared task on depression detection, and show how our proposed measure can better capture system differences.",
"title": ""
},
{
"docid": "d1cde8ce9934723224ecf21c3cab6615",
"text": "Deep Neural Networks (DNNs) denote multilayer artificial neural networks with more than one hidden layer and millions of free parameters. We propose a Generalized Discriminant Analysis (GerDA) based on DNNs to learn discriminative features of low dimension optimized with respect to a fast classification from a large set of acoustic features for emotion recognition. On nine frequently used emotional speech corpora, we compare the performance of GerDA features and their subsequent linear classification with previously reported benchmarks obtained using the same set of acoustic features classified by Support Vector Machines (SVMs). Our results impressively show that low-dimensional GerDA features capture hidden information from the acoustic features leading to a significantly raised unweighted average recall and considerably raised weighted average recall.",
"title": ""
},
{
"docid": "659b1c167f0778c825788710237da569",
"text": "Voice conversion methods based on frequency warping followed by amplitude scaling have been recently proposed. These methods modify the frequency axis of the source spectrum in such manner that some significant parts of it, usually the formants, are moved towards their image in the target speaker's spectrum. Amplitude scaling is then applied to compensate for the differences between warped source spectra and target spectra. This article presents a fully parametric formulation of a frequency warping plus amplitude scaling method in which bilinear frequency warping functions are used. Introducing this constraint allows for the conversion error to be described in the cepstral domain and to minimize it with respect to the parameters of the transformation through an iterative algorithm, even when multiple overlapping conversion classes are considered. The paper explores the advantages and limitations of this approach when applied to a cepstral representation of speech. We show that it achieves significant improvements in quality with respect to traditional methods based on Gaussian mixture models, with no loss in average conversion accuracy. Despite its relative simplicity, it achieves similar performance scores to state-of-the-art statistical methods involving dynamic features and global variance.",
"title": ""
},
{
"docid": "40495cc96353f56481ed30f7f5709756",
"text": "This paper reported the construction of partial discharge measurement system under influence of cylindrical metal particle in transformer oil. The partial discharge of free cylindrical metal particle in the uniform electric field under AC applied voltage was studied in this paper. The partial discharge inception voltage (PDIV) for the single particle was measure to be 11kV. The typical waveform of positive PD and negative PD was also obtained. The result shows that the magnitude of negative PD is higher compared to positive PD. The observation on cylindrical metal particle movement revealed that there were a few stages of motion process involved.",
"title": ""
},
{
"docid": "06f4ec7c6425164ee7fc38a8b26b8437",
"text": "In this paper we present a decomposition strategy for solving large scheduling problems using mathematical programming methods. Instead of formulating one huge and unsolvable MILP problem, we propose a decomposition scheme that generates smaller programs that can often be solved to global optimality. The original problem is split into subproblems in a natural way using the special features of steel making and avoiding the need for expressing the highly complex rules as explicit constraints. We present a small illustrative example problem, and several real-world problems to demonstrate the capabilities of the proposed strategy, and the fact that the solutions typically lie within 1-3% of the global optimum.",
"title": ""
},
{
"docid": "1348ee3316643f4269311b602b71d499",
"text": "This paper describes our proposed solution for SemEval 2017 Task 1: Semantic Textual Similarity (Daniel Cer and Specia, 2017). The task aims at measuring the degree of equivalence between sentences given in English. Performance is evaluated by computing Pearson Correlation scores between the predicted scores and human judgements. Our proposed system consists of two subsystems and one regression model for predicting STS scores. The two subsystems are designed to learn Paraphrase and Event Embeddings that can take the consideration of paraphrasing characteristics and sentence structures into our system. The regression model associates these embeddings to make the final predictions. The experimental result shows that our system acquires 0.8 of Pearson Correlation Scores in this task.",
"title": ""
},
{
"docid": "7aefad1e65b946a3149897c65b9c3fad",
"text": "A touch-less interaction technology on vision based wearable device is designed and evaluated. Users interact with the application with dynamic hands/feet gestures in front of the camera. Several proof-of-concept prototypes with eleven dynamic gestures are developed based on the touch-less interaction. At last, a comparing user study evaluation is proposed to demonstrate the usability of the touch-less approach, as well as the impact on user's emotion, running on a wearable framework or Google Glass.",
"title": ""
},
{
"docid": "dacb4491a0cf1e05a2972cc1a82a6c62",
"text": "Human parechovirus type 3 (HPeV3) can cause serious conditions in neonates, such as sepsis and encephalitis, but data for adults are lacking. The case of a pregnant woman with HPeV3 infection is reported herein. A 28-year-old woman at 36 weeks of pregnancy was admitted because of myalgia and muscle weakness. Her grip strength was 6.0kg for her right hand and 2.5kg for her left hand. The patient's symptoms, probably due to fasciitis and not myositis, improved gradually with conservative treatment, however labor pains with genital bleeding developed unexpectedly 3 days after admission. An obstetric consultation was obtained and a cesarean section was performed, with no complications. A real-time PCR assay for the detection of viral genomic ribonucleic acid against HPeV showed positive results for pharyngeal swabs, feces, and blood, and negative results for the placenta, umbilical cord, umbilical cord blood, amniotic fluid, and breast milk. The HPeV3 was genotyped by sequencing of the VP1 region. The woman made a full recovery and was discharged with her infant in a stable condition.",
"title": ""
},
{
"docid": "715de052c6a603e3c8a572531920ecfa",
"text": "Muscle samples were obtained from the gastrocnemius of 17 female and 23 male track athletes, 10 untrained women, and 11 untrained men. Portions of the specimen were analyzed for total phosphorylase, lactic dehydrogenase (LDH), and succinate dehydrogenase (SDH) activities. Sections of the muscle were stained for myosin adenosine triphosphatase, NADH2 tetrazolium reductase, and alpha-glycerophosphate dehydrogenase. Maximal oxygen uptake (VO2max) was measured on a treadmill for 23 of the volunteers (6 female athletes, 11 male athletes, 10 untrained women, and 6 untrained men). These measurements confirm earlier reports which suggest that the athlete's preference for strength, speed, and/or endurance events is in part a matter of genetic endowment. Aside from differences in fiber composition and enzymes among middle-distance runners, the only distinction between the sexes was the larger fiber areas of the male athletes. SDH activity was found to correlate 0.79 with VO2max, while muscle LDH appeared to be a function of muscle fiber composition. While sprint- and endurance-trained athletes are characterized by distinct fiber compositions and enzyme activities, participants in strength events (e.g., shot-put) have relatively low muscle enzyme activities and a variety of fiber compositions.",
"title": ""
},
{
"docid": "466b1e13c9c94f83bbacb740def7416b",
"text": "High service quality is imperative and important for competitiveness of service industry. In order to provide much quality service, a deeper research on service quality models is necessary. There are plenty of service quality models which enable managers and practitioners to identify quality problems and improve the efficiency and profitability of overall performance. One of the most influential models in the service quality literature is the model of service quality gaps. In this paper, the model of service quality gaps has been critically reviewed and developed in order to make it more comprehensive. The developed model has been verified based using a survey on 16 experts. Compared to the traditional models, the proposed model involves five additional components and eight additional gaps.",
"title": ""
},
{
"docid": "641754ee9332e1032838d0dba7712607",
"text": "Medication administration is an increasingly complex process, influenced by the number of medications on the market, the number of medications prescribed for each patient, new medical technology and numerous administration policies and procedures. Adverse events initiated by medication error are a crucial area to improve patient safety. This project looked at the complexity of the medication administration process at a regional hospital and the effect of two medication distribution systems. A reduction in work complexity and time spent gathering medication and supplies, was a goal of this work; but more importantly was determining what barriers to safety and efficiency exist in the medication administration process and the impact of barcode scanning and other technologies. The concept of mobile medication units is attractive to both managers and clinicians; however it is only one solution to the problems with medication administration. Introduction and Background Medication administration is an increasingly complex process, influenced by the number of medications on the market, the number of medications prescribed for each patient, and the numerous policies and procedures created for their administration. Mayo and Duncan (2004) found that a “single [hospital] patient can receive up to 18 medications per day, and a nurse can administer as many as 50 medications per shift” (p. 209). While some researchers indicated that the solution is more nurse education or training (e.g. see Mayo & Duncan, 2004; and Tang, Sheu, Yu, Wei, & Chen, 2007), it does not appear that they have determined the feasibility of this solution and the increased time necessary to look up every unfamiliar medication. Most of the research which focuses on the causes of medication errors does not examine the processes involved in the administration of the medication. And yet, understanding the complexity in the nurses’ processes and workflow is necessary to develop safeguards and create more robust systems that reduce the probability of errors and adverse events. Current medication administration processes include many \\ tasks, including but not limited to, assessing the patient to obtain pertinent data, gathering medications, confirming the five rights (right dose, patient, route, medication, and time), administering the medications, documenting administration, and observing for therapeutic and untoward effects. In studies of the delivery of nursing care in acute care settings, Potter et al. (2005) found that nurses spent 16% their time preparing or administering medication. In addition to the amount of time that the nurses spent in preparing and administering medication, Potter et al found that a significant number of interruptions occurred during this critical process. Interruptions impact the cognitive workload of the nurse, and create an environment where medication errors are more likely to occur. A second environmental factor that affects the nurses’ workflow, is the distance traveled to administer care during a shift. Welker, Decker, Adam, & Zone-Smith (2006) found that on average, ward nurses who were assigned three patients walked just over 4.1 miles per shift while a nurse assigned to six patients walked over 4.8 miles. 
As a large number of interruptions (22%) occurred within the medication rooms, which were highly visible and in high traffic locations (Potter et al., 2005), and while collecting supplies or traveling to and from patient rooms (Ebright, Patterson, Chalko, & Render, 2003), reducing the distances and frequency of repeated travel could have the ability to decrease the number of interruptions and possibly errors in medication administration. Adding new technology, revising policies and procedures, and providing more education have often been the approaches taken to reduce medication errors. Unfortunately these new technologies, such as computerized order entry and electronic medical records / charting, and new procedures, for instance bar code scanning both the medicine and the patient, can add complexity to the nurse’s taskload. The added complexity in correspondence with the additional time necessary to complete the additional steps can lead to workarounds and variations in care. Given the problems in the current medication administration processes, this work focused on facilitating the nurse’s role in the medication administration process. This study expands on the Braswell and Duggar (2006) investigation and compares processes at baseline and postintroduction of a new mobile medication system. To do this, the current medication administration and distribution process was fully documented to determine a baseline in workload complexity. Then a new mobile medication center was installed to allow nurses easier access to patient medications while traveling on the floor, and the medication administration and distribution process was remapped to demonstrate where process complexities were reduced and nurse workflow is more efficient. A similar study showed that the time nurses spend gathering medications and supplies can be dramatically reduced through this type of system (see Braswell & Duggar, 2006); however, they did not directly investigate the impact on the nursing process. Thus, this research is presented to document the impact of this technology on the nursing workflow at a regional hospital, and as an expansion on the work begun by Braswell and Duggar.",
"title": ""
},
{
"docid": "6e7d629c5dd111df1064b969755863ef",
"text": "Recently proposed universal filtered multicarrier (UFMC) system is not an orthogonal system in multipath channel environments and might cause significant performance loss. In this paper, the authors propose a cyclic prefix (CP) based UFMC system and first analyze the conditions for interference-free one-tap equalization in the absence of transceiver imperfections. Then the corresponding signal model and output signal-to-noise ratio expression are derived. In the presence of carrier frequency offset, timing offset, and insufficient CP length, the authors establish an analytical system model as a summation of desired signal, intersymbol interference, intercarrier interference, and noise. New channel equalization algorithms are proposed based on the derived analytical signal model. Numerical results show that the derived model matches the simulation results precisely, and the proposed equalization algorithms improve the UFMC system performance in terms of bit error rate.",
"title": ""
},
{
"docid": "0c57dd3ce1f122d3eb11a98649880475",
"text": "Insulin resistance plays a major role in the pathogenesis of the metabolic syndrome and type 2 diabetes, and yet the mechanisms responsible for it remain poorly understood. Magnetic resonance spectroscopy studies in humans suggest that a defect in insulin-stimulated glucose transport in skeletal muscle is the primary metabolic abnormality in insulin-resistant patients with type 2 diabetes. Fatty acids appear to cause this defect in glucose transport by inhibiting insulin-stimulated tyrosine phosphorylation of insulin receptor substrate-1 (IRS-1) and IRS-1-associated phosphatidylinositol 3-kinase activity. A number of different metabolic abnormalities may increase intramyocellular and intrahepatic fatty acid metabolites; these include increased fat delivery to muscle and liver as a consequence of either excess energy intake or defects in adipocyte fat metabolism, and acquired or inherited defects in mitochondrial fatty acid oxidation. Understanding the molecular and biochemical defects responsible for insulin resistance is beginning to unveil novel therapeutic targets for the treatment of the metabolic syndrome and type 2 diabetes.",
"title": ""
},
{
"docid": "e1651c1f329b8caa53e5322be5bf700b",
"text": "Personalized curriculum sequencing is an important research issue for web-based learning systems because no fixed learning paths will be appropriate for all learners. Therefore, many researchers focused on developing e-learning systems with personalized learning mechanisms to assist on-line web-based learning and adaptively provide learning paths in order to promote the learning performance of individual learners. However, most personalized e-learning systems usually neglect to consider if learner ability and the difficulty level of the recommended courseware are matched to each other while performing personalized learning services. Moreover, the problem of concept continuity of learning paths also needs to be considered while implementing personalized curriculum sequencing because smooth learning paths enhance the linked strength between learning concepts. Generally, inappropriate courseware leads to learner cognitive overload or disorientation during learning processes, thus reducing learning performance. Therefore, compared to the freely browsing learning mode without any personalized learning path guidance used in most web-based learning systems, this paper assesses whether the proposed genetic-based personalized e-learning system, which can generate appropriate learning paths according to the incorrect testing responses of an individual learner in a pre-test, provides benefits in terms of learning performance promotion while learning. Based on the results of pre-test, the proposed genetic-based personalized e-learning system can conduct personalized curriculum sequencing through simultaneously considering courseware difficulty level and the concept continuity of learning paths to support web-based learning. Experimental results indicated that applying the proposed genetic-based personalized e-learning system for web-based learning is superior to the freely browsing learning mode because of high quality and concise learning path for individual learners. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "51505087f5ae1a9f57fe04f5e9ad241e",
"text": "Microblogs have recently received widespread interest from NLP researchers. However, current tools for Japanese word segmentation and POS tagging still perform poorly on microblog texts. We developed an annotated corpus and proposed a joint model for overcoming this situation. Our annotated corpus of microblog texts enables not only training of accurate statistical models but also quantitative evaluation of their performance. Our joint model with lexical normalization handles the orthographic diversity of microblog texts. We conducted an experiment to demonstrate that the corpus and model substantially contribute to boosting accuracy.",
"title": ""
},
{
"docid": "5a0cfbd3d8401d4d8e437ec1a1e9458f",
"text": "Ehlers-Danlos syndrome is an inherited heterogeneous group of connective tissue disorders, characterized by abnormal collagen synthesis, affecting skin, ligaments, joints, blood vessels and other organs. It is one of the oldest known causes of bruising and bleeding and was first described by Hipprocrates in 400 BC. Edvard Ehlers, in 1901, recognized the condition as a distinct entity. In 1908, Henri-Alexandre Danlos suggested that skin extensibility and fragility were the cardinal features of the syndrome. In 1998, Beighton published the classification of Ehlers-Danlos syndrome according to the Villefranche nosology. From the 1960s the genetic make up was identified. Management of bleeding problems associated with Ehlers-Danlos has been slow to progress.",
"title": ""
},
{
"docid": "b9cf32ef9364f55c5f59b4c6a9626656",
"text": "Graph-based methods have gained attention in many areas of Natural Language Processing (NLP) including Word Sense Disambiguation (WSD), text summarization, keyword extraction and others. Most of the work in these areas formulate their problem in a graph-based setting and apply unsupervised graph clustering to obtain a set of clusters. Recent studies suggest that graphs often exhibit a hierarchical structure that goes beyond simple flat clustering. This paper presents an unsupervised method for inferring the hierarchical grouping of the senses of a polysemous word. The inferred hierarchical structures are applied to the problem of word sense disambiguation, where we show that our method performs significantly better than traditional graph-based methods and agglomerative clustering yielding improvements over state-of-the-art WSD systems based on sense induction.",
"title": ""
},
{
"docid": "afe2bc204458117fb278ef500b485ea1",
"text": "PURPOSE\nTitanium based implant systems, though considered as the gold standard for rehabilitation of edentulous spaces, have been criticized for many inherent flaws. The onset of hypersensitivity reactions, biocompatibility issues, and an unaesthetic gray hue have raised demands for more aesthetic and tissue compatible material for implant fabrication. Zirconia is emerging as a promising alternative to conventional Titanium based implant systems for oral rehabilitation with superior biological, aesthetics, mechanical and optical properties. This review aims to critically analyze and review the credibility of Zirconia implants as an alternative to Titanium for prosthetic rehabilitation.\n\n\nSTUDY SELECTION\nThe literature search for articles written in the English language in PubMed and Cochrane Library database from 1990 till December 2016. The following search terms were utilized for data search: \"zirconia implants\" NOT \"abutment\", \"zirconia implants\" AND \"titanium implants\" AND \"osseointegration\", \"zirconia implants\" AND compatibility.\n\n\nRESULTS\nThe number of potential relevant articles selected were 47. All the human in vivo clinical, in vitro, animals' studies were included and discussed under the following subheadings: Chemical composition, structure and phases; Physical and mechanical properties; Aesthetic and optical properties; Osseointegration and biocompatibility; Surface modifications; Peri-implant tissue compatibility, inflammation and soft tissue healing, and long-term prognosis.\n\n\nCONCLUSIONS\nZirconia implants are a promising alternative to titanium with a superior soft-tissue response, biocompatibility, and aesthetics with comparable osseointegration. However, further long-term longitudinal and comparative clinical trials are required to validate zirconia as a viable alternative to the titanium implant.",
"title": ""
},
{
"docid": "2aa5f065e63a9bc0e24f74d4a37a7ea6",
"text": "Dataflow programming models are suitable to express multi-core streaming applications. The design of high-quality embedded systems in that context requires static analysis to ensure the liveness and bounded memory of the application. However, many streaming applications have a dynamic behavior. The previously proposed dataflow models for dynamic applications do not provide any static guarantees or only in exchange of significant restrictions in expressive power or automation. To overcome these restrictions, we propose the schedulable parametric dataflow (SPDF) model. We present static analyses and a quasi-static scheduling algorithm. We demonstrate our approach using a video decoder case study.",
"title": ""
},
{
"docid": "cc08e377d924f86fb6ceace022ad8db2",
"text": "Homomorphic cryptography has been one of the most interesting topics of mathematics and computer security since Gentry presented the first construction of a fully homomorphic encryption (FHE) scheme in 2009. Since then, a number of different schemes have been found, that follow the approach of bootstrapping a fully homomorphic scheme from a somewhat homomorphic foundation. All existing implementations of these systems clearly proved, that fully homomorphic encryption is not yet practical, due to significant performance limitations. However, there are many applications in the area of secure methods for cloud computing, distributed computing and delegation of computation in general, that can be implemented with homomorphic encryption schemes of limited depth. We discuss a simple algebraically homomorphic scheme over the integers that is based on the factorization of an approximate semiprime integer. We analyze the properties of the scheme and provide a couple of known protocols that can be implemented with it. We also provide a detailed discussion on searching with encrypted search terms and present implementations and performance figures for the solutions discussed in this paper.",
"title": ""
}
] | scidocsrr |
589dd8ec4b0dc8dfb5b49d0b78948084 | Information Filtering and Information Retrieval: Two Sides of the Same Coin? | [
{
"docid": "7bbfe54476196292fabddcf0bda734eb",
"text": "Gerard Salton has worked on a number of non-numeric computer applications including automatic information retrieval. He has published a large number of articles and several books on information retrieval, most recently, \"Introduction to Modern Information Retrieval,\" 1983. Edward Fox is currently assistant professor of computer science at VPI. Harry Wu is currently involved in the comparative study of relational database systems. His main interest is in information storage and retrieval. Authors' Present Addresses: G. Salton, Dept. of Computer Science, Cornell Univ., Ithaca, NY 14853; E. A. Fox, Virginia Polytechnic Institute and State Univ., Blacksburg, VA 24061; H. Wu, ITT--Programming Technology Center, lO00 Oronogue Lane, Stratford, CT 06497. This study was supported in part by the National Science Foundation under Grant IST-8108696. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1983 ACM 0001-0782/83/1100-1022 75¢ 1. CONVENTIONAL RETRIEVAL STRATEGIES In conventional information retrieval, the stored records are normally identified by sets of key words or",
"title": ""
},
{
"docid": "38dfeb7a0b906ec9894d2e03b56ad6e2",
"text": "This article reviews recent research into the use of hierarchic agglomerative clustering methods for document retrieval. After an introduction to the calculation of interdocument similarities and to clustering methods that are appropriate for document clustering, the article discusses algorithms that can be used to allow the implementation of these methods on databases of nontrivial size. The validation of document hierarchies is described using tests based on the theory of random graphs and on empirical characteristics of document collections that are to be clustered. A range of search strategies is available for retrieval from document hierarchies and the results are presented of a series of research projects that have used these strategies to search the clusters resulting from several different types of hierarchic agglomerative clustering method. It is suggested that the complete linkage method is probably the most effective method in terms of retrieval performance; however, it is also difficult to implement in an efficient manner. Other applications of document clustering techniques are discussed briefly; experimental evidence suggests that nearest neighbor clusters, possibly represented as a network model, provide a reasonably efficient and effective means of including interdocument similarity information in document retrieval systems.",
"title": ""
}
] | [
{
"docid": "90907753fd2c69c97088d333079fbb56",
"text": "This paper concerns the problem of pose estimation for an inertial-visual sensor. It is well known that IMU bias, and calibration errors between camera and IMU frames can impair the achievement of high-quality estimates through the fusion of visual and inertial data. The main contribution of this work is the design of new observers to estimate pose, IMU bias and camera-to-IMU rotation. The observers design relies on an extension of the so-called passive complementary filter on SO(3). Stability of the observers is established using Lyapunov functions under adequate observability conditions. Experimental results are presented to assess this approach.",
"title": ""
},
{
"docid": "c04e0adee5e0be3ac204733b3513cc5c",
"text": "There is an ongoing debate over the capabilities of hierarchical neural feedforward architectures for performing real-world invariant object recognition. Although a variety of hierarchical models exists, appropriate supervised and unsupervised learning methods are still an issue of intense research. We propose a feedforward model for recognition that shares components like weight sharing, pooling stages, and competitive nonlinearities with earlier approaches but focuses on new methods for learning optimal feature-detecting cells in intermediate stages of the hierarchical network. We show that principles of sparse coding, which were previously mostly applied to the initial feature detection stages, can also be employed to obtain optimized intermediate complex features. We suggest a new approach to optimize the learning of sparse features under the constraints of a weight-sharing or convolutional architecture that uses pooling operations to achieve gradual invariance in the feature hierarchy. The approach explicitly enforces symmetry constraints like translation invariance on the feature set. This leads to a dimension reduction in the search space of optimal features and allows determining more efficiently the basis representatives, which achieve a sparse decomposition of the input. We analyze the quality of the learned feature representation by investigating the recognition performance of the resulting hierarchical network on object and face databases. We show that a hierarchy with features learned on a single object data set can also be applied to face recognition without parameter changes and is competitive with other recent machine learning recognition approaches. To investigate the effect of the interplay between sparse coding and processing nonlinearities, we also consider alternative feedforward pooling nonlinearities such as presynaptic maximum selection and sum-of-squares integration. The comparison shows that a combination of strong competitive nonlinearities with sparse coding offers the best recognition performance in the difficult scenario of segmentation-free recognition in cluttered surround. We demonstrate that for both learning and recognition, a precise segmentation of the objects is not necessary.",
"title": ""
},
{
"docid": "512c0d3d9ad6d6a4d139a5e7e0bd3a4e",
"text": "The epidermal growth factor receptor (EGFR) contributes to the pathogenesis of head&neck squamous cell carcinoma (HNSCC). However, only a subset of HNSCC patients benefit from anti-EGFR targeted therapy. By performing an unbiased proteomics screen, we found that the calcium-activated chloride channel ANO1 interacts with EGFR and facilitates EGFR-signaling in HNSCC. Using structural mutants of EGFR and ANO1 we identified the trans/juxtamembrane domain of EGFR to be critical for the interaction with ANO1. Our results show that ANO1 and EGFR form a functional complex that jointly regulates HNSCC cell proliferation. Expression of ANO1 affected EGFR stability, while EGFR-signaling elevated ANO1 protein levels, establishing a functional and regulatory link between ANO1 and EGFR. Co-inhibition of EGFR and ANO1 had an additive effect on HNSCC cell proliferation, suggesting that co-targeting of ANO1 and EGFR could enhance the clinical potential of EGFR-targeted therapy in HNSCC and might circumvent the development of resistance to single agent therapy. HNSCC cell lines with amplification and high expression of ANO1 showed enhanced sensitivity to Gefitinib, suggesting ANO1 overexpression as a predictive marker for the response to EGFR-targeting agents in HNSCC therapy. Taken together, our results introduce ANO1 as a promising target and/or biomarker for EGFR-directed therapy in HNSCC.",
"title": ""
},
{
"docid": "8565471c18407fc0741548d11d44a7d2",
"text": "This study evaluated the clinical efficacy of 2% chlorhexidine (CHX) gel on intracanal bacteria reduction during root canal instrumentation. The additional antibacterial effect of an intracanal dressing (Ca[OH](2) mixed with 2% CHX gel) was also assessed. Forty-three patients with apical periodontitis were recruited. Four patients with irreversible pulpitis were included as negative controls. Teeth were instrumented using rotary instruments and 2% CHX gel as the disinfectant. Bacterial samples were taken upon access (S1), after instrumentation (S2), and after 2 weeks of intracanal dressing (S3). Anaerobic culture was performed. Four samples showed no bacteria growth at S1, which were excluded from further analysis. Of the samples cultured positively at S1, 10.3% (4/39) and 8.3% (4/36) sampled bacteria at S2 and S3, respectively. A significant difference in the percentage of positive culture between S1 and S2 (p < 0.001) but not between S2 and S3 (p = 0.692) was found. These results suggest that 2% CHX gel is an effective root canal disinfectant and additional intracanal dressing did not significantly improve the bacteria reduction on the sampled root canals.",
"title": ""
},
{
"docid": "ad606470b92b50eae9b0f729968cde7a",
"text": "It is projected that increasing on-chip integration with technology scaling will lead to the so-called dark silicon era in which more transistors are available on a chip than can be simultaneously powered on. It is conventionally assumed that the dark silicon will be provisioned with heterogeneous resources, for example dedicated hardware accelerators. In this paper we challenge the conventional assumption and build a case for homogeneous dark silicon CMPs that exploit the inherent variations in process parameters that exist in scaled technologies to offer increased performance. Since process variations result in core-to-core variations in power and frequency, the idea is to cherry pick the best subset of cores for an application so as to maximize performance within the power budget. To this end, we propose a polynomial time algorithm for optimal core selection, thread mapping and frequency assignment for a large class of multi-threaded applications. Our experimental results based on the Sniper multi-core simulator show that up to 22% and 30% performance improvement is observed for homogeneous CMPs with 33% and 50% dark silicon, respectively.",
"title": ""
},
{
"docid": "24a472d66dcffa1a871c277d679c4ef1",
"text": "Artificial intelligence (AI) is the core technology of technological revolution and industrial transformation. As one of the new intelligent needs in the AI 2.0 era, financial intelligence has elicited much attention from the academia and industry. In our current dynamic capital market, financial intelligence demonstrates a fast and accurate machine learning capability to handle complex data and has gradually acquired the potential to become a \"financial brain\". In this work, we survey existing studies on financial intelligence. First, we describe the concept of financial intelligence and elaborate on its position in the financial technology field. Second, we introduce the development of financial intelligence and review state-of-the-art techniques in wealth management, risk management, financial security, financial consulting, and blockchain. Finally, we propose a research framework called FinBrain and summarize four open issues, namely, explainable financial agents and causality, perception and prediction under uncertainty, risk-sensitive and robust decision making, and multi-agent game and mechanism design. We believe that these research directions can lay the foundation for the development of AI 2.0 in the finance field.",
"title": ""
},
{
"docid": "096c564c5be0faf7b52ff53d9db18c48",
"text": "With the increasing popularity of Web 2.0 streams, people become overwhelmed by the available information. This is partly countered by tagging blog posts and tweets, so that users can filter messages according to their tags. However, this is insufficient for detecting newly emerging topics that are not reflected by a single tag but are rather expressed by unusual tag combinations. This paper presents enBlogue, an approach for automatically detecting such emergent topics. EnBlogue uses a time-sliding window to compute statistics about tags and tag-pairs. These statistics are then used to identify unusual shifts in correlations, most of the time caused by real-world events. We analyze the strength of these shifts and measure the degree of unpredictability they include, used to rank tag-pairs expressing emergent topics. Additionally, this \"indicator of surprise\" is carried over to subsequent time points, as user interests do not abruptly vanish from one moment to the other. To avoid monitoring all tag-pairs we can also select a subset of tags, e. g., the most popular or volatile of them, to be used as seed-tags for subsequent pair-wise correlation computations. The system is fully implemented and publicly available on the Web, processing live Twitter data. We present experimental studies based on real world datasets demonstrating both the prediction quality by means of a user study and the efficiency of enBlogue.",
"title": ""
},
{
"docid": "68f0f63fcfa29d3867fa7d2dea6807cc",
"text": "We propose a machine learning framework to capture the dynamics of highfrequency limit order books in financial equity markets and automate real-time prediction of metrics such as mid-price movement and price spread crossing. By characterizing each entry in a limit order book with a vector of attributes such as price and volume at different levels, the proposed framework builds a learning model for each metric with the help of multi-class support vector machines (SVMs). Experiments with real data establish that features selected by the proposed framework are effective for short term price movement forecasts.",
"title": ""
},
{
"docid": "83b50f380f500bf6e140b3178431f0c6",
"text": "Leader election protocols are a fundamental building block for replicated distributed services. They ease the design of leader-based coordination protocols that tolerate failures. In partially synchronous systems, designing a leader election algorithm, that does not permit multiple leaders while the system is unstable, is a complex task. As a result many production systems use third-party distributed coordination services, such as ZooKeeper and Chubby, to provide a reliable leader election service. However, adding a third-party service such as ZooKeeper to a distributed system incurs additional operational costs and complexity. ZooKeeper instances must be kept running on at least three machines to ensure its high availability. In this paper, we present a novel leader election protocol using NewSQL databases for partially synchronous systems, that ensures at most one leader at any given time. The leader election protocol uses the database as distributed shared memory. Our work enables distributed systems that already use NewSQL databases to save the operational overhead of managing an additional third-party service for leader election. Our main contribution is the design, implementation and validation of a practical leader election algorithm, based on NewSQL databases, that has performance comparable to a leader election implementation using a state-of-the-art distributed coordination service, ZooKeeper.",
"title": ""
},
{
"docid": "c9e2487578f52638a0a3bb5967e9137a",
"text": "In a typical content-based image retrieval (CBIR) system, query results are a set of images sorted by feature similarities with respect to the query. However, images with high feature similarities to the query may be very different from the query in terms of semantics. This is known as the semantic gap. We introduce a novel image retrieval scheme, CLUster-based rEtrieval of images by unsupervised learning (CLUE), which tackles the semantic gap problem based on a hypothesis: semantically similar images tend to be clustered in some feature space. CLUE attempts to capture semantic concepts by learning the way that images of the same semantics are similar and retrieving image clusters instead of a set of ordered images. Clustering in CLUE is dynamic. In particular, clusters formed depend on which images are retrieved in response to the query. Therefore, the clusters give the algorithm as well as the users semantic relevant clues as to where to navigate. CLUE is a general approach that can be combined with any real-valued symmetric similarity measure (metric or nonmetric). Thus it may be embedded in many current CBIR systems. Experimental results based on a database of about 60, 000 images from COREL demonstrate improved performance.",
"title": ""
},
{
"docid": "6ab1b6ee8de387530742067c521e6694",
"text": "Loss of volume in the temples is an early sign of aging that is often overlooked by both the physician and the patient. Augmentation of the temple using soft tissue fillers improves the contours of the upper face with the secondary effect of lengthening and lifting the lateral brow. After replacement of volume, treatment of the overlying skin with skin-tightening devices or laser resurfacing help to complete a comprehensive rejuvenation of the temple and upper one-third of the face.",
"title": ""
},
{
"docid": "fa7682dc85d868e57527fdb3124b309c",
"text": "The seminal 2003 paper by Cosley, Lab, Albert, Konstan, and Reidl, demonstrated the susceptibility of recommender systems to rating biases. To facilitate browsing and selection, almost all recommender systems display average ratings before accepting ratings from users which has been shown to bias ratings. This effect is called Social Inuence Bias (SIB); the tendency to conform to the perceived \\norm\" in a community. We propose a methodology to 1) learn, 2) analyze, and 3) mitigate the effect of SIB in recommender systems. In the Learning phase, we build a baseline dataset by allowing users to rate twice: before and after seeing the average rating. In the Analysis phase, we apply a new non-parametric significance test based on the Wilcoxon statistic to test whether the data is consistent with SIB. If significant, we propose a Mitigation phase using polynomial regression and the Bayesian Information Criterion (BIC) to predict unbiased ratings. We evaluate our approach on a dataset of 9390 ratings from the California Report Card (CRC), a rating-based system designed to encourage political engagement. We found statistically significant evidence of SIB. Mitigating models were able to predict changed ratings with a normalized RMSE of 12.8% and reduce bias by 76.3%. The CRC, our data, and experimental code are available at: http://californiareportcard.org/data/",
"title": ""
},
{
"docid": "9824b33621ad02c901a9e16895d2b1a6",
"text": "Objective This systematic review aims to summarize current evidence on which naturally present cannabinoids contribute to cannabis psychoactivity, considering their reported concentrations and pharmacodynamics in humans. Design Following PRISMA guidelines, papers published before March 2016 in Medline, Scopus-Elsevier, Scopus, ISI-Web of Knowledge and COCHRANE, and fulfilling established a-priori selection criteria have been included. Results In 40 original papers, three naturally present cannabinoids (∆-9-Tetrahydrocannabinol, ∆-8-Tetrahydrocannabinol and Cannabinol) and one human metabolite (11-OH-THC) had clinical relevance. Of these, the metabolite produces the greatest psychoactive effects. Cannabidiol (CBD) is not psychoactive but plays a modulating role on cannabis psychoactive effects. The proportion of 9-THC in plant material is higher (up to 40%) than in other cannabinoids (up to 9%). Pharmacodynamic reports vary due to differences in methodological aspects (doses, administration route and volunteers' previous experience with cannabis). Conclusions Findings reveal that 9-THC contributes the most to cannabis psychoactivity. Due to lower psychoactive potency and smaller proportions in plant material, other psychoactive cannabinoids have a weak influence on cannabis final effects. Current lack of standard methodology hinders homogenized research on cannabis health effects. Working on a standard cannabis unit considering 9-THC is recommended.",
"title": ""
},
{
"docid": "be96da6d7a1e8348366b497f160c674e",
"text": "The large availability of biomedical data brings opportunities and challenges to health care. Representation of medical concepts has been well studied in many applications, such as medical informatics, cohort selection, risk prediction, and health care quality measurement. In this paper, we propose an efficient multichannel convolutional neural network (CNN) model based on multi-granularity embeddings of medical concepts named MG-CNN, to examine the effect of individual patient characteristics including demographic factors and medical comorbidities on total hospital costs and length of stay (LOS) by using the Hospital Quality Monitoring System (HQMS) data. The proposed embedding method leverages prior medical hierarchical ontology and improves the quality of embedding for rare medical concepts. The embedded vectors are further visualized by the t-Distributed Stochastic Neighbor Embedding (t-SNE) technique to demonstrate the effectiveness of grouping related medical concepts. Experimental results demonstrate that our MG-CNN model outperforms traditional regression methods based on the one-hot representation of medical concepts, especially in the outcome prediction tasks for patients with low-frequency medical events. In summary, MG-CNN model is capable of mining potential knowledge from the clinical data and will be broadly applicable in medical research and inform clinical decisions.",
"title": ""
},
{
"docid": "150ad4c49d10be14bf2f1a653a245498",
"text": "Code quality metrics are widely used to identify design flaws (e.g., code smells) as well as to act as fitness functions for refactoring recommenders. Both these applications imply a strong assumption: quality metrics are able to assess code quality as perceived by developers. Indeed, code smell detectors and refactoring recommenders should be able to identify design flaws/recommend refactorings that are meaningful from the developer's point-of-view. While such an assumption might look reasonable, there is limited empirical evidence supporting it. We aim at bridging this gap by empirically investigating whether quality metrics are able to capture code quality improvement as perceived by developers. While previous studies surveyed developers to investigate whether metrics align with their perception of code quality, we mine commits in which developers clearly state in the commit message their aim of improving one of four quality attributes: cohesion, coupling, code readability, and code complexity. Then, we use state-of-the-art metrics to assess the change brought by each of those commits to the specific quality attribute it targets. We found that, more often than not the considered quality metrics were not able to capture the quality improvement as perceived by developers (e.g., the developer states \"improved the cohesion of class C\", but no quality metric captures such an improvement).",
"title": ""
},
{
"docid": "8979f2a0e6db231b1363f764366e1d56",
"text": "In the current object detection field, one of the fastest algorithms is the Single Shot Multi-Box Detector (SSD), which uses a single convolutional neural network to detect the object in an image. Although SSD is fast, there is a big gap compared with the state-of-the-art on mAP. In this paper, we propose a method to improve SSD algorithm to increase its classification accuracy without affecting its speed. We adopt the Inception block to replace the extra layers in SSD, and call this method Inception SSD (I-SSD). The proposed network can catch more information without increasing the complexity. In addition, we use the batch-normalization (BN) and the residual structure in our I-SSD network architecture. Besides, we propose an improved non-maximum suppression method to overcome its deficiency on the expression ability of the model. The proposed I-SSD algorithm achieves 78.6% mAP on the Pascal VOC2007 test, which outperforms SSD algorithm while maintaining its time performance. We also construct an Outdoor Object Detection (OOD) dataset to testify the effectiveness of the proposed I-SSD on the platform of unmanned vehicles.",
"title": ""
},
{
"docid": "ea64ba0b1c3d4ed506fb3605893fef92",
"text": "We explore frame-level audio feature learning for chord recognition using artificial neural networks. We present the argument that chroma vectors potentially hold enough information to model harmonic content of audio for chord recognition, but that standard chroma extractors compute too noisy features. This leads us to propose a learned chroma feature extractor based on artificial neural networks. It is trained to compute chroma features that encode harmonic information important for chord recognition, while being robust to irrelevant interferences. We achieve this by feeding the network an audio spectrum with context instead of a single frame as input. This way, the network can learn to selectively compensate noise and resolve harmonic ambiguities. We compare the resulting features to hand-crafted ones by using a simple linear frame-wise classifier for chord recognition on various data sets. The results show that the learned feature extractor produces superior chroma vectors for chord recognition.",
"title": ""
},
{
"docid": "f6aaa3b7d8dae76c183982cdaad058d0",
"text": "Cutting and packing problems are encountered in many industries, with different industries incorporating different constraints and objectives. The wood-, glassand paper industry are mainly concerned with the cutting of regular figures, whereas in the ship building, textile and leather industry irregular, arbitrary shaped items are to be packed. In this paper two genetic algorithms are described for a rectangular packing problem. Both GAs are hybridised with a heuristic placement algorithm, one of which is the well-known Bottom-Left routine. A second placement method has been developed which overcomes some of the disadvantages of the Bottom-Left rule. The two hybrid genetic algorithms are compared with heuristic placement algorithms. In order to show the effectiveness of the design of the two genetic algorithms, their performance is compared to random search.",
"title": ""
},
{
"docid": "d4e4759c183c61acbf09bff91cc75ee5",
"text": "A wide range of defenses have been proposed to harden neural networks against adversarial attacks. However, a pattern has emerged in which the majority of adversarial defenses are quickly broken by new attacks. Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable? This paper analyzes adversarial examples from a theoretical perspective, and identifies fundamental bounds on the susceptibility of a classifier to adversarial attacks. We show that, for certain classes of problems, adversarial examples are inescapable. Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier’s robustness against adversarial examples.",
"title": ""
},
{
"docid": "546296aecaee9963ee7495c9fbf76fd4",
"text": "In this paper, we propose text summarization method that creates text summary by definition of the relevance score of each sentence and extracting sentences from the original documents. While summarization this method takes into account weight of each sentence in the document. The essence of the method suggested is in preliminary identification of every sentence in the document with characteristic vector of words, which appear in the document, and calculation of relevance score for each sentence. The relevance score of sentence is determined through its comparison with all the other sentences in the document and with the document title by cosine measure. Prior to application of this method the scope of features is defined and then the weight of each word in the sentence is calculated with account of those features. The weights of features, influencing relevance of words, are determined using genetic algorithms.",
"title": ""
}
] | scidocsrr |
b28541811021f530432657261b8fe919 | Real-Time Machine Learning: The Missing Pieces | [
{
"docid": "a06c9d681bb8a8b89a8ee64a53e3b344",
"text": "This paper introduces CIEL, a universal execution engine for distributed data-flow programs. Like previous execution engines, CIEL masks the complexity of distributed programming. Unlike those systems, a CIEL job can make data-dependent control-flow decisions, which enables it to compute iterative and recursive algorithms. We have also developed Skywriting, a Turingcomplete scripting language that runs directly on CIEL. The execution engine provides transparent fault tolerance and distribution to Skywriting scripts and highperformance code written in other programming languages. We have deployed CIEL on a cloud computing platform, and demonstrate that it achieves scalable performance for both iterative and non-iterative algorithms.",
"title": ""
}
] | [
{
"docid": "c0549844f4e8813bd7b839a95c94a13d",
"text": "In this paper, we present a novel method to fuse observations from an inertial measurement unit (IMU) and visual sensors, such that initial conditions of the inertial integration, including gravity estimation, can be recovered quickly and in a linear manner, thus removing any need for special initialization procedures. The algorithm is implemented using a graphical simultaneous localization and mapping like approach that guarantees constant time output. This paper discusses the technical aspects of the work, including observability and the ability for the system to estimate scale in real time. Results are presented of the system, estimating the platforms position, velocity, and attitude, as well as gravity vector and sensor alignment and calibration on-line in a built environment. This paper discusses the system setup, describing the real-time integration of the IMU data with either stereo or monocular vision data. We focus on human motion for the purposes of emulating high-dynamic motion, as well as to provide a localization system for future human-robot interaction.",
"title": ""
},
{
"docid": "ada35607fa56214e5df8928008735353",
"text": "Osseous free flaps have become the preferred method for reconstructing segmental mandibular defects. Of 457 head and neck free flaps, 150 osseous mandible reconstructions were performed over a 10-year period. This experience was retrospectively reviewed to establish an approach to osseous free flap mandible reconstruction. There were 94 male and 56 female patients (mean age, 50 years; range 3 to 79 years); 43 percent had hemimandibular defects, and the rest had central, lateral, or a combination defect. Donor sites included the fibula (90 percent), radius (4 percent), scapula (4 percent), and ilium (2 percent). Rigid fixation (up to five osteotomy sites) was used in 98 percent of patients. Aesthetic and functional results were evaluated a minimum of 6 months postoperatively. The free flap success rate was 100 percent, and bony union was achieved in 97 percent of the osteotomy sites. Osseointegrated dental implants were placed in 20 patients. A return to an unrestricted diet was achieved in 45 percent of patients; 45 percent returned to a soft diet, and 5 percent were on a liquid diet. Five percent of patients required enteral feeding to maintain weight. Speech was assessed as normal (36 percent), near normal (27 percent), intelligible (28 percent), or unintelligible (9 percent). Aesthetic outcome was judged as excellent (32 percent), good (27 percent), fair (27 percent), or poor (14 percent). This study demonstrates a very high success rate, with good-to-excellent functional and aesthetic results using osseous free flaps for primary mandible reconstruction. The fibula donor site should be the first choice for most cases, particularly those with anterior or large bony defects requiring multiple osteotomies. Use of alternative donor sites (i.e., radius and scapula) is best reserved for cases with large soft-tissue and minimal bone requirements. The ilium is recommended only when other options are unavailable. Thoughtful flap selection and design should supplant the need for multiple, simultaneous free flaps and vein grafting in most cases.",
"title": ""
},
{
"docid": "04629c15852f031fcee042577034f78f",
"text": "The mobility of carriers in a silicon surface inversion layer is one of the most important parameters required to accurately model and predict MOSFET device and circuit performance. It has been found that electron mobility follows a universal curve when plotted as a function of an effective normal field regardless of substrate bias, substrate doping (≤ 1017 cm-3) and nominal process variations [1]. Although accurate modeling of p-channel MOS devices has become important due to the prevalence of CMOS technology, the existence of a universal hole mobility-field relationship has not been demonstrated. Furthermore, the effect on mobility of low-temperature and rapid high-temperature processing, which are commonly used in modern VLSI technology to control impurity diffusion, is unknown.",
"title": ""
},
{
"docid": "23052a651887a5a73831b3c8a6571ba0",
"text": "This paper presentes a novel algorithm for the voxelization of surface models of arbitrary topology. Our algorithm uses the depth and stencil buffers, available in most commercial graphics hardware, to achieve high performance. It is suitable for both polygonal meshes and parametric surfaces. Experiments highlight the advantages and limitations of our approach.",
"title": ""
},
{
"docid": "27a8ec0dc0f4ad0ae67c2a75c25c4553",
"text": "Although the concept of industrial cobots dates back to 1999, most present day hybrid human-machine assembly systems are merely weight compensators. Here, we present results on the development of a collaborative human-robot manufacturing cell for homokinetic joint assembly. The robot alternates active and passive behaviours during assembly, to lighten the burden on the operator in the first case, and to comply to his/her needs in the latter. Our approach can successfully manage direct physical contact between robot and human, and between robot and environment. Furthermore, it can be applied to standard position (and not torque) controlled robots, common in the industry. The approach is validated in a series of assembly experiments. The human workload is reduced, diminishing the risk of strain injuries. Besides, a complete risk analysis indicates that the proposed setup is compatible with the safety standards, and could be certified.",
"title": ""
},
{
"docid": "5c056ba2e29e8e33c725c2c9dd12afa8",
"text": "The large amount of text data which are continuously produced over time in a variety of large scale applications such as social networks results in massive streams of data. Typically massive text streams are created by very large scale interactions of individuals, or by structured creations of particular kinds of content by dedicated organizations. An example in the latter category would be the massive text streams created by news-wire services. Such text streams provide unprecedented challenges to data mining algorithms from an efficiency perspective. In this paper, we review text stream mining algorithms for a wide variety of problems in data mining such as clustering, classification and topic modeling. A recent challenge arises in the context of social streams, which are generated by large social networks such as Twitter. We also discuss a number of future challenges in this area of research.",
"title": ""
},
{
"docid": "b0e30f8c95c972d01e342fc30c2a501c",
"text": "PURPOSE\nThe aim of the study was to explore the impact of a permanent stoma on patients' everyday lives and to gain further insight into their need for ostomy-related education.\n\n\nSUBJECTS AND SETTING\nThe sample population comprised 15 persons with permanent ostomies. Stomas were created to manage colorectal cancer or inflammatory bowel disease. The research setting was the surgical department at a hospital in the Capitol Region of Denmark associated with the University of Copenhagen.\n\n\nMETHODS\nFocus group interviews were conducted using a phenomenological hermeneutic approach. Data were collected and analyzed using qualitative content analysis.\n\n\nRESULTS\nStoma creation led to feelings of stigma, worries about disclosure, a need for control and self-imposed limits. Furthermore, patients experienced difficulties identifying their new lives with their lives before surgery. Participants stated they need to be seen as a whole person, to have close contact with health care professionals, and receive trustworthy information about life with an ostomy. Respondents proposed group sessions conducted after hospital discharge. They further recommended that sessions be delivered by lay teachers who had a stoma themselves.\n\n\nCONCLUSIONS\nSelf-imposed isolation was often selected as a strategy for avoiding disclosing the presence of a stoma. Patient education, using health promotional methods, should take the settings into account and patients' possibility of effective knowledge transfer. Respondents recommend involvement of lay teachers, who have a stoma, and group-based learning processes are proposed, when planning and conducting patient education.",
"title": ""
},
{
"docid": "2657e5090896cc7dc01f3b66d2d97a94",
"text": "In this article, we review gas sensor application of one-dimensional (1D) metal-oxide nanostructures with major emphases on the types of device structure and issues for realizing practical sensors. One of the most important steps in fabricating 1D-nanostructure devices is manipulation and making electrical contacts of the nanostructures. Gas sensors based on individual 1D nanostructure, which were usually fabricated using electron-beam lithography, have been a platform technology for fundamental research. Recently, gas sensors with practical applicability were proposed, which were fabricated with an array of 1D nanostructures using scalable micro-fabrication tools. In the second part of the paper, some critical issues are pointed out including long-term stability, gas selectivity, and room-temperature operation of 1D-nanostructure-based metal-oxide gas sensors.",
"title": ""
},
{
"docid": "9b99371de5da25c3e2cc2d8787da7d21",
"text": "lations, is a critical ecological process (Ims and Yoccoz 1997). It can maintain genetic diversity, rescue declining populations, and re-establish extirpated populations. Sufficient movement of individuals between isolated, extinction-prone populations can allow an entire network of populations to persist via metapopulation dynamics (Hanski 1991). As areas of natural habitat are reduced in size and continuity by human activities, the degree to which the remaining fragments are functionally linked by dispersal becomes increasingly important. The strength of those linkages is determined largely by a property known as “connectivity”, which, despite its intuitive appeal, is inconsistently defined. At one extreme, metapopulation ecologists argue for a habitat patch-level definition, while at the other, landscape ecologists insist that connectivity is a landscape-scale property (Merriam 1984; Taylor et al. 1993; Tischendorf and Fahrig 2000; Moilanen and Hanski 2001; Tischendorf 2001a; Moilanen and Nieminen 2002). Differences in perspective notwithstanding, theoreticians do agree that connectivity has undeniable effects on many population processes (Wiens 1997; Moilanen and Hanski 2001). It is therefore desirable to quantify connectivity and use these measurements as a basis for decision making. Currently, many reserve design algorithms factor in some measure of connectivity when weighing alternative plans (Siitonen et al. 2002, 2003; Singleton et al. 2002; Cabeza 2003). Consideration of connectivity during the reserve design process could highlight situations where it really matters. For example, alternative reserve designs that are similar in other factors such as area, habitat quality, and cost may differ greatly in connectivity (Siitonen et al. 2002). This matters because the low-connectivity scenarios may not be able to support viable populations of certain species over long periods of time. Analyses of this sort could also redirect some project resources towards improving the connectivity of a reserve network by building movement corridors or acquiring small, otherwise undesirable habitat patches that act as links between larger patches (Keitt et al. 1997). Reserve designs could therefore include the demographic and genetic benefits of increased connectivity without substantially increasing the cost of the project (eg Siitonen et al. 2002). If connectivity is to serve as a guide, at least in part, for conservation decision-making, it clearly matters how it is measured. Unfortunately, the ecological literature is awash with different connectivity metrics. How are land managers and decision makers to efficiently choose between these alternatives, when ecologists cannot even agree on a basic definition of connectivity, let alone how it is best measured? Aside from the theoretical perspectives to which they are tied, these metrics differ in two important regards: the type of data they require and the level of detail they provide. Here, we attempt to cut through some of the confusion surrounding connectivity by developing a classification scheme based on these key differences between metrics. 529",
"title": ""
},
{
"docid": "f2d2979ca63d47ba33fffb89c16b9499",
"text": "Shor and Grover demonstrated that a quantum computer can outperform any classical computer in factoring numbers and in searching a database by exploiting the parallelism of quantum mechanics. Whereas Shor's algorithm requires both superposition and entanglement of a many-particle system, the superposition of single-particle quantum states is sufficient for Grover's algorithm. Recently, the latter has been successfully implemented using Rydberg atoms. Here we propose an implementation of Grover's algorithm that uses molecular magnets, which are solid-state systems with a large spin; their spin eigenstates make them natural candidates for single-particle systems. We show theoretically that molecular magnets can be used to build dense and efficient memory devices based on the Grover algorithm. In particular, one single crystal can serve as a storage unit of a dynamic random access memory device. Fast electron spin resonance pulses can be used to decode and read out stored numbers of up to 105, with access times as short as 10-10 seconds. We show that our proposal should be feasible using the molecular magnets Fe8 and Mn12.",
"title": ""
},
{
"docid": "c1a44605e8e9b76a76bf5a2dd3539310",
"text": "This paper presents a stereo matching approach for a novel multi-perspective panoramic stereo vision system, making use of asynchronous and non-simultaneous stereo imaging towards real-time 3D 360° vision. The method is designed for events representing the scenes visual contrast as a sparse visual code allowing the stereo reconstruction of high resolution panoramic views. We propose a novel cost measure for the stereo matching, which makes use of a similarity measure based on event distributions. Thus, the robustness to variations in event occurrences was increased. An evaluation of the proposed stereo method is presented using distance estimation of panoramic stereo views and ground truth data. Furthermore, our approach is compared to standard stereo methods applied on event-data. Results show that we obtain 3D reconstructions of 1024 × 3600 round views and outperform depth reconstruction accuracy of state-of-the-art methods on event data.",
"title": ""
},
{
"docid": "5e53a20b6904a9b8765b0384f5d1d692",
"text": "This paper provides a description of the crowdfunding sector, considering investment-based crowdfunding platforms as well as platforms in which funders do not obtain monetary payments. It lays out key features of this quickly developing sector and explores the economic forces at play that can explain the design of these platforms. In particular, it elaborates on cross-group and within-group external e¤ects and asymmetric information on crowdfunding platforms. Keywords: Crowdfunding, Platform markets, Network e¤ects, Asymmetric information, P2P lending JEL-Classi
cation: L13, D62, G24 Université catholique de Louvain, CORE and Louvain School of Management, and CESifo yRITM, University of Paris Sud and Digital Society Institute zUniversity of Mannheim, Mannheim Centre for Competition and Innovation (MaCCI), and CERRE. Email: [email protected]",
"title": ""
},
{
"docid": "ac24229e51822e44cb09baaf44e9623e",
"text": "Detecting representative frames in videos based on human actions is quite challenging because of the combined factors of human pose in action and the background. This paper addresses this problem and formulates the key frame detection as one of finding the video frames that optimally maximally contribute to differentiating the underlying action category from all other categories. To this end, we introduce a deep two-stream ConvNet for key frame detection in videos that learns to directly predict the location of key frames. Our key idea is to automatically generate labeled data for the CNN learning using a supervised linear discriminant method. While the training data is generated taking many different human action videos into account, the trained CNN can predict the importance of frames from a single video. We specify a new ConvNet framework, consisting of a summarizer and discriminator. The summarizer is a two-stream ConvNet aimed at, first, capturing the appearance and motion features of video frames, and then encoding the obtained appearance and motion features for video representation. The discriminator is a fitting function aimed at distinguishing between the key frames and others in the video. We conduct experiments on a challenging human action dataset UCF101 and show that our method can detect key frames with high accuracy.",
"title": ""
},
{
"docid": "883e244ff530bf243daa367bad2c5c99",
"text": "The demand for computing resources in the university is on the increase on daily basis and the traditional method of acquiring computing resources may no longer meet up with the present demand. This is as a result of high level of researches being carried out by the universities. The 21st century universities are now seen as the centre and base of education, research and development for the society. The university community now has to deal with a large number of people including staff, students and researchers working together on voluminous large amount of data. This actually requires very high computing resources that can only be gotten easily through cloud computing. In this paper, we have taken a close look at exploring the benefits of cloud computing and study the adoption and usage of cloud services in the University Enterprise. We establish a theoretical background to cloud computing and its associated services including rigorous analysis of the latest research on Cloud Computing as an alternative to IT provision, management and security and discuss the benefits of cloud computing in the university enterprise. We also assess the trend of adoption and usage of cloud services in the university enterprise.",
"title": ""
},
{
"docid": "11229bf95164064f954c25681c684a16",
"text": "This article proposes integrating the insights generated by framing, priming, and agenda-setting research through a systematic effort to conceptualize and understand their larger implications for political power and democracy. The organizing concept is bias, that curiously undertheorized staple of public discourse about the media. After showing how agenda setting, framing and priming fit together as tools of power, the article connects them to explicit definitions of news slant and the related but distinct phenomenon of bias. The article suggests improved measures of slant and bias. Properly defined and measured, slant and bias provide insight into how the media influence the distribution of power: who gets what, when, and how. Content analysis should be informed by explicit theory linking patterns of framing in the media text to predictable priming and agenda-setting effects on audiences. When unmoored by such underlying theory, measures and conclusions of media bias are suspect.",
"title": ""
},
{
"docid": "b1ee02bfabb08a8a8e32be14553413cb",
"text": "This report describes and analyzes the MD6 hash function and is part of our submission package for MD6 as an entry in the NIST SHA-3 hash function competition. Significant features of MD6 include: • Accepts input messages of any length up to 2 − 1 bits, and produces message digests of any desired size from 1 to 512 bits, inclusive, including the SHA-3 required sizes of 224, 256, 384, and 512 bits. • Security—MD6 is by design very conservative. We aim for provable security whenever possible; we provide reduction proofs for the security of the MD6 mode of operation, and prove that standard differential attacks against the compression function are less efficient than birthday attacks for finding collisions. We also show that when used as a MAC within NIST recommendedations, the keyed version of MD6 is not vulnerable to linear cryptanalysis. The compression function and the mode of operation are each shown to be indifferentiable from a random oracle under reasonable assumptions. • MD6 has good efficiency: 22.4–44.1M bytes/second on a 2.4GHz Core 2 Duo laptop with 32-bit code compiled with Microsoft Visual Studio 2005 for digest sizes in the range 160–512 bits. When compiled for 64-bit operation, it runs at 61.8–120.8M bytes/second, compiled with MS VS, running on a 3.0GHz E6850 Core Duo processor. • MD6 works extremely well for multicore and parallel processors; we have demonstrated hash rates of over 1GB/second on one 16-core system, and over 427MB/sec on an 8-core system, both for 256-bit digests. We have also demonstrated MD6 hashing rates of 375 MB/second on a typical desktop GPU (graphics processing unit) card. We also show that MD6 runs very well on special-purpose hardware. • MD6 uses a single compression function, no matter what the desired digest size, to map input data blocks of 4096 bits to output blocks of 1024 bits— a fourfold reduction. (The number of rounds does, however, increase for larger digest sizes.) The compression function has auxiliary inputs: a “key” (K), a “number of rounds” (r), a “control word” (V ), and a “unique ID” word (U). • The standard mode of operation is tree-based: the data enters at the leaves of a 4-ary tree, and the hash value is computed at the root. See Figure 2.1. This standard mode of operation is highly parallelizable. 1http://www.csrc.nist.gov/pki/HashWorkshop/index.html",
"title": ""
},
{
"docid": "0a17722ba7fbeda51784cdd699f54b3f",
"text": "One of the greatest challenges food research is facing in this century lies in maintaining sustainable food production and at the same time delivering high quality food products with an added functionality to prevent life-style related diseases such as, cancer, obesity, diabetes, heart disease, stroke. Functional foods that contain bioactive components may provide desirable health benefits beyond basic nutrition and play important roles in the prevention of life-style related diseases. Polyphenols and carotenoids are plant secondary metabolites which are well recognized as natural antioxidants linked to the reduction of the development and progression of life-style related diseases. This chapter focuses on healthpromoting food ingredients (polyphenols and carotenoids), food structure and functionality, and bioavailability of these bioactive ingredients, with examples on their commercial applications, namely on functional foods. Thereafter, in order to support successful development of health-promoting food ingredients, this chapter contributes to an understanding of the relationship between food structures, ingredient functionality, in relation to the breakdown of food structures in the gastrointestinal tract and its impact on the bioavailability of bioactive ingredients. The overview on food processing techniques and the processing of functional foods given here will elaborate novel delivery systems for functional food ingredients and their applications in food. Finally, this chapter concludes with microencapsulation techniques and examples of encapsulation of polyphenols and carotenoids; the physical structure of microencapsulated food ingredients and their impacts on food sensorial properties; yielding an outline on the controlled release of encapsulated bioactive compounds in food products.",
"title": ""
},
{
"docid": "85e51ac7980deac92e140d0965a35708",
"text": "Ensuring that autonomous systems work ethically is both complex and difficult. However, the idea of having an additional ‘governor’ that assesses options the system has, and prunes them to select the most ethical choices is well understood. Recent work has produced such a governor consisting of a ‘consequence engine’ that assesses the likely future outcomes of actions then applies a Safety/Ethical logic to select actions. Although this is appealing, it is impossible to be certain that the most ethical options are actually taken. In this paper we extend and apply a well-known agent verification approach to our consequence engine, allowing us to verify the correctness of its ethical decision-making.",
"title": ""
},
{
"docid": "11ed7e0742ddb579efe6e1da258b0d3c",
"text": "Supervisory Control and Data Acquisition(SCADA) systems are deeply ingrained in the fabric of critical infrastructure sectors. These computerized real-time process control systems, over geographically dispersed continuous distribution operations, are increasingly subject to serious damage and disruption by cyber means due to their standardization and connectivity to other networks. However, SCADA systems generally have little protection from the escalating cyber threats. In order to understand the potential danger and to protect SCADA systems, in this paper, we highlight their difference from standard IT systems and present a set of security property goals. Furthermore, we focus on systematically identifying and classifying likely cyber attacks including cyber-induced cyber-physical attack son SCADA systems. Determined by the impact on control performance of SCADA systems, the attack categorization criteria highlights commonalities and important features of such attacks that define unique challenges posed to securing SCADA systems versus traditional Information Technology(IT) systems.",
"title": ""
}
] | scidocsrr |
730d66eaef0577d2cd08caf3142db5a3 | Cover Tree Bayesian Reinforcement Learning | [
{
"docid": "4e4560d1434ee05c30168e49ffc3d94a",
"text": "We present a tree data structure for fast nearest neighbor operations in general <i>n</i>-point metric spaces (where the data set consists of <i>n</i> points). The data structure requires <i>O</i>(<i>n</i>) space <i>regardless</i> of the metric's structure yet maintains all performance properties of a navigating net (Krauthgamer & Lee, 2004b). If the point set has a bounded expansion constant <i>c</i>, which is a measure of the intrinsic dimensionality, as defined in (Karger & Ruhl, 2002), the cover tree data structure can be constructed in <i>O</i> (<i>c</i><sup>6</sup><i>n</i> log <i>n</i>) time. Furthermore, nearest neighbor queries require time only logarithmic in <i>n</i>, in particular <i>O</i> (<i>c</i><sup>12</sup> log <i>n</i>) time. Our experimental results show speedups over the brute force search varying between one and several orders of magnitude on natural machine learning datasets.",
"title": ""
}
] | [
{
"docid": "06e8d9c53fe89fbf683920e90bf09731",
"text": "Convolutional neural networks (CNNs) with their ability to learn useful spatial features have revolutionized computer vision. The network topology of CNNs exploits the spatial relationship among the pixels in an image and this is one of the reasons for their success. In other domains deep learning has been less successful because it is not clear how the structure of non-spatial data can constrain network topology. Here, we show how multivariate time series can be interpreted as space-time pictures, thus expanding the applicability of the tricks-of-the-trade for CNNs to this important domain. We demonstrate that our model beats more traditional state-of-the-art models at predicting price development on the European Power Exchange (EPEX). Furthermore, we find that the features discovered by CNNs on raw data beat the features that were hand-designed by an expert.",
"title": ""
},
{
"docid": "5e61c6f1f8b9d63ffd964119c4ae122f",
"text": "In this paper, a novel converter, named as negative-output KY buck-boost converter, is presented herein, which has no bilinear characteristics. First of all, the basic operating principle of the proposed converter is illustrated in detail, and secondly some simulated and experimental results are provided to verify its effectiveness.",
"title": ""
},
{
"docid": "9364e07801fc01e50d0598b61ab642aa",
"text": "Online learning represents a family of machine learning methods, where a learner attempts to tackle some predictive (or any type of decision-making) task by learning from a sequence of data instances one by one at each time. The goal of online learning is to maximize the accuracy/correctness for the sequence of predictions/decisions made by the online learner given the knowledge of correct answers to previous prediction/learning tasks and possibly additional information. This is in contrast to traditional batch or offline machine learning methods that are often designed to learn a model from the entire training data set at once. Online learning has become a promising technique for learning from continuous streams of data in many real-world applications. This survey aims to provide a comprehensive survey of the online machine learning literature through a systematic review of basic ideas and key principles and a proper categorization of different algorithms and techniques. Generally speaking, according to the types of learning tasks and the forms of feedback information, the existing online learning works can be classified into three major categories: (i) online supervised learning where full feedback information is always available, (ii) online learning with limited feedback, and (iii) online unsupervised learning where no feedback is available. Due to space limitation, the survey will be mainly focused on the first category, but also briefly cover some basics of the other two categories. Finally, we also discuss some open issues and attempt to shed light on potential future research directions in this field.",
"title": ""
},
{
"docid": "ffb03136c1f8d690be696f65f832ab11",
"text": "This paper aims to improve the feature learning in Convolutional Networks (Convnet) by capturing the structure of objects. A new sparsity function is imposed on the extracted featuremap to capture the structure and shape of the learned object, extracting interpretable features to improve the prediction performance. The proposed algorithm is based on organizing the activation within and across featuremap by constraining the node activities through `2 and `1 normalization in a structured form.",
"title": ""
},
{
"docid": "a1d9742feb9f2a5dcf2322b00daf4151",
"text": "We tackle the problem of predicting the future popularity level of micro-reviews, focusing on Foursquare tips, whose high degree of informality and briefness offer extra difficulties to the design of effective popularity prediction methods. Such predictions can greatly benefit the future design of content filtering and recommendation methods. Towards our goal, we first propose a rich set of features related to the user who posted the tip, the venue where it was posted, and the tip’s content to capture factors that may impact popularity of a tip. We evaluate different regression and classification based models using this rich set of proposed features as predictors in various scenarios. As fas as we know, this is the first work to investigate the predictability of micro-review popularity (or helpfulness) exploiting spatial, temporal, topical and, social aspects that are rarely exploited conjointly in this domain. © 2015 Published by Elsevier Inc.",
"title": ""
},
{
"docid": "9b34b171858ad3ebda73848b7bb5372d",
"text": "INTRODUCTION\nVulvar and vaginal atrophy (VVA) affects up to two thirds of postmenopausal women, but most symptomatic women do not receive prescription therapy.\n\n\nAIM\nTo evaluate postmenopausal women's perceptions of VVA and treatment options for symptoms in the Women's EMPOWER survey.\n\n\nMETHODS\nThe Rose Research firm conducted an internet survey of female consumers provided by Lightspeed Global Market Insite. Women at least 45 years of age who reported symptoms of VVA and residing in the United States were recruited.\n\n\nMAIN OUTCOME MEASURES\nSurvey results were compiled and analyzed by all women and by treatment subgroups.\n\n\nRESULTS\nRespondents (N = 1,858) had a median age of 58 years (range = 45-90). Only 7% currently used prescribed VVA therapies (local estrogen therapies or oral selective estrogen receptor modulators), whereas 18% were former users of prescribed VVA therapies, 25% used over-the-counter treatments, and 50% had never used any treatment. Many women (81%) were not aware of VVA or that it is a medical condition. Most never users (72%) had never discussed their symptoms with a health care professional (HCP). The main reason for women not to discuss their symptoms with an HCP was that they believed that VVA was just a natural part of aging and something to live with. When women spoke to an HCP about their symptoms, most (85%) initiated the discussion. Preferred sources of information were written material from the HCP's office (46%) or questionnaires to fill out before seeing the HCP (41%).The most negative attributes of hormonal products were perceived risk of systemic absorption, messiness of local creams, and the need to reuse an applicator. Overall, HCPs only recommended vaginal estrogen therapy to 23% and oral hormone therapies to 18% of women. When using vaginal estrogen therapy, less than half of women adhered to and complied with posology; only 33% to 51% of women were very to extremely satisfied with their efficacy.\n\n\nCONCLUSION\nThe Women's EMPOWER survey showed that VVA continues to be an under-recognized and under-treated condition, despite recent educational initiatives. A disconnect in education, communication, and information between HCPs and their menopausal patients remains prevalent. Kingsberg S, Krychman M, Graham S, et al. The Women's EMPOWER Survey: Identifying Women's Perceptions on Vulvar and Vaginal Atrophy and Its Treatment. J Sex Med 2017;14:413-424.",
"title": ""
},
{
"docid": "d150439e46201c3d3979bc243fb38c26",
"text": "Genetic Algorithms and Evolution Strategies represent two of the three major Evolutionary Algorithms. This paper examines the history, theory and mathematical background, applications, and the current direction of both Genetic Algorithms and Evolution Strategies.",
"title": ""
},
{
"docid": "1ee063329b62404e22d73a4f5996332d",
"text": "High-rate data communication over a multipath wireless channel often requires that the channel response be known at the receiver. Training-based methods, which probe the channel in time, frequency, and space with known signals and reconstruct the channel response from the output signals, are most commonly used to accomplish this task. Traditional training-based channel estimation methods, typically comprising linear reconstruction techniques, are known to be optimal for rich multipath channels. However, physical arguments and growing experimental evidence suggest that many wireless channels encountered in practice tend to exhibit a sparse multipath structure that gets pronounced as the signal space dimension gets large (e.g., due to large bandwidth or large number of antennas). In this paper, we formalize the notion of multipath sparsity and present a new approach to estimating sparse (or effectively sparse) multipath channels that is based on some of the recent advances in the theory of compressed sensing. In particular, it is shown in the paper that the proposed approach, which is termed as compressed channel sensing (CCS), can potentially achieve a target reconstruction error using far less energy and, in many instances, latency and bandwidth than that dictated by the traditional least-squares-based training methods.",
"title": ""
},
{
"docid": "bd4234dc626b4c56d0170948ac5d5de3",
"text": "ISSN: 1049-4820 (Print) 1744-5191 (Online) Journal homepage: http://www.tandfonline.com/loi/nile20 Gamification and student motivation Patrick Buckley & Elaine Doyle To cite this article: Patrick Buckley & Elaine Doyle (2016) Gamification and student motivation, Interactive Learning Environments, 24:6, 1162-1175, DOI: 10.1080/10494820.2014.964263 To link to this article: https://doi.org/10.1080/10494820.2014.964263",
"title": ""
},
{
"docid": "bda1e2a1f27673dceed36adddfdc3e36",
"text": "IEEE 802.11 WLANs are a very important technology to provide high speed wireless Internet access. Especially at airports, university campuses or in city centers, WLAN coverage is becoming ubiquitous leading to a deployment of hundreds or thousands of Access Points (AP). Managing and configuring such large WLAN deployments is a challenge. Current WLAN management protocols such as CAPWAP are hard to extend with new functionality. In this paper, we present CloudMAC, a novel architecture for enterprise or carrier grade WLAN systems. By partially offloading the MAC layer processing to virtual machines provided by cloud services and by integrating our architecture with OpenFlow, a software defined networking approach, we achieve a new level of flexibility and reconfigurability. In Cloud-MAC APs just forward MAC frames between virtual APs and IEEE 802.11 stations. The processing of MAC layer frames as well as the creation of management frames is handled at the virtual APs while the binding between the virtual APs and the physical APs is managed using OpenFlow. The testbed evaluation shows that CloudMAC achieves similar performance as normal WLANs, but allows novel services to be implemented easily in high level programming languages. The paper presents a case study which shows that dynamically switching off APs to save energy can be performed seamlessly with CloudMAC, while a traditional WLAN architecture causes large interruptions for users.",
"title": ""
},
{
"docid": "b5270bbcbe8ed4abf8ae5dabe02bb933",
"text": "We address the use of three-dimensional facial shape information for human face identification. We propose a new method to represent faces as 3D registered point clouds. Fine registration of facial surfaces is done by first automatically finding important facial landmarks and then, establishing a dense correspondence between points on the facial surface with the help of a 3D face template-aided thin plate spline algorithm. After the registration of facial surfaces, similarity between two faces is defined as a discrete approximation of the volume difference between facial surfaces. Experiments done on the 3D RMA dataset show that the proposed algorithm performs as good as the point signature method, and it is statistically superior to the point distribution model-based method and the 2D depth imagery technique. In terms of computational complexity, the proposed algorithm is faster than the point signature method.",
"title": ""
},
{
"docid": "64160c1842b00377b07da7797f6002d0",
"text": "The macaque monkey ventral intraparietal area (VIP) contains neurons with aligned visual-tactile receptive fields anchored to the face and upper body. Our previous fMRI studies using standard head coils found a human parietal face area (VIP+ complex; putative macaque VIP homologue) containing superimposed topological maps of the face and near-face visual space. Here, we construct high signal-to-noise surface coils and used phase-encoded air puffs and looming stimuli to map topological organization of the parietal face area at higher resolution. This area is consistently identified as a region extending between the superior postcentral sulcus and the upper bank of the anterior intraparietal sulcus (IPS), avoiding the fundus of IPS. Using smaller voxel sizes, our surface coils picked up strong fMRI signals in response to tactile and visual stimuli. By analyzing tactile and visual maps in our current and previous studies, we constructed a set of topological models illustrating commonalities and differences in map organization across subjects. The most consistent topological feature of the VIP+ complex is a central-anterior upper face (and upper visual field) representation adjoined by lower face (and lower visual field) representations ventrally (laterally) and/or dorsally (medially), potentially forming two subdivisions VIPv (ventral) and VIPd (dorsal). The lower visual field representations typically extend laterally into the anterior IPS to adjoin human area AIP, and medially to overlap with the parietal body areas at the superior parietal ridge. Significant individual variations are then illustrated to provide an accurate and comprehensive view of the topological organization of the parietal face area.",
"title": ""
},
{
"docid": "e910310c5cc8357c570c6c4110c4e94f",
"text": "Epistemic planning can be used for decision making in multi-agent situations with distributed knowledge and capabilities. Dynamic Epistemic Logic (DEL) has been shown to provide a very natural and expressive framework for epistemic planning. In this paper, we aim to give an accessible introduction to DEL-based epistemic planning. The paper starts with the most classical framework for planning, STRIPS, and then moves towards epistemic planning in a number of smaller steps, where each step is motivated by the need to be able to model more complex planning scenarios.",
"title": ""
},
{
"docid": "8c067af7b61fae244340e784149a9c9b",
"text": "Based on EuroNCAP regulations the number of autonomous emergency braking systems for pedestrians (AEB-P) will increase over the next years. According to accident research a considerable amount of severe pedestrian accidents happen at artificial lighting, twilight or total darkness conditions. Because radar sensors are very robust in these situations, they will play an important role for future AEB-P systems. To assess and evaluate systems a pedestrian dummy with reflection characteristics as close as possible to real humans is indispensable. As an extension to existing measurements in literature this paper addresses open issues like the influence of different positions of the limbs or different clothing for both relevant automotive frequency bands. Additionally suggestions and requirements for specification of pedestrian dummies based on results of RCS measurements of humans and first experimental developed dummies are given.",
"title": ""
},
{
"docid": "ff939b33128e2b8d2cd0074a3b021842",
"text": "Breast cancer is the most common form of cancer among women worldwide. Ultrasound imaging is one of the most frequently used diagnostic tools to detect and classify abnormalities of the breast. Recently, computer-aided diagnosis (CAD) systems using ultrasound images have been developed to help radiologists to increase diagnosis accuracy. However, accurate ultrasound image segmentation remains a challenging problem due to various ultrasound artifacts. In this paper, we investigate approaches developed for breast ultrasound (BUS) image segmentation. In this paper, we reviewed the literature on the segmentation of BUS images according to the techniques adopted, especially over the past 10 years. By dividing into seven classes (i.e., thresholding-based, clustering-based, watershed-based, graph-based, active contour model, Markov random field and neural network), we have introduced corresponding techniques and representative papers accordingly. We have summarized and compared many techniques on BUS image segmentation and found that all these techniques have their own pros and cons. However, BUS image segmentation is still an open and challenging problem due to various ultrasound artifacts introduced in the process of imaging, including high speckle noise, low contrast, blurry boundaries, low signal-to-noise ratio and intensity inhomogeneity To the best of our knowledge, this is the first comprehensive review of the approaches developed for segmentation of BUS images. With most techniques involved, this paper will be useful and helpful for researchers working on segmentation of ultrasound images, and for BUS CAD system developers.",
"title": ""
},
{
"docid": "0cfa40d89a1d169d334067172167d750",
"text": "Recent advances in RST discourse parsing have focused on two modeling paradigms: (a) high order parsers which jointly predict the tree structure of the discourse and the relations it encodes; or (b) lineartime parsers which are efficient but mostly based on local features. In this work, we propose a linear-time parser with a novel way of representing discourse constituents based on neural networks which takes into account global contextual information and is able to capture long-distance dependencies. Experimental results show that our parser obtains state-of-the art performance on benchmark datasets, while being efficient (with time complexity linear in the number of sentences in the document) and requiring minimal feature engineering.",
"title": ""
},
{
"docid": "929f294583267ca8cb8616e803687f1e",
"text": "Recent systems for natural language understanding are strong at overcoming linguistic variability for lookup style reasoning. Yet, their accuracy drops dramatically as the number of reasoning steps increases. We present the first formal framework to study such empirical observations, addressing the ambiguity, redundancy, incompleteness, and inaccuracy that the use of language introduces when representing a hidden conceptual space. Our formal model uses two interrelated spaces: a conceptual meaning space that is unambiguous and complete but hidden, and a linguistic symbol space that captures a noisy grounding of the meaning space in the symbols or words of a language. We apply this framework to study the connectivity problem in undirected graphs---a core reasoning problem that forms the basis for more complex multi-hop reasoning. We show that it is indeed possible to construct a high-quality algorithm for detecting connectivity in the (latent) meaning graph, based on an observed noisy symbol graph, as long as the noise is below our quantified noise level and only a few hops are needed. On the other hand, we also prove an impossibility result: if a query requires a large number (specifically, logarithmic in the size of the meaning graph) of hops, no reasoning system operating over the symbol graph is likely to recover any useful property of the meaning graph. This highlights a fundamental barrier for a class of reasoning problems and systems, and suggests the need to limit the distance between the two spaces, rather than investing in multi-hop reasoning with\"many\"hops.",
"title": ""
},
{
"docid": "1e7b1c821631918c37cf3fc583e59fe2",
"text": "One of the most important issue that must be addressed in designing communication protocols for wireless sensor networks (WSN) is how to save sensor node energy while meeting the needs of applications. Recent researches have led to new protocols specifically designed for sensor networks where energy awareness is an essential consideration. Internet of Things (IoT) is an innovative ICT paradigm where a number of intelligent devices connected to Internet are involved in sharing information and making collaborative decision. Integration of sensing and actuation systems, connected to the Internet, means integration of all forms of energy consuming devices such as power outlets, bulbs, air conditioner, etc. Sometimes the system can communicate with the utility supply company and this led to achieve a balance between power generation and energy usage or in general is likely to optimize energy consumption as a whole. In this paper some emerging trends and challenges are identified to enable energy-efficient communications in Internet of Things architectures and between smart devices. The way devices communicate is analyzed in order to reduce energy consumption and prolong system lifetime. Devices equipped with WiFi and RF interfaces are analyzed under different scenarios by setting different communication parameters, such as data size, in order to evaluate the best device configuration and the longest lifetime of devices.",
"title": ""
},
{
"docid": "07ce7ea6645bd4cb644e04771a14194f",
"text": "As organizations increase their dependence on database systems for daily business, they become more vulnerable to security breaches even as they gain productivity and efficiency advantages. A truly comprehensive approach for data protection must include mechanisms for enforcing access control policies based on data contents, subject qualifications and characteristics. The database security community has developed a number of different techniques and approaches to assure data confidentiality, integrity, and availability. In this paper, we survey the most relevant concepts underlying the notion of access control policies for database security. We review the key access control models, namely, the discretionary and mandatory access control models and the role-based access control (RBAC)",
"title": ""
}
] | scidocsrr |
99854865f3b0c56939d67e168eb9d2ec | Name usage pattern in the synonym ambiguity problem in bibliographic data | [
{
"docid": "ce6f27561060d7119a82f9e69a089785",
"text": "Name disambiguation can occur when one is seeking a list of publications of an author who has used different name variations and when there are multiple other authors with the same name. We present an efficient integrative machine learning framework for solving the name disambiguation problem: a blocking method retrieves candidate classes of authors with similar names and a clustering method, DBSCAN, clusters papers by author. The distance metric between papers used in DBSCAN is calculated by an online active selection support vector machine algorithm (LASVM), yielding a simpler model, lower test errors and faster prediction time than a standard SVM. We prove that by recasting transitivity as density reachability in DBSCAN, transitivity is guaranteed for core points. For evaluation, we manually annotated 3,355 papers yielding 490 authors and achieved 90.6% pairwise-F1 metric. For scalability, authors in the entire CiteSeer dataset, over 700,000 papers, were readily disambiguated.",
"title": ""
},
{
"docid": "02d8ad18b07d08084764d124dc74a94c",
"text": "The large number of potential applications from bridging web data with knowledge bases have led to an increase in the entity linking research. Entity linking is the task to link entity mentions in text with their corresponding entities in a knowledge base. Potential applications include information extraction, information retrieval, and knowledge base population. However, this task is challenging due to name variations and entity ambiguity. In this survey, we present a thorough overview and analysis of the main approaches to entity linking, and discuss various applications, the evaluation of entity linking systems, and future directions.",
"title": ""
},
{
"docid": "7f57322b6e998d629d1a67cd5fb28da9",
"text": "Background: We recently described “Author-ity,” a model for estimating the probability that two articles in MEDLINE, sharing the same author name, were written by the same individual. Features include shared title words, journal name, coauthors, medical subject headings, language, affiliations, and author name features (middle initial, suffix, and prevalence in MEDLINE). Here we test the hypothesis that the Author-ity model will suffice to disambiguate author names for the vast majority of articles in MEDLINE. Methods: Enhancements include: (a) incorporating first names and their variants, email addresses, and correlations between specific last names and affiliation words; (b) new methods of generating large unbiased training sets; (c) new methods for estimating the prior probability; (d) a weighted least squares algorithm for correcting transitivity violations; and (e) a maximum likelihood based agglomerative algorithm for computing clusters of articles that represent inferred author-individuals. Results: Pairwise comparisons were computed for all author names on all 15.3 million articles in MEDLINE (2006 baseline), that share last name and first initial, to create Author-ity 2006, a database that has each name on each article assigned to one of 6.7 million inferred author-individual clusters. Recall is estimated at ∼98.8%. Lumping (putting two different individuals into the same cluster) affects ∼0.5% of clusters, whereas splitting (assigning articles written by the same individual to >1 cluster) affects ∼2% of articles. Impact: The Author-ity model can be applied generally to other bibliographic databases. Author name disambiguation allows information retrieval and data integration to become person-centered, not just document-centered, setting the stage for new data mining and social network tools that will facilitate the analysis of scholarly publishing and collaboration behavior. Availability: The Author-ity 2006 database is available for nonprofit academic research, and can be freely queried via http://arrowsmith.psych.uic.edu.",
"title": ""
},
{
"docid": "9c3218ce94172fd534e2a70224ee564f",
"text": "Author ambiguity mainly arises when several different authors express their names in the same way, generally known as the namesake problem, and also when the name of an author is expressed in many different ways, referred to as the heteronymous name problem. These author ambiguity problems have long been an obstacle to efficient information retrieval in digital libraries, causing incorrect identification of authors and impeding correct classification of their publications. It is a nontrivial task to distinguish those authors, especially when there is very limited information about them. In this paper, we propose a graph based approach to author name disambiguation, where a graph model is constructed using the co-author relations, and author ambiguity is resolved by graph operations such as vertex (or node) splitting and merging based on the co-authorship. In our framework, called a Graph Framework for Author Disambiguation (GFAD), the namesake problem is solved by splitting an author vertex involved in multiple cycles of co-authorship, and the heteronymous name problem is handled by merging multiple author vertices having similar names if those vertices are connected to a common vertex. Experiments were carried out with the real DBLP and Arnetminer collections and the performance of GFAD is compared with three representative unsupervised author name disambiguation systems. We confirm that GFAD shows better overall performance from the perspective of representative evaluation metrics. An additional contribution is that we released the refined DBLP collection to the public to facilitate organizing a performance benchmark for future systems on author disambiguation.",
"title": ""
},
{
"docid": "2d05142e12f63a354ec0c48436cd3697",
"text": "Author Name Disambiguation Neil R. Smalheiser and Vetle I. Torvik",
"title": ""
}
] | [
{
"docid": "90907753fd2c69c97088d333079fbb56",
"text": "This paper concerns the problem of pose estimation for an inertial-visual sensor. It is well known that IMU bias, and calibration errors between camera and IMU frames can impair the achievement of high-quality estimates through the fusion of visual and inertial data. The main contribution of this work is the design of new observers to estimate pose, IMU bias and camera-to-IMU rotation. The observers design relies on an extension of the so-called passive complementary filter on SO(3). Stability of the observers is established using Lyapunov functions under adequate observability conditions. Experimental results are presented to assess this approach.",
"title": ""
},
{
"docid": "f2af256af6a405a3b223abc5d9a276ac",
"text": "Traditional execution environments deploy Address Space Layout Randomization (ASLR) to defend against memory corruption attacks. However, Intel Software Guard Extension (SGX), a new trusted execution environment designed to serve security-critical applications on the cloud, lacks such an effective, well-studied feature. In fact, we find that applying ASLR to SGX programs raises non-trivial issues beyond simple engineering for a number of reasons: 1) SGX is designed to defeat a stronger adversary than the traditional model, which requires the address space layout to be hidden from the kernel; 2) the limited memory uses in SGX programs present a new challenge in providing a sufficient degree of entropy; 3) remote attestation conflicts with the dynamic relocation required for ASLR; and 4) the SGX specification relies on known and fixed addresses for key data structures that cannot be randomized. This paper presents SGX-Shield, a new ASLR scheme designed for SGX environments. SGX-Shield is built on a secure in-enclave loader to secretly bootstrap the memory space layout with a finer-grained randomization. To be compatible with SGX hardware (e.g., remote attestation, fixed addresses), SGX-Shield is designed with a software-based data execution protection mechanism through an LLVM-based compiler. We implement SGX-Shield and thoroughly evaluate it on real SGX hardware. It shows a high degree of randomness in memory layouts and stops memory corruption attacks with a high probability. SGX-Shield shows 7.61% performance overhead in running common microbenchmarks and 2.25% overhead in running a more realistic workload of an HTTPS server.",
"title": ""
},
{
"docid": "d967d6525cf88d498ecc872a9eef1c7c",
"text": "Historical Chinese character recognition has been suffering from the problem of lacking sufficient labeled training samples. A transfer learning method based on Convolutional Neural Network (CNN) for historical Chinese character recognition is proposed in this paper. A CNN model L is trained by printed Chinese character samples in the source domain. The network structure and weights of model L are used to initialize another CNN model T, which is regarded as the feature extractor and classifier in the target domain. The model T is then fine-tuned by a few labeled historical or handwritten Chinese character samples, and used for final evaluation in the target domain. Several experiments regarding essential factors of the CNNbased transfer learning method are conducted, showing that the proposed method is effective.",
"title": ""
},
{
"docid": "a1367b21acfebfe35edf541cdc6e3f48",
"text": "Mobile phone sensing is an emerging area of interest for researchers as smart phones are becoming the core communication device in people's everyday lives. Sensor enabled mobile phones or smart phones are hovering to be at the center of a next revolution in social networks, green applications, global environmental monitoring, personal and community healthcare, sensor augmented gaming, virtual reality and smart transportation systems. More and more organizations and people are discovering how mobile phones can be used for social impact, including how to use mobile technology for environmental protection, sensing, and to leverage just-in-time information to make our movements and actions more environmentally friendly. In this paper we have described comprehensively all those systems which are using smart phones and mobile phone sensors for humans good will and better human phone interaction.",
"title": ""
},
{
"docid": "96d44888850cf1940fb3a9e35c01f782",
"text": "This article investigates whether, and how, an artificial intelligence (AI) system can be said to use visual, imagery-based representations in a way that is analogous to the use of visual mental imagery by people. In particular, this article aims to answer two fundamental questions about imagery-based AI systems. First, what might visual imagery look like in an AI system, in terms of the internal representations used by the system to store and reason about knowledge? Second, what kinds of intelligent tasks would an imagery-based AI system be able to accomplish? The first question is answered by providing a working definition of what constitutes an imagery-based knowledge representation, and the second question is answered through a literature survey of imagery-based AI systems that have been developed over the past several decades of AI research, spanning task domains of: 1) template-based visual search; 2) spatial and diagrammatic reasoning; 3) geometric analogies and matrix reasoning; 4) naive physics; and 5) commonsense reasoning for question answering. This article concludes by discussing three important open research questions in the study of visual-imagery-based AI systems-on evaluating system performance, learning imagery operators, and representing abstract concepts-and their implications for understanding human visual mental imagery.",
"title": ""
},
{
"docid": "1e5f80dd831b5a1e373a9779f77ca373",
"text": "Direct volume rendered images (DVRIs) have been widely used to reveal structures in volumetric data. However, DVRIs generated by many volume visualization techniques can only partially satisfy users' demands. In this paper, we propose a framework for editing DVRIs, which can also be used for interactive transfer function (TF) design. Our approach allows users to fuse multiple features in distinct DVRIs into a comprehensive one, to blend two DVRIs, and/or to delete features in a DVRI. We further present how these editing operations can generate smooth animations for focus + context visualization. Experimental results on some real volumetric data demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "a2fd33f276a336e2a33d84c2a0abc283",
"text": "The Smart information retrieval project emphasizes completely automatic approaches to the understanding and retrieval of large quantities of text. We continue our work in TREC 3, performing runs in the routing, ad-hoc, and foreign language environments. Our major focus is massive query expansion: adding from 300 to 530 terms to each query. These terms come from known relevant documents in the case of routing, and from just the top retrieved documents in the case of ad-hoc and Spanish. This approach improves e ectiveness from 7% to 25% in the various experiments. Other ad-hoc work extends our investigations into combining global similarities, giving an overall indication of how a document matches a query, with local similarities identifying a smaller part of the document which matches the query. Using an overlapping text window de nition of \\local\", we achieve a 16% improvement.",
"title": ""
},
{
"docid": "614539c43d5fa2986b9aab3a2562fd85",
"text": "Mobile devices such as smart phones are becoming popular, and realtime access to multimedia data in different environments is getting easier. With properly equipped communication services, users can easily obtain the widely distributed videos, music, and documents they want. Because of its usability and capacity requirements, music is more popular than other types of multimedia data. Documents and videos are difficult to view on mobile phones' small screens, and videos' large data size results in high overhead for retrieval. But advanced compression techniques for music reduce the required storage space significantly and make the circulation of music data easier. This means that users can capture their favorite music directly from the Web without going to music stores. Accordingly, helping users find music they like in a large archive has become an attractive but challenging issue over the past few years.",
"title": ""
},
{
"docid": "7e10aa210d6985d757a21b8b6c49ae53",
"text": "Haptic devices for computers and video-game consoles aim to reproduce touch and to engage the user with `force feedback'. Although physical touch is often associated with proximity and intimacy, technologies of touch can reproduce such sensations over a distance, allowing intricate and detailed operations to be conducted through a network such as the Internet. The `virtual handshake' between Boston and London in 2002 is given as an example. This paper is therefore a critical investigation into some technologies of touch, leading to observations about the sociospatial framework in which this technological touching takes place. Haptic devices have now become routinely included with videogame consoles, and have started to be used in computer-aided design and manufacture, medical simulation, and even the cybersex industry. The implications of these new technologies are enormous, as they remould the human ^ computer interface from being primarily audiovisual to being more truly multisensory, and thereby enhance the sense of `presence' or immersion. But the main thrust of this paper is the development of ideas of presence over a large distance, and how this is enhanced by the sense of touch. By using the results of empirical research, including interviews with key figures in haptics research and engineering and personal experience of some of the haptic technologies available, I build up a picture of how `presence', `copresence', and `immersion', themselves paradoxically intangible properties, are guiding the design, marketing, and application of haptic devices, and the engendering and engineering of a set of feelings of interacting with virtual objects, across a range of distances. DOI:10.1068/d394t",
"title": ""
},
{
"docid": "023d547ffb283a377635ad12be9cac99",
"text": "Pretend play has recently been of great interest to researchers studying children's understanding of the mind. One reason for this interest is that pretense seems to require many of the same skills as mental state understanding, and these skills seem to emerge precociously in pretense. Pretend play might be a zone of proximal development, an activity in which children operate at a cognitive level higher than they operate at in nonpretense situations. Alternatively, pretend play might be fool's gold, in that it might appear to be more sophisticated than it really is. This paper first discusses what pretend play is. It then investigates whether pretend play is an area of advanced understanding with reference to 3 skills that are implicated in both pretend play and a theory of mind: the ability to represent one object as two things at once, the ability to see one object as representing another, and the ability to represent mental representations.",
"title": ""
},
{
"docid": "ea04dad2ac1de160f78fa79b33a93b6a",
"text": "OBJECTIVE\nTo construct new size charts for all fetal limb bones.\n\n\nDESIGN\nA prospective, cross sectional study.\n\n\nSETTING\nUltrasound department of a large hospital.\n\n\nSAMPLE\n663 fetuses scanned once only for the purpose of the study at gestations between 12 and 42 weeks.\n\n\nMETHODS\nCentiles were estimated by combining separate regression models fitted to the mean and standard deviation, assuming that the measurements have a normal distribution at each gestational age.\n\n\nMAIN OUTCOME MEASURES\nDetermination of fetal limb lengths from 12 to 42 weeks of gestation.\n\n\nRESULTS\nSize charts for fetal bones (radius, ulna, humerus, tibia, fibula, femur and foot) are presented and compared with previously published data.\n\n\nCONCLUSIONS\nWe present new size charts for fetal limb bones which take into consideration the increasing variability with gestational age. We have compared these charts with other published data; the differences seen may be largely due to methodological differences. As standards for fetal head and abdominal measurements have been published from the same population, we suggest that the use of the new charts may facilitate prenatal diagnosis of skeletal dysplasias.",
"title": ""
},
{
"docid": "3fe5ea7769bfd7e7ea0adcb9ae497dcf",
"text": "Working memory emerges in infancy and plays a privileged role in subsequent adaptive cognitive development. The neural networks important for the development of working memory during infancy remain unknown. We used diffusion tensor imaging (DTI) and deterministic fiber tracking to characterize the microstructure of white matter fiber bundles hypothesized to support working memory in 12-month-old infants (n=73). Here we show robust associations between infants' visuospatial working memory performance and microstructural characteristics of widespread white matter. Significant associations were found for white matter tracts that connect brain regions known to support working memory in older children and adults (genu, anterior and superior thalamic radiations, anterior cingulum, arcuate fasciculus, and the temporal-parietal segment). Better working memory scores were associated with higher FA and lower RD values in these selected white matter tracts. These tract-specific brain-behavior relationships accounted for a significant amount of individual variation above and beyond infants' gestational age and developmental level, as measured with the Mullen Scales of Early Learning. Working memory was not associated with global measures of brain volume, as expected, and few associations were found between working memory and control white matter tracts. To our knowledge, this study is among the first demonstrations of brain-behavior associations in infants using quantitative tractography. The ability to characterize subtle individual differences in infant brain development associated with complex cognitive functions holds promise for improving our understanding of normative development, biomarkers of risk, experience-dependent learning and neuro-cognitive periods of developmental plasticity.",
"title": ""
},
{
"docid": "a5cee6dc248da019159ba7d769406928",
"text": "Coffee is one of the most consumed beverages in the world and is the second largest traded commodity after petroleum. Due to the great demand of this product, large amounts of residues are generated in the coffee industry, which are toxic and represent serious environmental problems. Coffee silverskin and spent coffee grounds are the main coffee industry residues, obtained during the beans roasting, and the process to prepare “instant coffee”, respectively. Recently, some attempts have been made to use these residues for energy or value-added compounds production, as strategies to reduce their toxicity levels, while adding value to them. The present article provides an overview regarding coffee and its main industrial residues. In a first part, the composition of beans and their processing, as well as data about the coffee world production and exportation, are presented. In the sequence, the characteristics, chemical composition, and application of the main coffee industry residues are reviewed. Based on these data, it was concluded that coffee may be considered as one of the most valuable primary products in world trade, crucial to the economies and politics of many developing countries since its cultivation, processing, trading, transportation, and marketing provide employment for millions of people. As a consequence of this big market, the reuse of the main coffee industry residues is of large importance from environmental and economical viewpoints.",
"title": ""
},
{
"docid": "add6957a74f1df33e21bf1923732ddc4",
"text": "Conversational search and recommendation based on user-system dialogs exhibit major differences from conventional search and recommendation tasks in that 1) the user and system can interact for multiple semantically coherent rounds on a task through natural language dialog, and 2) it becomes possible for the system to understand the user needs or to help users clarify their needs by asking appropriate questions from the users directly. We believe the ability to ask questions so as to actively clarify the user needs is one of the most important advantages of conversational search and recommendation. In this paper, we propose and evaluate a unified conversational search/recommendation framework, in an attempt to make the research problem doable under a standard formalization. Specifically, we propose a System Ask -- User Respond (SAUR) paradigm for conversational search, define the major components of the paradigm, and design a unified implementation of the framework for product search and recommendation in e-commerce. To accomplish this, we propose the Multi-Memory Network (MMN) architecture, which can be trained based on large-scale collections of user reviews in e-commerce. The system is capable of asking aspect-based questions in the right order so as to understand the user needs, while (personalized) search is conducted during the conversation, and results are provided when the system feels confident. Experiments on real-world user purchasing data verified the advantages of conversational search and recommendation against conventional search and recommendation algorithms in terms of standard evaluation measures such as NDCG.",
"title": ""
},
{
"docid": "66127055aff890d3f3f9d40bd1875980",
"text": "A simple, but comprehensive model of heat transfer and solidification of the continuous casting of steel slabs is described, including phenomena in the mold and spray regions. The model includes a one-dimensional (1-D) transient finite-difference calculation of heat conduction within the solidifying steel shell coupled with two-dimensional (2-D) steady-state heat conduction within the mold wall. The model features a detailed treatment of the interfacial gap between the shell and mold, including mass and momentum balances on the solid and liquid interfacial slag layers, and the effect of oscillation marks. The model predicts the shell thickness, temperature distributions in the mold and shell, thickness of the resolidified and liquid powder layers, heat-flux profiles down the wide and narrow faces, mold water temperature rise, ideal taper of the mold walls, and other related phenomena. The important effect of the nonuniform distribution of superheat is incorporated using the results from previous threedimensional (3-D) turbulent fluid-flow calculations within the liquid pool. The FORTRAN program CONID has a user-friendly interface and executes in less than 1 minute on a personal computer. Calibration of the model with several different experimental measurements on operating slab casters is presented along with several example applications. In particular, the model demonstrates that the increase in heat flux throughout the mold at higher casting speeds is caused by two combined effects: a thinner interfacial gap near the top of the mold and a thinner shell toward the bottom. This modeling tool can be applied to a wide range of practical problems in continuous casters.",
"title": ""
},
{
"docid": "c3f4f7d75c1b5cfd713ad7a10c887a3a",
"text": "This paper presents an open-source diarization toolkit which is mostly dedicated to speaker and developed by the LIUM. This toolkit includes hierarchical agglomerative clustering methods using well-known measures such as BIC and CLR. Two applications for which the toolkit has been used are presented: one is for broadcast news using the ESTER 2 data and the other is for telephone conversations using the MEDIA corpus.",
"title": ""
},
{
"docid": "e7f54e013e8e0de9cd5fde903dbac813",
"text": "Three concurrent public health problems coexist in the United States: endemic nonmedical use/misuse of opioid analgesics, epidemic overdose fatalities involving opioid analgesics, and endemic chronic pain in adults. These intertwined issues comprise an opioid crisis that has spurred the development of formulations of opioids with abuse-deterrent properties and label claims (OADP). To reduce abuse and misuse of prescription opioids, the federal Food and Drug Administration (FDA) has issued a formal Guidance to drug developers that delineates four categories of testing to generate data sufficient for a description of a product's abuse-deterrent properties, along with associated claims, in its Full Prescribing Information (FPI). This article reviews the epidemiology of the crisis as background for the development of OADP, summarizes the FDA Guidance for Industry regarding abuse-deterrent technologies, and provides an overview of some technologies that are currently employed or are under study for incorporation into OADP. Such technologies include physical and chemical barriers to abuse, combined formulations of opioid agonists and antagonists, inclusion of aversive agents, use of delivery systems that deter abuse, development of new molecular entities and prodrugs, and formulation of products that include some combination of these approaches. Opioids employing these novel technologies are one part of a comprehensive intervention strategy that can deter abuse of prescription opioid analgesics without creating barriers to the safe use of prescription opioids. The maximal public health contribution of OADP will probably occur only when all opioids have FDA-recognized abuse-deterrent properties and label claims.",
"title": ""
},
{
"docid": "fd1b32615aa7eb8f153e495d831bdd93",
"text": "The culture movement challenged the universality of the self-enhancement motive by proposing that the motive is pervasive in individualistic cultures (the West) but absent in collectivistic cultures (the East). The present research posited that Westerners and Easterners use different tactics to achieve the same goal: positive self-regard. Study 1 tested participants from differing cultural backgrounds (the United States vs. Japan), and Study 2 tested participants of differing self-construals (independent vs. interdependent). Americans and independents self-enhanced on individualistic attributes, whereas Japanese and interdependents self-enhanced on collectivistic attributes. Independents regarded individualistic attributes, whereas interdependents regarded collectivistic attributes, as personally important. Attribute importance mediated self-enhancement. Regardless of cultural background or self-construal, people self-enhance on personally important dimensions. Self-enhancement is a universal human motive.",
"title": ""
},
{
"docid": "cca4bd7bf4d9d00a4cf19bf2be785366",
"text": "Sometimes information systems fail or have operational and communication problems because designers may not have knowledge of the domain which is intended to be modeled. The same happens with systems for monitoring. Thus, an ontological model is needed to represent the organizational domain, which is intended to be monitored in order to develop an effective monitoring system. In this way, the purpose of the paper is to present a database based on Enterprise Ontology, which represents and specifies organizational transactions, aiming to be a repository of references or models of organizational transaction executions. Therefore, this database intends to be a generic risk profiles repository of organizational transactions for monitoring applications. Moreover, the Risk Profiles Repository presented in this paper is an innovative vision about continuous monitoring and has demonstrated to be a powerful tool for technological representations of organizational transactions and processes in compliance with the formalisms of a business ontological model.",
"title": ""
},
{
"docid": "b40bbfc19072efc645e5f1d6fb1d89e7",
"text": "With the development of information technologies, a great amount of semantic data is being generated on the web. Consequently, finding efficient ways of accessing this data becomes more and more important. Question answering is a good compromise between intuitiveness and expressivity, which has attracted the attention of researchers from different communities. In this paper, we propose an intelligent questing answering system for answering questions about concepts. It is based on ConceptRDF, which is an RDF presentation of the ConceptNet knowledge base. We use it as a knowledge base for answering questions. Our experimental results show that our approach is promising: it can answer questions about concepts at a satisfactory level of accuracy (reaches 94.5%).",
"title": ""
}
] | scidocsrr |
7168693549485567e291d3e70e28e135 | Context Contrasted Feature and Gated Multi-scale Aggregation for Scene Segmentation | [
{
"docid": "8a77882cfe06eaa88db529432ed31b0c",
"text": "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.",
"title": ""
},
{
"docid": "93314112049e3bccd7853e63afc97f73",
"text": "In this paper, we address the challenging task of scene segmentation. In order to capture the rich contextual dependencies over image regions, we propose Directed Acyclic Graph-Recurrent Neural Networks (DAG-RNN) to perform context aggregation over locally connected feature maps. More specifically, DAG-RNN is placed on top of pre-trained CNN (feature extractor) to embed context into local features so that their representative capability can be enhanced. In comparison with plain CNN (as in Fully Convolutional Networks-FCN), DAG-RNN is empirically found to be significantly more effective at aggregating context. Therefore, DAG-RNN demonstrates noticeably performance superiority over FCNs on scene segmentation. Besides, DAG-RNN entails dramatically less parameters as well as demands fewer computation operations, which makes DAG-RNN more favorable to be potentially applied on resource-constrained embedded devices. Meanwhile, the class occurrence frequencies are extremely imbalanced in scene segmentation, so we propose a novel class-weighted loss to train the segmentation network. The loss distributes reasonably higher attention weights to infrequent classes during network training, which is essential to boost their parsing performance. We evaluate our segmentation network on three challenging public scene segmentation benchmarks: Sift Flow, Pascal Context and COCO Stuff. On top of them, we achieve very impressive segmentation performance.",
"title": ""
}
] | [
{
"docid": "85713bc895a5477e9e99bd4884d01d3c",
"text": "Recently, Fan-out Wafer Level Packaging (FOWLP) has been emerged as a promising technology to meet the ever increasing demands of the consumer electronic products. However, conventional FOWLP technology is limited to small size packages with single chip and Low to Mid-range Input/ Output (I/O) count due to die shift, warpage and RDL scaling issues. In this paper, we are presenting new RDL-First FOWLP approach which enables RDL scaling, overcomes the die shift, die protrusion and warpage challenges of conventional FOWLP, and extend the FOWLP technology for multi-chip and high I/O count package applications. RDL-First FOWLP process integration flow was demonstrated and fabricated test vehicles of large multi-chip package of 20 x 20 mm2 with 3 layers fine pitch RDL of LW/LS of 2μm/2μm and ~2400 package I/Os. Two Through Mold Interconnections (TMI) fabrication approaches (tall Cu pillar and vertical Cu wire) were evaluated on this platform for Package-on-Package (PoP) application. Backside RDL process on over molded Chip-to-Wafer (C2W) with carrier wafer was demonstrated for PoP applications. Laser de-bonding and sacrificial release layer material cleaning processes were established, and successfully used in the integration flow to fabricate the test vehicles. Assembly processes were optimized and successfully demonstrated large multi-chip RDL-first FOWLP package and PoP assembly on test boards. The large multi-chip FOWLP packages samples were passed JEDEC component level test Moisture Sensitivity Test Level 1 & Level 3 (MST L1 & MST L3) and 30 drops of board level drop test, and results will be presented.",
"title": ""
},
{
"docid": "4f096ba7fc6164cdbf5d37676d943fa8",
"text": "This work presents an intelligent clothes search system based on domain knowledge, targeted at creating a virtual assistant to search clothes matched to fashion and userpsila expectation using all what have already been in real closet. All what garment essentials and fashion knowledge are from visual images. Users can simply submit the desired image keywords, such as elegant, sporty, casual, and so on, and occasion type, such as formal meeting, outdoor dating, and so on, to the system. And then the fashion style recognition module is activated to search the desired clothes within the personal garment database. Category learning with supervised neural networking is applied to cluster garments into different impression groups. The input stimuli of the neural network are three sensations, warmness, loudness, and softness, which are transformed from the physical garment essentials like major color tone, print type, and fabric material. The system aims to provide such an intelligent user-centric services system functions as a personal fashion advisor.",
"title": ""
},
{
"docid": "6fd1e9896fc1aaa79c769bd600d9eac3",
"text": "In future planetary exploration missions, rovers will be required to autonomously traverse challenging environments. Much of the previous work in robot motion planning cannot be successfully applied to the rough-terrain planning problem. A model-based planning method is presented in this paper that is computationally efficient and takes into account uncertainty in the robot model, terrain model, range sensor data, and rover pathfollowing errors. It is based on rapid path planning through the visible terrain map with a simple graph-search algorithm, followed by a physics-based evaluation of the path with a rover model. Simulation results are presented which demonstrate the method’s effectiveness.",
"title": ""
},
{
"docid": "29749091f6ccdc0c2697c9faf3682c90",
"text": "In traditional video conferencing systems, it is impossible for users to have eye contact when looking at the conversation partner’s face displayed on the screen, due to the disparity between the locations of the camera and the screen. In this work, we implemented a gaze correction system that can automatically maintain eye contact by replacing the eyes of the user with the direct looking eyes (looking directly into the camera) captured in the initialization stage. Our real-time system has good robustness against different lighting conditions and head poses, and it provides visually convincing and natural results while relying only on a single webcam that can be positioned almost anywhere around the",
"title": ""
},
{
"docid": "4859363a5f64977336d107794251a203",
"text": "The paper treats a modular program in which transfers of control between modules follow a semi-Markov process. Each module is failure-prone, and the different failure processes are assumed to be Poisson. The transfers of control between modules (interfaces) are themselves subject to failure. The overall failure process of the program is described, and an asymptotic Poisson process approximation is given for the case when the individual modules and interfaces are very reliable. A simple formula gives the failure rate of the overall program (and hence mean time between failures) under this limiting condition. The remainder of the paper treats the consequences of failures. Each failure results in a cost, represented by a random variable with a distribution typical of the type of failure. The quantity of interest is the total cost of running the program for a time t, and a simple approximating distribution is given for large t. The parameters of this limiting distribution are functions only of the means and variances of the underlying distributions, and are thus readily estimable. A calculation of program availability is given as an example of the cost process. There follows a brief discussion of methods of estimating the parameters of the model, with suggestions of areas in which it might be used.",
"title": ""
},
{
"docid": "7cecfd37e44b26a67bee8e9c7dd74246",
"text": "Forecasting hourly spot prices for real-time electricity usage is a challenging task. This paper investigates a series of forecasting methods to 90 and 180 days of load data collection acquired from the Iberian Electricity Market (MIBEL). This dataset was used to train and test multiple forecast models. The Mean Absolute Percentage Error (MAPE) for the proposed Hybrid combination of Auto Regressive Integrated Moving Average (ARIMA) and Generalized Linear Model (GLM) was compared against ARIMA, GLM, Random forest (RF) and Support Vector Machines (SVM) methods. The results indicate significant improvement in MAPE and correlation co-efficient values for the proposed hybrid ARIMA-GLM method.",
"title": ""
},
{
"docid": "3984af6a6b9dbae761490e8595d22d60",
"text": "In 2013, the IEEE Future Directions Committee (FDC) formed an SDN work group to explore the amount of interest in forming an IEEE Software-Defined Network (SDN) Community. To this end, a Workshop on “SDN for Future Networks and Services” (SDN4FNS’13) was organized in Trento, Italy (Nov. 11-13 2013). Following the results of the workshop, in this paper, we have further analyzed scenarios, prior-art, state of standardization, and further discussed the main technical challenges and socio-economic aspects of SDN and virtualization in future networks and services. A number of research and development directions have been identified in this white paper, along with a comprehensive analysis of the technical feasibility and business availability of those fundamental technologies. A radical industry transition towards the “economy of information through softwarization” is expected in the near future. Keywords—Software-Defined Networks, SDN, Network Functions Virtualization, NFV, Virtualization, Edge, Programmability, Cloud Computing.",
"title": ""
},
{
"docid": "548fb90bf9d665e57ced0547db1477b7",
"text": "In the application of face recognition, eyeglasses could significantly degrade the recognition accuracy. A feasible method is to collect large-scale face images with eyeglasses for training deep learning methods. However, it is difficult to collect the images with and without glasses of the same identity, so that it is difficult to optimize the intra-variations caused by eyeglasses. In this paper, we propose to address this problem in a virtual synthesis manner. The high-fidelity face images with eyeglasses are synthesized based on 3D face model and 3D eyeglasses. Models based on deep learning methods are then trained on the synthesized eyeglass face dataset, achieving better performance than previous ones. Experiments on the real face database validate the effectiveness of our synthesized data for improving eyeglass face recognition performance.",
"title": ""
},
{
"docid": "c091e5b24dc252949b3df837969e263a",
"text": "The emergence of powerful portable computers, along with advances in wireless communication technologies, has made mobile computing a reality. Among the applications that are finding their way to the market of mobile computingthose that involve data managementhold a prominent position. In the past few years, there has been a tremendous surge of research in the area of data management in mobile computing. This research has produced interesting results in areas such as data dissemination over limited bandwith channels, location-dependent querying of data, and advanced interfaces for mobile computers. This paper is an effort to survey these techniques and to classify this research in a few broad areas.",
"title": ""
},
{
"docid": "61bb811aa336e77d2549c51939f9668d",
"text": "Policy languages (such as privacy and rights) have had little impact on the wider community. Now that Social Networks have taken off, the need to revisit Policy languages and realign them towards Social Networks requirements has become more apparent. One such language is explored as to its applicability to the Social Networks masses. We also argue that policy languages alone are not sufficient and thus they should be paired with reasoning mechanisms to provide precise and unambiguous execution models of the policies. To this end we propose a computationally oriented model to represent, reason with and execute policies for Social Networks.",
"title": ""
},
{
"docid": "0d2b905bc0d7f117d192a8b360cc13f0",
"text": "We investigate a previously unknown phase of phosphorus that shares its layered structure and high stability with the black phosphorus allotrope. We find the in-plane hexagonal structure and bulk layer stacking of this structure, which we call \"blue phosphorus,\" to be related to graphite. Unlike graphite and black phosphorus, blue phosphorus displays a wide fundamental band gap. Still, it should exfoliate easily to form quasi-two-dimensional structures suitable for electronic applications. We study a likely transformation pathway from black to blue phosphorus and discuss possible ways to synthesize the new structure.",
"title": ""
},
{
"docid": "263c04402cfe80649b1d3f4a8578e99b",
"text": "This paper presents M3Express (Modular-Mobile-Multirobot), a new design for a low-cost modular robot. The robot is self-mobile, with three independently driven wheels that also serve as connectors. The new connectors can be automatically operated, and are based on stationary magnets coupled to mechanically actuated ferromagnetic yoke pieces. Extensive use is made of plastic castings, laser cut plastic sheets, and low-cost motors and electronic components. Modules interface with a host PC via Bluetooth® radio. An off-board camera, along with a set of modules and a control PC form a convenient, low-cost system for rapidly developing and testing control algorithms for modular reconfigurable robots. Experimental results demonstrate mechanical docking, connector strength, and accuracy of dead reckoning locomotion.",
"title": ""
},
{
"docid": "5e0cff7f2b8e5aa8d112eacf2f149d60",
"text": "THEORIES IN AI FALL INT O TWO broad categories: mechanismtheories and contenttheories. Ontologies are content the ories about the sor ts of objects, properties of objects,and relations between objects tha t re possible in a specif ed domain of kno wledge. They provide potential ter ms for descr ibing our knowledge about the domain. In this article, we survey the recent de velopment of the f ield of ontologies in AI. We point to the some what different roles ontologies play in information systems, naturallanguage under standing, and knowledgebased systems. Most r esear ch on ontologies focuses on what one might characterize as domain factual knowledge, because kno wlede of that type is par ticularly useful in natural-language under standing. There is another class of ontologies that are important in KBS—one that helps in shar ing knoweldge about reasoning str ategies or pr oblemsolving methods. In a f ollow-up article, we will f ocus on method ontolo gies.",
"title": ""
},
{
"docid": "0cf97758f5f7dab46e969af14bb36db9",
"text": "The design complexity of modern high performance processors calls for innovative design methodologies for achieving time-to-market goals. New design techniques are also needed to curtail power increases that inherently arise from ever increasing performance targets. This paper describes new design approaches employed by the POWER8 processor design team to address complexity and power consumption challenges. Improvements in productivity are attained by leveraging a new and more synthesis-centric design methodology. New optimization strategies for synthesized macros allow power reduction without sacrificing performance. These methodology innovations contributed to the industry leading performance of the POWER8 processor. Overall, POWER8 delivers a 2.5x increase in per-socket performance over its predecessor, POWER7+, while maintaining the same power dissipation.",
"title": ""
},
{
"docid": "275aef345bf090486831faf7b243ac99",
"text": "Honey bee colony feeding trials were conducted to determine whether differential effects of carbohydrate feeding (sucrose syrup (SS) vs. high fructose corn syrup, or HFCS) could be measured between colonies fed exclusively on these syrups. In one experiment, there was a significant difference in mean wax production between the treatment groups and a significant interaction between time and treatment for the colonies confined in a flight arena. On average, the colonies supplied with SS built 7916.7 cm(2) ± 1015.25 cm(2) honeycomb, while the colonies supplied with HFCS built 4571.63 cm(2) ± 786.45 cm(2). The mean mass of bees supplied with HFCS was 4.65 kg (± 0.97 kg), while those supplied with sucrose had a mean of 8.27 kg (± 1.26). There was no significant difference between treatment groups in terms of brood rearing. Differences in brood production were complicated due to possible nutritional deficiencies experienced by both treatment groups. In the second experiment, colonies supplemented with SS through the winter months at a remote field site exhibited increased spring brood production when compared to colonies fed with HFCS. The differences in adult bee populations were significant, having an overall average of 10.0 ± 1.3 frames of bees fed the sucrose syrup between November 2008 and April 2009, compared to 7.5 ± 1.6 frames of bees fed exclusively on HFCS. For commercial queen beekeepers, feeding the right supplementary carbohydrates could be especially important, given the findings of this study.",
"title": ""
},
{
"docid": "e7afe834b7ca7be145cb9db57febab39",
"text": "Current approaches to cross-lingual sentiment analysis try to leverage the wealth of labeled English data using bilingual lexicons, bilingual vector space embeddings, or machine translation systems. Here we show that it is possible to use a single linear transformation, with as few as 2000 word pairs, to capture fine-grained sentiment relationships between words in a cross-lingual setting. We apply these cross-lingual sentiment models to a diverse set of tasks to demonstrate their functionality in a non-English context. By effectively leveraging English sentiment knowledge without the need for accurate translation, we can analyze and extract features from other languages with scarce data at a very low cost, thus making sentiment and related analyses for many languages inexpensive.",
"title": ""
},
{
"docid": "6b52bb06c140e5f55f7094cbbf906769",
"text": "A method for tracking and predicting cloud movement using ground based sky imagery is presented. Sequences of partial sky images, with each image taken one second apart with a size of 640 by 480 pixels, were processed to determine the time taken for clouds to reach a user defined region in the image or the Sun. The clouds were first identified by segmenting the image based on the difference between the blue and red colour channels, producing a binary detection image. Good features to track were then located in the image and tracked utilising the Lucas-Kanade method for optical flow. From the trajectory of the tracked features and the binary detection image, cloud signals were generated. The trajectory of the individual features were used to determine the risky cloud signals (signals that pass over the user defined region or Sun). Time to collision estimates were produced based on merging these risky cloud signals. Estimates of times up to 40 seconds were achieved with error in the estimate increasing when the estimated time is larger. The method presented has the potential for tracking clouds travelling in different directions and at different velocities.",
"title": ""
},
{
"docid": "c73d2c65892d5f257b3d4ab1710cd63f",
"text": "Neural-network training can be slow and energy intensive, owing to the need to transfer the weight data for the network between conventional digital memory chips and processor chips. Analogue non-volatile memory can accelerate the neural-network training algorithm known as backpropagation by performing parallelized multiply–accumulate operations in the analogue domain at the location of the weight data. However, the classification accuracies of such in situ training using non-volatile-memory hardware have generally been less than those of software-based training, owing to insufficient dynamic range and excessive weight-update asymmetry. Here we demonstrate mixed hardware–software neural-network implementations that involve up to 204,900 synapses and that combine long-term storage in phase-change memory, near-linear updates of volatile capacitors and weight-data transfer with ‘polarity inversion’ to cancel out inherent device-to-device variations. We achieve generalization accuracies (on previously unseen data) equivalent to those of software-based training on various commonly used machine-learning test datasets (MNIST, MNIST-backrand, CIFAR-10 and CIFAR-100). The computational energy efficiency of 28,065 billion operations per second per watt and throughput per area of 3.6 trillion operations per second per square millimetre that we calculate for our implementation exceed those of today’s graphical processing units by two orders of magnitude. This work provides a path towards hardware accelerators that are both fast and energy efficient, particularly on fully connected neural-network layers. Analogue-memory-based neural-network training using non-volatile-memory hardware augmented by circuit simulations achieves the same accuracy as software-based training but with much improved energy efficiency and speed.",
"title": ""
},
{
"docid": "7f81e1d6a6955cec178c1c811810322b",
"text": "The MATLAB toolbox YALMIP is introduced. It is described how YALMIP can be used to model and solve optimization problems typically occurring in systems and control theory. In this paper, free MATLAB toolbox YALMIP, developed initially to model SDPs and solve these by interfacing eternal solvers. The toolbox makes development of optimization problems in general, and control oriented SDP problems in particular, extremely simple. In fact, learning 3 YALMIP commands is enough for most users to model and solve the optimization problems",
"title": ""
}
] | scidocsrr |
48bcd1cdfb13edcdaf0193991c927ed8 | High Resolution Face Completion with Multiple Controllable Attributes via Fully End-to-End Progressive Generative Adversarial Networks | [
{
"docid": "79ac5ca53e3f6be4bb5fe601ded1582d",
"text": "We propose a method for automatically guiding patch-based image completion using mid-level structural cues. Our method first estimates planar projection parameters, softly segments the known region into planes, and discovers translational regularity within these planes. This information is then converted into soft constraints for the low-level completion algorithm by defining prior probabilities for patch offsets and transformations. Our method handles multiple planes, and in the absence of any detected planes falls back to a baseline fronto-parallel image completion algorithm. We validate our technique through extensive comparisons with state-of-the-art algorithms on a variety of scenes.",
"title": ""
},
{
"docid": "5cfc4911a59193061ab55c2ce5013272",
"text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless, but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.",
"title": ""
}
] | [
{
"docid": "58c488555240ded980033111a9657be4",
"text": "BACKGROUND\nThe management of opioid-induced constipation (OIC) is often complicated by the fact that clinical measures of constipation do not always correlate with patient perception. As the discomfort associated with OIC can lead to poor compliance with the opioid treatment, a shift in focus towards patient assessment is often advocated.\n\n\nSCOPE\nThe Bowel Function Index * (BFI) is a new patient-assessment scale that has been developed and validated specifically for OIC. It is a physician-administered, easy-to-use scale made up of three items (ease of defecation, feeling of incomplete bowel evacuation, and personal judgement of constipation). An extensive analysis has been performed in order to validate the BFI as reliable, stable, clinically valid, and responsive to change in patients with OIC, with a 12-point change in score constituting a clinically relevant change in constipation.\n\n\nFINDINGS\nThe results of the validation analysis were based on major clinical trials and have been further supported by data from a large open-label study and a pharmaco-epidemiological study, in which the BFI was used effectively to assess OIC in a large population of patients treated with opioids. Although other patient self-report scales exist, the BFI offers several unique advantages. First, by being physician-administered, the BFI minimizes reading and comprehension difficulties; second, by offering general and open-ended questions which capture patient perspective, the BFI is likely to detect most patients suffering from OIC; third, by being short and easy-to-use, it places little burden on the patient, thereby increasing the likelihood of gathering accurate information.\n\n\nCONCLUSION\nAltogether, the available data suggest that the BFI will be useful in clinical trials and in daily practice.",
"title": ""
},
{
"docid": "447e62529ed6b1b428e6edd78aabb637",
"text": "Dexterity robotic hands can (Cummings, 1996) greatly enhance the functionality of humanoid robots, but the making of such hands with not only human-like appearance but also the capability of performing the natural movement of social robots is a challenging problem. The first challenge is to create the hand’s articulated structure and the second challenge is to actuate it to move like a human hand. A robotic hand for humanoid robot should look and behave human like. At the same time, it also needs to be light and cheap for widely used purposes. We start with studying the biomechanical features of a human hand and propose a simplified mechanical model of robotic hands, which can achieve the important local motions of the hand. Then, we use 3D modeling techniques to create a single interlocked hand model that integrates pin and ball joints to our hand model. Compared to other robotic hands, our design saves the time required for assembling and adjusting, which makes our robotic hand ready-to-use right after the 3D printing is completed. Finally, the actuation of the hand is realized by cables and motors. Based on this approach, we have designed a cost-effective, 3D printable, compact, and lightweight robotic hand. Our robotic hand weighs 150 g, has 15 joints, which are similar to a real human hand, and 6 Degree of Freedom (DOFs). It is actuated by only six small size actuators. The wrist connecting part is also integrated into the hand model and could be customized for different robots such as Nadine robot (Magnenat Thalmann et al., 2017). The compact servo bed can be hidden inside the Nadine robot’s sleeve and the whole robotic hand platform will not cause extra load to her arm as the total weight (150 g robotic hand and 162 g artificial skin) is almost the same as her previous unarticulated robotic hand which is 348 g. The paper also shows our test results with and without silicon artificial hand skin, and on Nadine robot.",
"title": ""
},
{
"docid": "0c5143b222e1a8956dfb058b222ddc28",
"text": "Partially observed control problems are a challenging aspect of reinforcement learning. We extend two related, model-free algorithms for continuous control – deterministic policy gradient and stochastic value gradient – to solve partially observed domains using recurrent neural networks trained with backpropagation through time. We demonstrate that this approach, coupled with long-short term memory is able to solve a variety of physical control problems exhibiting an assortment of memory requirements. These include the short-term integration of information from noisy sensors and the identification of system parameters, as well as long-term memory problems that require preserving information over many time steps. We also demonstrate success on a combined exploration and memory problem in the form of a simplified version of the well-known Morris water maze task. Finally, we show that our approach can deal with high-dimensional observations by learning directly from pixels. We find that recurrent deterministic and stochastic policies are able to learn similarly good solutions to these tasks, including the water maze where the agent must learn effective search strategies.",
"title": ""
},
{
"docid": "270319820586068f09954ec9c358232f",
"text": "Recent years have seen exciting developments in join algorithms. In 2008, Atserias, Grohe and Marx (henceforth AGM) proved a tight bound on the maximum result size of a full conjunctive query, given constraints on the input rel ation sizes. In 2012, Ngo, Porat, R «e and Rudra (henceforth NPRR) devised a join algorithm with worst-case running time proportional to the AGM bound [8]. Our commercial database system LogicBlox employs a novel join algorithm, leapfrog triejoin, which compared conspicuously well to the NPRR algorithm in preliminary benchmarks. This spurred us to analyze the complexity of leapfrog triejoin. In this pa per we establish that leapfrog triejoin is also worst-case o ptimal, up to a log factor, in the sense of NPRR. We improve on the results of NPRR by proving that leapfrog triejoin achieves worst-case optimality for finer-grained classes o f database instances, such as those defined by constraints on projection cardinalities. We show that NPRR is not worstcase optimal for such classes, giving a counterexamplewher e leapfrog triejoin runs inO(n log n) time and NPRR runs in Θ(n) time. On a practical note, leapfrog triejoin can be implemented using conventional data structures such as B-trees, and extends naturally to ∃1 queries. We believe our algorithm offers a useful addition to the existing toolbox o f join algorithms, being easy to absorb, simple to implement, and having a concise optimality proof.",
"title": ""
},
{
"docid": "d7b51db5584534f74c34b281c7fb1cad",
"text": "Nodes residing in different parts of a graph can have similar structural roles within their local network topology. The identification of such roles provides key insight into the organization of networks and can be used for a variety of machine learning tasks. However, learning structural representations of nodes is a challenging problem, and it has typically involved manually specifying and tailoring topological features for each node. In this paper, we develop GraphWave, a method that represents each node's network neighborhood via a low-dimensional embedding by leveraging heat wavelet diffusion patterns. Instead of training on hand-selected features, GraphWave learns these embeddings in an unsupervised way. We mathematically prove that nodes with similar network neighborhoods will have similar GraphWave embeddings even though these nodes may reside in very different parts of the network, and our method scales linearly with the number of edges. Experiments in a variety of different settings demonstrate GraphWave's real-world potential for capturing structural roles in networks, and our approach outperforms existing state-of-the-art baselines in every experiment, by as much as 137%.",
"title": ""
},
{
"docid": "161e962e8e68a941324ec7b20b0ae877",
"text": "The number of malicious programs has grown both in number and in sophistication. Analyzing the malicious intent of vast amounts of data requires huge resources and thus, effective categorization of malware is required. In this paper, the content of a malicious program is represented as an entropy stream, where each value describes the amount of entropy of a small chunk of code in a specific location of the file. Wavelet transforms are then applied to this entropy signal to describe the variation in the entropic energy. Motivated by the visual similarity between streams of entropy of malicious software belonging to the same family, we propose a file agnostic deep learning approach for categorization of malware. Our method exploits the fact that most variants are generated by using common obfuscation techniques and that compression and encryption algorithms retain some properties present in the original code. This allows us to find discriminative patterns that almost all variants in a family share. Our method has been evaluated using the data provided by Microsoft for the BigData Innovators Gathering Anti-Malware Prediction Challenge, and achieved promising results in comparison with the State of the Art.",
"title": ""
},
{
"docid": "bf46f77a03bd6915145bee472bde6525",
"text": "©2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. DOI: 10.1109/IJCNN.2018.8489656 Abstract—Recurrent neural networks are now the state-ofthe-art in natural language processing because they can build rich contextual representations and process texts of arbitrary length. However, recent developments on attention mechanisms have equipped feedforward networks with similar capabilities, hence enabling faster computations due to the increase in the number of operations that can be parallelized. We explore this new type of architecture in the domain of question-answering and propose a novel approach that we call Fully Attention Based Information Retriever (FABIR). We show that FABIR achieves competitive results in the Stanford Question Answering Dataset (SQuAD) while having fewer parameters and being faster at both learning and inference than rival methods.",
"title": ""
},
{
"docid": "471f4399e42aa0b00effac824a309ad6",
"text": "Resource management in Cloud Computing has been dominated by system-level virtual machines to enable the management of resources using a coarse grained approach, largely in a manner independent from the applications running on these infrastructures. However, in such environments, although different types of applications can be running, the resources are delivered equally to each one, missing the opportunity to manage the available resources in a more efficient and application driven way. So, as more applications target managed runtimes, high level virtualization is a relevant abstraction layer that has not been properly explored to enhance resource usage, control, and effectiveness. We propose a VM economics model to manage cloud infrastructures, governed by a quality-of-execution (QoE) metric and implemented by an extended virtual machine. The Adaptive and Resource-Aware Java Virtual Machine (ARA-JVM) is a cluster-enabled virtual execution environment with the ability to monitor base mechanisms (e.g. thread cheduling, garbage collection, memory or network consumptions) to assess application's performance and reconfigure these mechanisms in runtime according to previously defined resource allocation policies. Reconfiguration is driven by incremental gains in quality-of-execution (QoE), used by the VM economics model to balance relative resource savings and perceived performance degradation. Our work in progress, aims to allow cloud providers to exchange resource slices among virtual machines, continually addressing where those resources are required, while being able to determine where the reduction will be more economically effective, i.e., will contribute in lesser extent to performance degradation.",
"title": ""
},
{
"docid": "f48cc4c9884bac97e50e222776f15413",
"text": "An active contour tracker is presented which can be used for gaze-based interaction with off-the-shelf components. The underlying contour model is based on image statistics and avoids explicit feature detection. The tracker combines particle filtering with the EM algorithm. The method exhibits robustness to light changes and camera defocusing; consequently, the model is well suited for use in systems using off-the-shelf hardware, but may equally well be used in controlled environments, such as in IR-based settings. The method is even capable of handling sudden changes between IR and non-IR light conditions, without changing parameters. For the purpose of determining where the user is looking, calibration is usually needed. The number of calibration points used in different methods varies from a few to several thousands, depending on the prior knowledge used on the setup and equipment. We examine basic properties of gaze determination when the geometry of the camera, screen, and user is unknown. In particular we present a lower bound on the number of calibration points needed for gaze determination on planar objects, and we examine degenerate configurations. Based on this lower bound we apply a simple calibration procedure, to facilitate gaze estimation. 2004 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "ce839ea9b5cc8de275b634c920f45329",
"text": "As a matter of fact, most natural structures are complex topology structures with intricate holes or irregular surface morphology. These structures can be used as lightweight infill, porous scaffold, energy absorber or micro-reactor. With the rapid advancement of 3D printing, the complex topology structures can now be efficiently and accurately fabricated by stacking layered materials. The novel manufacturing technology and application background put forward new demands and challenges to the current design methodologies of complex topology structures. In this paper, a brief review on the development of recent complex topology structure design methods was provided; meanwhile, the limitations of existing methods and future work are also discussed in the end.",
"title": ""
},
{
"docid": "9ef55f4e23603bf2e47564191ae428d1",
"text": "Shulman, Carl. 2010. Omohundro's \" Basic AI Drives \" and Catastrophic Risks.",
"title": ""
},
{
"docid": "568c7ef495bfc10936398990e72a04d2",
"text": "Accurate estimation of heart rates from photoplethysmogram (PPG) signals during intense physical activity is a very challenging problem. This is because strenuous and high intensity exercise can result in severe motion artifacts in PPG signals, making accurate heart rate (HR) estimation difficult. In this study we investigated a novel technique to accurately reconstruct motion-corrupted PPG signals and HR based on time-varying spectral analysis. The algorithm is called Spectral filter algorithm for Motion Artifacts and heart rate reconstruction (SpaMA). The idea is to calculate the power spectral density of both PPG and accelerometer signals for each time shift of a windowed data segment. By comparing time-varying spectra of PPG and accelerometer data, those frequency peaks resulting from motion artifacts can be distinguished from the PPG spectrum. The SpaMA approach was applied to three different datasets and four types of activities: (1) training datasets from the 2015 IEEE Signal Process. Cup Database recorded from 12 subjects while performing treadmill exercise from 1 km/h to 15 km/h; (2) test datasets from the 2015 IEEE Signal Process. Cup Database recorded from 11 subjects while performing forearm and upper arm exercise. (3) Chon Lab dataset including 10 min recordings from 10 subjects during treadmill exercise. The ECG signals from all three datasets provided the reference HRs which were used to determine the accuracy of our SpaMA algorithm. The performance of the SpaMA approach was calculated by computing the mean absolute error between the estimated HR from the PPG and the reference HR from the ECG. The average estimation errors using our method on the first, second and third datasets are 0.89, 1.93 and 1.38 beats/min respectively, while the overall error on all 33 subjects is 1.86 beats/min and the performance on only treadmill experiment datasets (22 subjects) is 1.11 beats/min. Moreover, it was found that dynamics of heart rate variability can be accurately captured using the algorithm where the mean Pearson's correlation coefficient between the power spectral densities of the reference and the reconstructed heart rate time series was found to be 0.98. These results show that the SpaMA method has a potential for PPG-based HR monitoring in wearable devices for fitness tracking and health monitoring during intense physical activities.",
"title": ""
},
{
"docid": "07e2dae7b1ed0c7164e59bd31b0d3f87",
"text": "The requirement to perform complicated statistic analysis of big data by institutions of engineering, scientific research, health care, commerce, banking and computer research is immense. However, the limitations of the widely used current desktop software like R, excel, minitab and spss gives a researcher limitation to deal with big data. The big data analytic tools like IBM Big Insight, Revolution Analytics, and tableau software are commercial and heavily license. Still, to deal with big data, client has to invest in infrastructure, installation and maintenance of hadoop cluster to deploy these analytical tools. Apache Hadoop is an open source distributed computing framework that uses commodity hardware. With this project, I intend to collaborate Apache Hadoop and R software over the on the Cloud. Objective is to build a SaaS (Software-as-a-Service) analytic platform that stores & analyzes big data using open source Apache Hadoop and open source R software. The benefits of this cloud based big data analytical service are user friendliness & cost as it is developed using open-source software. The system is cloud based so users have their own space in cloud where user can store there data. User can browse data, files, folders using browser and arrange datasets. User can select dataset and analyze required dataset and store result back to cloud storage. Enterprise with a cloud environment can save cost of hardware, upgrading software, maintenance or network configuration, thus it making it more economical.",
"title": ""
},
{
"docid": "f3d934a354b44c79dfafb6bbb79b7f7c",
"text": "The large number of rear end collisions due to driver inattention has been identified as a major automotive safety issue. Even a short advance warning can significantly reduce the number and severity of the collisions. This paper describes a vision based forward collision warning (FCW) system for highway safety. The algorithm described in this paper computes time to contact (TTC) and possible collision course directly from the size and position of the vehicles in the image - which are the natural measurements for a vision based system - without having to compute a 3D representation of the scene. The use of a single low cost image sensor results in an affordable system which is simple to install. The system has been implemented on real-time hardware and has been test driven on highways. Collision avoidance tests have also been performed on test tracks.",
"title": ""
},
{
"docid": "572705832f767399e7de3bfb790bee0f",
"text": "This paper presents the research and development of two terahertz imaging systems based on photonic and electronic principles, respectively. As part of this study, a survey of ongoing research in the field of terahertz imaging is provided focusing on security applications. Existing terahertz imaging systems are reviewed in terms of the employed architecture and data processing strategies. Active multichannel measurement method is found to be promising for real-time applications among the various terahertz imaging techniques and is chosen as a basis for the imaging instruments presented in this paper. An active system operation allows for a wide dynamic range, which is important for image quality. The described instruments employ a multichannel high-sensitivity heterodyne architecture and aperture filling techniques, with close to real-time image acquisition time. In the case of the photonic imaging system, mechanical scanning is completely obsolete. We show 2-D images of simulated 3-D image data for both systems. The reconstruction algorithms are suitable for 3-D real-time operation, only limited by mechanical scanning.",
"title": ""
},
{
"docid": "4bf5fd6fdb2cb82fa13abdb13653f3ac",
"text": "Customer relationship management (CRM) has once again gained prominence amongst academics and practitioners. However, there is a tremendous amount of confusion regarding its domain and meaning. In this paper, the authors explore the conceptual foundations of CRM by examining the literature on relationship marketing and other disciplines that contribute to the knowledge of CRM. A CRM process framework is proposed that builds on other relationship development process models. CRM implementation challenges as well as CRM's potential to become a distinct discipline of marketing are also discussed in this paper. JEL Classification Codes: M31.",
"title": ""
},
{
"docid": "ecd7da1f742b4c92f3c748fd19098159",
"text": "Abstract. Today, a paradigm shift is being observed in science, where the focus is gradually shifting toward the cloud environments to obtain appropriate, robust and affordable services to deal with Big Data challenges (Sharma et al. 2014, 2015a, 2015b). Cloud computing avoids any need to locally maintain the overly scaled computing infrastructure that include not only dedicated space, but the expensive hardware and software also. In this paper, we study the evolution of as-a-Service modalities, stimulated by cloud computing, and explore the most complete inventory of new members beyond traditional cloud computing stack.",
"title": ""
},
{
"docid": "7af567a60ce0bc0a67d7431184ac54ac",
"text": "Users of social media sites like Facebook and Twitter rely on crowdsourced content recommendation systems (e.g., Trending Topics) to retrieve important and useful information.",
"title": ""
}
] | scidocsrr |
45152911817d270e1896874a457c297a | Type-Aware Distantly Supervised Relation Extraction with Linked Arguments | [
{
"docid": "afd00b4795637599f357a7018732922c",
"text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.",
"title": ""
},
{
"docid": "79ad9125b851b6d2c3ed6fb1c5cf48e1",
"text": "In this paper, we extend distant supervision (DS) based on Wikipedia for Relation Extraction (RE) by considering (i) relations defined in external repositories, e.g. YAGO, and (ii) any subset of Wikipedia documents. We show that training data constituted by sentences containing pairs of named entities in target relations is enough to produce reliable supervision. Our experiments with state-of-the-art relation extraction models, trained on the above data, show a meaningful F1 of 74.29% on a manually annotated test set: this highly improves the state-of-art in RE using DS. Additionally, our end-to-end experiments demonstrated that our extractors can be applied to any general text document.",
"title": ""
},
{
"docid": "c4a925ced6eb9bea9db96136905c3e19",
"text": "Knowledge of objects and their parts, meronym relations, are at the heart of many question-answering systems, but manually encoding these facts is impractical. Past researchers have tried hand-written patterns, supervised learning, and bootstrapped methods, but achieving both high precision and recall has proven elusive. This paper reports on a thorough exploration of distant supervision to learn a meronym extractor for the domain of college biology. We introduce a novel algorithm, generalizing the ``at least one'' assumption of multi-instance learning to handle the case where a fixed (but unknown) percentage of bag members are positive examples. Detailed experiments compare strategies for mention detection, negative example generation, leveraging out-of-domain meronyms, and evaluate the benefit of our multi-instance percentage model.",
"title": ""
},
{
"docid": "44582f087f9bb39d6e542ff7b600d1c7",
"text": "We propose a new deterministic approach to coreference resolution that combines the global information and precise features of modern machine-learning models with the transparency and modularity of deterministic, rule-based systems. Our sieve architecture applies a battery of deterministic coreference models one at a time from highest to lowest precision, where each model builds on the previous model's cluster output. The two stages of our sieve-based architecture, a mention detection stage that heavily favors recall, followed by coreference sieves that are precision-oriented, offer a powerful way to achieve both high precision and high recall. Further, our approach makes use of global information through an entity-centric model that encourages the sharing of features across all mentions that point to the same real-world entity. Despite its simplicity, our approach gives state-of-the-art performance on several corpora and genres, and has also been incorporated into hybrid state-of-the-art coreference systems for Chinese and Arabic. Our system thus offers a new paradigm for combining knowledge in rule-based systems that has implications throughout computational linguistics.",
"title": ""
},
{
"docid": "9c44aba7a9802f1fe95fbeb712c23759",
"text": "In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a sentence and a knowledge base refer to the same entity pair, this approach heuristically labels the sentence with the corresponding relation in the knowledge base. However, this heuristic can fail with the result that some sentences are labeled wrongly. This noisy labeled data causes poor extraction performance. In this paper, we propose a method to reduce the number of wrong labels. We present a novel generative model that directly models the heuristic labeling process of distant supervision. The model predicts whether assigned labels are correct or wrong via its hidden variables. Our experimental results show that this model detected wrong labels with higher performance than baseline methods. In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.",
"title": ""
},
{
"docid": "904db9e8b0deb5027d67bffbd345b05f",
"text": "Entity Recognition (ER) is a key component of relation extraction systems and many other natural-language processing applications. Unfortunately, most ER systems are restricted to produce labels from to a small set of entity classes, e.g., person, organization, location or miscellaneous. In order to intelligently understand text and extract a wide range of information, it is useful to more precisely determine the semantic classes of entities mentioned in unstructured text. This paper defines a fine-grained set of 112 tags, formulates the tagging problem as multi-class, multi-label classification, describes an unsupervised method for collecting training data, and presents the FIGER implementation. Experiments show that the system accurately predicts the tags for entities. Moreover, it provides useful information for a relation extraction system, increasing the F1 score by 93%. We make FIGER and its data available as a resource for future work.",
"title": ""
}
] | [
{
"docid": "5e14acfc68e8cb1ae7ea9b34eba420e0",
"text": "Education University of California, Berkeley (2008-2013) Ph.D. in Computer Science Thesis: Surface Web Semantics for Structured Natural Language Processing Advisor: Dan Klein. Committee members: Dan Klein, Marti Hearst, Line Mikkelsen, Nelson Morgan University of California, Berkeley (2012) Master of Science (M.S.) in Computer Science Thesis: An All-Fragments Grammar for Simple and Accurate Parsing Advisor: Dan Klein Indian Institute of Technology, Kanpur (2004-2008) Bachelor of Technology (B.Tech.) in Computer Science and Engineering GPA: 3.96/4.00 (Institute and Department Rank 2) Cornell University (Summer 2007) CS490 (Independent Research and Reading) GPA: 4.00/4.00 Advisors: Lillian Lee, Claire Cardie",
"title": ""
},
{
"docid": "5f94ad6047ec9cf565b9960e89bbc913",
"text": "In this paper, we compare the geometrical performance between the rigorous sensor model (RSM) and rational function model (RFM) in the sensor modeling of FORMOSAT-2 satellite images. For the RSM, we provide a least squares collocation procedure to determine the precise orbits. As for the RFM, we analyze the model errors when a large amount of quasi-control points, which are derived from the satellite ephemeris and attitude data, are employed. The model errors with respect to the length of the image strip are also demonstrated. Experimental results show that the RFM is well behaved, indicating that its positioning errors is similar to that of the RSM. Introduction Sensor orientation modeling is a prerequisite for the georeferencing of satellite images or 3D object reconstruction from satellite stereopairs. Nowadays, most of the high-resolution satellites use linear array pushbroom scanners. Based on the pushbroom scanning geometry, a number of investigations have been reported regarding the geometric accuracy of linear array images (Westin, 1990; Chen and Lee, 1993; Li, 1998; Tao et al., 2000; Toutin, 2003; Grodecki and Dial, 2003). The geometric modeling of the sensor orientation may be divided into two categories, namely, the rigorous sensor model (RSM) and the rational function model (RFM) (Toutin, 2004). Capable of fully delineating the imaging geometry between the image space and object space, the RSM has been recognized in providing the most precise geometrical processing of satellite images. Based on the collinearity condition, an image point corresponds to a ground point using the employment of the orientation parameters, which are expressed as a function of the sampling time. Due to the dynamic sampling, the RSM contains many mathematical calculations, which can cause problems for researchers who are not familiar with the data preprocessing. Moreover, with the increasing number of Earth resource satellites, researchers need to familiarize themselves with the uniqueness and complexity of each sensor model. Therefore, a generic sensor model of the geometrical processing is needed for simplification. (Dowman and Michalis, 2003). The RFM is a generalized sensor model that is used as an alternative for the RSM. The model uses a pair of ratios of two polynomials to approximate the collinearity condition equations. The RFM has been successfully applied to several high-resolution satellite images such as Ikonos (Di et al., 2003; Grodecki and Dial, 2003; Fraser and Hanley, 2003) and QuickBird (Robertson, 2003). Due to its simple impleThe Geometrical Comparisons of RSM and RFM for FORMOSAT-2 Satellite Images Liang-Chien Chen, Tee-Ann Teo, and Chien-Liang Liu mentation and standardization (NIMA, 2000), the approach has been widely used in the remote sensing community. Launched on 20 May 2004, FORMOSAT-2 is operated by the National Space Organization of Taiwan. The satellite operates in a sun-synchronous orbit at an altitude of 891 km and with an inclination of 99.1 degrees. It has a swath width of 24 km and orbits the Earth exactly 14 times per day, which makes daily revisits possible (NSPO, 2005). Its panchromatic images have a resolution of 2 meters, while the multispectral sensor produces 8 meter resolution images covering the blue, green, red, and NIR bands. Its high performance provides an excellent data resource for the remote sensing researchers. 
The major objective of this investigation is to compare the geometrical performances between the RSM and RFM when FORMOSAT-2 images are employed. A least squares collocation-based RSM will also be proposed in the paper. In the reconstruction of the RFM, rational polynomial coefficients are generated by using the on-board ephemeris and attitude data. In addition to the comparison of the two models, the modeling error of the RFM is analyzed when long image strips are used. Rigorous Sensor Models The proposed method comprises essentially of two parts. The first involves the development of the mathematical model for time-dependent orientations. The second performs the least squares collocation to compensate the local systematic errors. Orbit Fitting There are two types of sensor models for pushbroom satellite images, i.e., orbital elements (Westin, 1990) and state vectors (Chen and Chang, 1998). The orbital elements use the Kepler elements as the orbital parameters, while the state vectors calculate the orbital parameters directly by using the position vector. Although both sensor models are robust, the state vector model provides simpler mathematical calculations. For this reason, we select the state vector approach in this investigation. Three steps are included in the orbit modeling: (a) Initialization of the orientation parameters using on-board ephemeris data; (b) Compensation of the systematic errors of the orbital parameters and attitude data via ground control points (GCPs); and (c) Modification of the orbital parameters by using the Least Squares Collocation (Mikhail and Ackermann, 1982) technique.",
"title": ""
},
{
"docid": "945b2067076bd47485b39c33fb062ec1",
"text": "Computation of floating-point transcendental functions has a relevant importance in a wide variety of scientific applications, where the area cost, error and latency are important requirements to be attended. This paper describes a flexible FPGA implementation of a parameterizable floating-point library for computing sine, cosine, arctangent and exponential functions using the CORDIC algorithm. The novelty of the proposed architecture is that by sharing the same resources the CORDIC algorithm can be used in two operation modes, allowing it to compute the sine, cosine or arctangent functions. Additionally, in case of the exponential function, the architectures change automatically between the CORDIC or a Taylor approach, which helps to improve the precision characteristics of the circuit, specifically for small input values after the argument reduction. Synthesis of the circuits and an experimental analysis of the errors have demonstrated the correctness and effectiveness of the implemented cores and allow the designer to choose, for general-purpose applications, a suitable bit-width representation and number of iterations of the CORDIC algorithm.",
"title": ""
},
{
"docid": "e3e4d19aa9a5db85f30698b7800d2502",
"text": "In this paper we examine the use of a mathematical procedure, called Principal Component Analysis, in Recommender Systems. The resulting filtering algorithm applies PCA on user ratings and demographic data, aiming to improve various aspects of the recommendation process. After a brief introduction to PCA, we provide a discussion of the proposed PCADemog algorithm, along with possible ways of combining it with different sources of filtering data. The experimental part of this work tests distinct parameterizations for PCA-Demog, identifying those with the best performance. Finally, the paper compares their results with those achieved by other filtering approaches, and draws interesting conclusions.",
"title": ""
},
{
"docid": "b4e3d2f5e4bb1238cb6f4dad5c952c4c",
"text": "Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles---under repeated exposure of one style, scenario effects obscure any explanation effects. Our results suggests there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.",
"title": ""
},
{
"docid": "a39364020ec95a3d35dfe929d4a000c0",
"text": "The Internet of Things (IoTs) refers to the inter-connection of billions of smart devices. The steadily increasing number of IoT devices with heterogeneous characteristics requires that future networks evolve to provide a new architecture to cope with the expected increase in data generation. Network function virtualization (NFV) provides the scale and flexibility necessary for IoT services by enabling the automated control, management and orchestration of network resources. In this paper, we present a novel NFV enabled IoT architecture targeted for a state-of-the art operating room environment. We use web services based on the representational state transfer (REST) web architecture as the IoT application's southbound interface and illustrate its applicability via two different scenarios.",
"title": ""
},
{
"docid": "6c5cabfa5ee5b9d67ef25658a4b737af",
"text": "Sentence compression is the task of producing a summary of a single sentence. The compressed sentence should be shorter, contain the important content from the original, and itself be grammatical. The three papers discussed here take different approaches to identifying important content, determining which sentences are grammatical, and jointly optimizing these objectives. One family of approaches we will discuss is those that are tree-based, which create a compressed sentence by making edits to the syntactic tree of the original sentence. A second type of approach is sentence-based, which generates strings directly. Orthogonal to either of these two approaches is whether sentences are treated in isolation or if the surrounding discourse affects compressions. We compare a tree-based, a sentence-based, and a discourse-based approach and conclude with ideas for future work in this area. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-10-20. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/929 Methods for Sentence Compression",
"title": ""
},
{
"docid": "684555a1b5eb0370eebee8cbe73a82ff",
"text": "This paper identifies and examines the key principles underlying building a state-of-the-art grammatical error correction system. We do this by analyzing the Illinois system that placed first among seventeen teams in the recent CoNLL-2013 shared task on grammatical error correction. The system focuses on five different types of errors common among non-native English writers. We describe four design principles that are relevant for correcting all of these errors, analyze the system along these dimensions, and show how each of these dimensions contributes to the performance.",
"title": ""
},
{
"docid": "0d2b90dad65e01289008177a4ebbbade",
"text": "A good test suite is one that detects real faults. Because the set of faults in a program is usually unknowable, this definition is not useful to practitioners who are creating test suites, nor to researchers who are creating and evaluating tools that generate test suites. In place of real faults, testing research often uses mutants, which are artificial faults -- each one a simple syntactic variation -- that are systematically seeded throughout the program under test. Mutation analysis is appealing because large numbers of mutants can be automatically-generated and used to compensate for low quantities or the absence of known real faults. Unfortunately, there is little experimental evidence to support the use of mutants as a replacement for real faults. This paper investigates whether mutants are indeed a valid substitute for real faults, i.e., whether a test suite’s ability to detect mutants is correlated with its ability to detect real faults that developers have fixed. Unlike prior studies, these investigations also explicitly consider the conflating effects of code coverage on the mutant detection rate. Our experiments used 357 real faults in 5 open-source applications that comprise a total of 321,000 lines of code. Furthermore, our experiments used both developer-written and automatically-generated test suites. The results show a statistically significant correlation between mutant detection and real fault detection, independently of code coverage. The results also give concrete suggestions on how to improve mutation analysis and reveal some inherent limitations.",
"title": ""
},
{
"docid": "d8567a34caacdb22a0aea281a1dbbccb",
"text": "Traditionally, interference protection is guaranteed through a policy of spectrum licensing, whereby wireless systems get exclusive access to spectrum. This is an effective way to prevent interference, but it leads to highly inefficient use of spectrum. Cognitive radio along with software radio, spectrum sensors, mesh networks, and other emerging technologies can facilitate new forms of spectrum sharing that greatly improve spectral efficiency and alleviate scarcity, if policies are in place that support these forms of sharing. On the other hand, new technology that is inconsistent with spectrum policy will have little impact. This paper discusses policies that can enable or facilitate use of many spectrum-sharing arrangements, where the arrangements are categorized as being based on coexistence or cooperation and as sharing among equals or primary-secondary sharing. A shared spectrum band may be managed directly by the regulator, or this responsibility may be delegated in large part to a license-holder. The type of sharing arrangement and the entity that manages it have a great impact on which technical approaches are viable and effective. The most efficient and cost-effective form of spectrum sharing will depend on the type of systems involved, where systems under current consideration are as diverse as television broadcasters, cellular carriers, public safety systems, point-to-point links, and personal and local-area networks. In addition, while cognitive radio offers policy-makers the opportunity to improve spectral efficiency, cognitive radio also provides new challenges for policy enforcement. A responsible regulator will not allow a device into the marketplace that might harm other systems. Thus, designers must seek innovative ways to assure regulators that new devices will comply with policy requirements and will not cause harmful interference.",
"title": ""
},
{
"docid": "395dcc7c09562f358c07af9c999fbdc7",
"text": "Protecting source code against reverse engineering and theft is an important problem. The goal is to carry out computations using confidential algorithms on an untrusted party while ensuring confidentiality of algorithms. This problem has been addressed for Boolean circuits known as ‘circuit privacy’. Circuits corresponding to real-world programs are impractical. Well-known obfuscation techniques are highly practicable, but provide only limited security, e.g., no piracy protection. In this work, we modify source code yielding programs with adjustable performance and security guarantees ranging from indistinguishability obfuscators to (non-secure) ordinary obfuscation. The idea is to artificially generate ‘misleading’ statements. Their results are combined with the outcome of a confidential statement using encrypted selector variables. Thus, an attacker must ‘guess’ the encrypted selector variables to disguise the confidential source code. We evaluated our method using more than ten programmers as well as pattern mining across open source code repositories to gain insights of (micro-)coding patterns that are relevant for generating misleading statements. The evaluation reveals that our approach is effective in that it successfully preserves source code confidentiality.",
"title": ""
},
{
"docid": "5cdb981566dfd741c9211902c0c59d50",
"text": "Since parental personality traits are assumed to play a role in parenting behaviors, the current study examined the relation between parental personality and parenting style among 688 Dutch parents of adolescents in the SMILE study. The study assessed Big Five personality traits and derived parenting styles (authoritative, authoritarian, indulgent, and uninvolved) from scores on the underlying dimensions of support and strict control. Regression analyses were used to determine which personality traits were associated with parenting dimensions and styles. As regards dimensions, the two aspects of personality reflecting interpersonal interactions (extraversion and agreeableness) were related to supportiveness. Emotional stability was associated with lower strict control. As regards parenting styles, extraverted, agreeable, and less emotionally stable individuals were most likely to be authoritative parents. Conscientiousness and openness did not relate to general parenting, but might be associated with more content-specific acts of parenting.",
"title": ""
},
{
"docid": "2bd3f3e72d99401cdf6f574982bc65ff",
"text": "In the future smart grid, both users and power companies can potentially benefit from the economical and environmental advantages of smart pricing methods to more effectively reflect the fluctuations of the wholesale price into the customer side. In addition, smart pricing can be used to seek social benefits and to implement social objectives. To achieve social objectives, the utility company may need to collect various information about users and their energy consumption behavior, which can be challenging. In this paper, we propose an efficient pricing method to tackle this problem. We assume that each user is equipped with an energy consumption controller (ECC) as part of its smart meter. All smart meters are connected to not only the power grid but also a communication infrastructure. This allows two-way communication among smart meters and the utility company. We analytically model each user's preferences and energy consumption patterns in form of a utility function. Based on this model, we propose a Vickrey-Clarke-Groves (VCG) mechanism which aims to maximize the social welfare, i.e., the aggregate utility functions of all users minus the total energy cost. Our design requires that each user provides some information about its energy demand. In return, the energy provider will determine each user's electricity bill payment. Finally, we verify some important properties of our proposed VCG mechanism for demand side management such as efficiency, user truthfulness, and nonnegative transfer. Simulation results confirm that the proposed pricing method can benefit both users and utility companies.",
"title": ""
},
{
"docid": "4f6979ca99ec7fb0010fd102e7796248",
"text": "Cryptographic systems are essential for computer and communication security, for instance, RSA is used in PGP Email clients and AES is employed in full disk encryption. In practice, the cryptographic keys are loaded and stored in RAM as plain-text, and therefore vulnerable to physical memory attacks (e.g., cold-boot attacks). To tackle this problem, we propose Copker, which implements asymmetric cryptosystems entirely within the CPU, without storing plain-text private keys in the RAM. In its active mode, Copker stores kilobytes of sensitive data, including the private key and the intermediate states, only in onchip CPU caches (and registers). Decryption/signing operations are performed without storing sensitive information in system memory. In the suspend mode, Copker stores symmetrically encrypted private keys in memory, while employs existing solutions to keep the key-encryption key securely in CPU registers. Hence, Copker releases the system resources in the suspend mode. In this paper, we implement Copker with the most common asymmetric cryptosystem, RSA, with the support of multiple private keys. We show that Copker provides decryption/signing services that are secure against physical memory attacks. Meanwhile, with intensive experiments, we demonstrate that our implementation of Copker is secure and requires reasonable overhead. Keywords—Cache-as-RAM; cold-boot attack; key management; asymmetric cryptography implementation.",
"title": ""
},
{
"docid": "5565f51ad8e1aaee43f44917befad58a",
"text": "We explore the application of deep residual learning and dilated convolutions to the keyword spotting task, using the recently-released Google Speech Commands Dataset as our benchmark. Our best residual network (ResNet) implementation significantly outperforms Google's previous convolutional neural networks in terms of accuracy. By varying model depth and width, we can achieve compact models that also outperform previous small-footprint variants. To our knowledge, we are the first to examine these approaches for keyword spotting, and our results establish an open-source state-of-the-art reference to support the development of future speech-based interfaces.",
"title": ""
},
{
"docid": "4daec6170f18cc8896411e808e53355f",
"text": "The goal of this note is to point out that any distributed representation can be turned into a classifier through inversion via Bayes rule. The approach is simple and modular, in that it will work with any language representation whose training can be formulated as optimizing a probability model. In our application to 2 million sentences from Yelp reviews, we also find that it performs as well as or better than complex purpose-built algorithms.",
"title": ""
},
{
"docid": "a53f26ef068d11ea21b9ba8609db6ddf",
"text": "This paper presents a novel approach based on enhanced local directional patterns (ELDP) to face recognition, which adopts local edge gradient information to represent face images. Specially, each pixel of every facial image sub-block gains eight edge response values by convolving the local 3 3 neighborhood with eight Kirsch masks, respectively. ELDP just utilizes the directions of the most encoded into a double-digit octal number to produce the ELDP codes. The ELDP dominant patterns (ELDP) are generated by statistical analysis according to the occurrence rates of the ELDP codes in a mass of facial images. Finally, the face descriptor is represented by using the global concatenated histogram based on ELDP or ELDP extracted from the face image which is divided into several sub-regions. The performances of several single face descriptors not integrated schemes are evaluated in face recognition under different challenges via several experiments. The experimental results demonstrate that the proposed method is more robust to non-monotonic illumination changes and slight noise without any filter. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "77754266da79a87b99e51b0088888550",
"text": "The paper proposed a novel automatic target recognition (ATR) system for classification of three types of ground vehicles in the moving and stationary target acquisition and recognition (MSTAR) public release database. First MSTAR image chips are represented as fine and raw feature vectors, where raw features compensate for the target pose estimation error that corrupts fine image features. Then, the chips are classified by using the adaptive boosting (AdaBoost) algorithm with the radial basis function (RBF) network as the base learner. Since the RBF network is a binary classifier, the multiclass problem was decomposed into a set of binary ones through the error-correcting output codes (ECOC) method, specifying a dictionary of code words for the set of three possible classes. AdaBoost combines the classification results of the RBF network for each binary problem into a code word, which is then \"decoded\" as one of the code words (i.e., ground-vehicle classes) in the specified dictionary. Along with classification, within the AdaBoost framework, we also conduct efficient fusion of the fine and raw image-feature vectors. The results of large-scale experiments demonstrate that our ATR scheme outperforms the state-of-the-art systems reported in the literature",
"title": ""
},
{
"docid": "ba16a6634b415dd2c478c83e1f65cb3c",
"text": "Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is notoriously challenging but is fundamental to natural language understanding and many applications. With the availability of large annotated data, neural network models have recently advanced the field significantly. In this paper, we present a new state-of-the-art result, achieving the accuracy of 88.3% on the standard benchmark, the Stanford Natural Language Inference dataset. This result is achieved first through our enhanced sequential encoding model, which outperforms the previous best model that employs more complicated network architectures, suggesting that the potential of sequential LSTM-based models have not been fully explored yet in previous work. We further show that by explicitly considering recursive architectures, we achieve additional improvement. Particularly, incorporating syntactic parse information contributes to our best result; it improves the performance even when the parse information is added to an already very strong system.",
"title": ""
}
] | scidocsrr |
2e33c79d5d3f826ab8c431bf0eba277c | Selfishness, Altruism and Message Spreading in Mobile Social Networks | [
{
"docid": "2af231da02dbfb4db5c44c386870142c",
"text": "Mobile ad hoc routing protocols allow nodes with wireless adaptors to communicate with one another without any pre-existing network infrastructure. Existing ad hoc routing protocols, while robust to rapidly changing network topology, assume the presence of a connected path from source to destination. Given power limitations, the advent of short-range wireless networks, and the wide physical conditions over which ad hoc networks must be deployed, in some scenarios it is likely that this assumption is invalid. In this work, we develop techniques to deliver messages in the case where there is never a connected path from source to destination or when a network partition exists at the time a message is originated. To this end, we introduce Epidemic Routing, where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery. The goals of Epidemic Routing are to: i) maximize message delivery rate, ii) minimize message latency, and iii) minimize the total resources consumed in message delivery. Through an implementation in the Monarch simulator, we show that Epidemic Routing achieves eventual delivery of 100% of messages with reasonable aggregate resource consumption in a number of interesting scenarios.",
"title": ""
}
] | [
{
"docid": "f8275a80021312a58c9cd52bbcd4c431",
"text": "Mobile online social networks (OSNs) are emerging as the popular mainstream platform for information and content sharing among people. In order to provide Quality of Experience (QoE) support for mobile OSN services, in this paper we propose a socially-driven learning-based framework, namely Spice, for media content prefetching to reduce the access delay and enhance mobile user's satisfaction. Through a large-scale data-driven analysis over real-life mobile Twitter traces from over 17,000 users during a period of five months, we reveal that the social friendship has a great impact on user's media content click behavior. To capture this effect, we conduct social friendship clustering over the set of user's friends, and then develop a cluster-based Latent Bias Model for socially-driven learning-based prefetching prediction. We then propose a usage-adaptive prefetching scheduling scheme by taking into account that different users may possess heterogeneous patterns in the mobile OSN app usage. We comprehensively evaluate the performance of Spice framework using trace-driven emulations on smartphones. Evaluation results corroborate that the Spice can achieve superior performance, with an average 67.2% access delay reduction at the low cost of cellular data and energy consumption. Furthermore, by enabling users to offload their machine learning procedures to a cloud server, our design can achieve speed-up of a factor of 1000 over the local data training execution on smartphones.",
"title": ""
},
{
"docid": "a29ee41e8f46d1feebeb67886b657f70",
"text": "Feeling emotion is a critical characteristic to distinguish people from machines. Among all the multi-modal resources for emotion detection, textual datasets are those containing the least additional information in addition to semantics, and hence are adopted widely for testing the developed systems. However, most of the textual emotional datasets consist of emotion labels of only individual words, sentences or documents, which makes it challenging to discuss the contextual flow of emotions. In this paper, we introduce EmotionLines, the first dataset with emotions labeling on all utterances in each dialogue only based on their textual content. Dialogues in EmotionLines are collected from Friends TV scripts and private Facebook messenger dialogues. Then one of seven emotions, six Ekman’s basic emotions plus the neutral emotion, is labeled on each utterance by 5 Amazon MTurkers. A total of 29,245 utterances from 2,000 dialogues are labeled in EmotionLines. We also provide several strong baselines for emotion detection models on EmotionLines in this paper.",
"title": ""
},
{
"docid": "a3fafe73615c434375cd3f35323c939e",
"text": "In this paper, Magnetic Resonance Images,T2 weighte d modality , have been pre-processed by bilateral filter to reduce th e noise and maintaining edges among the different tissues. Four different t echniques with morphological operations have been applied to extra c the tumor region. These were: Gray level stretching and Sobel edge de tection, K-Means Clustering technique based on location and intensit y, Fuzzy C-Means Clustering, and An Adapted K-Means clustering techn ique and Fuzzy CMeans technique. The area of the extracted tumor re gions has been calculated. The present work showed that the four i mplemented techniques can successfully detect and extract the brain tumor and thereby help doctors in identifying tumor's size and region.",
"title": ""
},
{
"docid": "3205184f918eab105ee17bfb12277696",
"text": "The Trilobita were characterized by a cephalic region in which the biomineralized exoskeleton showed relatively high morphological differentiation among a taxonomically stable set of well defined segments, and an ontogenetically and taxonomically dynamic trunk region in which both exoskeletal segments and ventral appendages were similar in overall form. Ventral appendages were homonomous biramous limbs throughout both the cephalon and trunk, except for the most anterior appendage pair that was antenniform, preoral, and uniramous, and a posteriormost pair of antenniform cerci, known only in one species. In some clades trunk exoskeletal segments were divided into two batches. In some, but not all, of these clades the boundary between batches coincided with the boundary between the thorax and the adult pygidium. The repeated differentiation of the trunk into two batches of segments from the homonomous trunk condition indicates an evolutionary trend in aspects of body patterning regulation that was achieved independently in several trilobite clades. The phylogenetic placement of trilobites and congruence of broad patterns of tagmosis with those seen among extant arthropods suggest that the expression domains of trilobite cephalic Hox genes may have overlapped in a manner similar to that seen among extant arachnates. This, coupled with the fact that trilobites likely possessed ten Hox genes, presents one alternative to a recent model in which Hox gene distribution in trilobites was equated to eight putative divisions of the trilobite body plan.",
"title": ""
},
{
"docid": "3ebe9aecd4c84e9b9ed0837bd294b4ed",
"text": "A bond graph model of a hybrid electric vehicle (HEV) powertrain test cell is proposed. The test cell consists of a motor/generator coupled to a HEV powertrain and powered by a bidirectional power converter. Programmable loading conditions, including positive and negative resistive and inertial loads of any magnitude are modeled, avoiding the use of mechanical inertial loads involved in conventional test cells. The dynamics and control equations of the test cell are derived directly from the bond graph models. The modeling and simulation results of the dynamics of the test cell are validated through experiments carried out on a scaled-down system.",
"title": ""
},
{
"docid": "6620fdf695d7e89a703fc17e007de7e2",
"text": "This paper presents the machine translation system known as TransLI (Translation of Legal Information) developed by the authors for automatic translation of Canadian Court judgments from English to French and from French to English. Normally, a certified translation of a legal judgment takes several months to complete. The authors attempted to shorten this time significantly using a unique statistical machine translation system which has attracted the attention of the federal courts in Canada for its accuracy and speed. This paper also describes the results of a human evaluation of the output of the system in the context of a pilot project in collaboration with the federal courts of Canada. 1. Context of the work NLP Technologies is an enterprise devoted to the use of advanced information technologies in the judicial domain. Its main focus is DecisionExpressTM a service utilizing automatic summarization technology with respect to legal information. DecisionExpress is a weekly bulletin of recent decisions of Canadian federal courts and tribunals. It is an tool that processes judicial decisions automatically and makes the daily information used by jurists more accessible by presenting the legal record of the proceedings of federal courts in Canada as a table-style summary (Farzindar et al., 2004, Chieze et al. 2008). NLP Technologies in collaboration with researchers from the RALI at Université de Montréal have developed TransLI to translate automatically the judgments from the Canadian Federal Courts. As it happens, for the new weekly published judgments, 75% of decisions are originally written in English 1 http://www.nlptechnologies.ca 2 http://rali.iro.umontreal.ca Machine Translation of Legal Information and Its Evaluation 2 and 25% in French. By law, the Federal Courts have to provide a translation in the other official language of Canada. The legal domain has continuous publishing and translation cycles, large volumes of digital content and growing demand to distribute more multilingual information. It is necessary to handle a high volume of translations quickly. Currently, a certified translation of a legal judgment takes several months to complete. Afterwards, there is a significant delay between the publication of a judgment in the original language and the availability of its human translation into the other official language. Initially, the goal of this work was to allow the court, during the few months when the official translation is pending, to publish automatically translated judgments and summaries with the appropriate caveat. Once the official translation would become available, the Court would replace the machine translations by the official ones. However, the high quality of the machine translation system obtained, developed and trained specifically on the Federal Courts corpora, opens further opportunities which are currently being investigated: machine translations could be considered as first drafts for official translations that would only need to be revised before their publication. This procedure would thus reduce the delay between the publication of the decision in the original language and its official translation. It would also provide opportunities for saving on the cost of translation. We evaluated the French and English output and performed a more detailed analysis of the modifications made to the translations by the evaluators in the context of a pilot study to be conducted in cooperation with the Federal Courts. 
This paper describes our statistical machine translation system, whose performance has been assessed with the usual automatic evaluation metrics. We also present the results of a manual evaluation of the translations and the result of a completed translation pilot project in a real context of publication of the federal courts of Canada. To our knowledge, this is the first attempt to build a large-scale translation system of complete judgments for eventual publication.",
"title": ""
},
{
"docid": "a727d23d78f794ce437351c5f603195f",
"text": "We initiate the study of secure multi-party computation (MPC) in a server-aided setting, where the parties have access to a single server that (1) does not have any input to the computation; (2) does not receive any output from the computation; but (3) has a vast (but bounded) amount of computational resources. In this setting, we are concerned with designing protocols that minimize the computation of the parties at the expense of the server. We develop new definitions of security for this server-aided setting that generalize the standard simulation-based definitions for MPC and allow us to formally capture the existence of dishonest but non-colluding participants. This requires us to introduce a formal characterization of non-colluding adversaries that may be of independent interest. We then design general and special-purpose server-aided MPC protocols that are more efficient (in terms of computation and communication) for the parties than the alternative of running a standard MPC protocol (i.e., without the server). Our main general-purpose protocol provides security when there is at least one honest party with input. We also construct a new and efficient server-aided protocol for private set intersection and give a general transformation from any secure delegated computation scheme to a server-aided two-party protocol. ∗Microsoft Research. [email protected]. †University of Calgary. [email protected]. Work done while visiting Microsoft Research. ‡Columbia University. [email protected]. Work done as an intern at Microsoft Research.",
"title": ""
},
{
"docid": "8fcf31f2de602cf10f769c41acccc221",
"text": "This book contains materials that come out of the Artificial General Intelligence Research Institute (AGIRI) Workshop, held in May 20-21, 2006 at Washington DC. The theme of the workshop is “Transitioning from Narrow AI to Artificial General Intelligence.” In this introductory chapter, we will clarify the notion of “Artificial General Intelligence”, briefly survey the past and present situation of the field, analyze and refute some common objections and doubts regarding this area of research, and discuss what we believe needs to be addressed by the field as a whole in the near future. Finally, we will briefly summarize the contents of the other chapters in this collection.",
"title": ""
},
{
"docid": "8b947250873921478dd7798c47314979",
"text": "In this letter, an ultra-wideband (UWB) bandpass filter (BPF) using stepped-impedance stub-loaded resonator (SISLR) is presented. Characterized by theoretical analysis, the proposed SISLR is found to have the advantage of providing more degrees of freedom to adjust the resonant frequencies. Besides, two transmission zeros can be created at both lower and upper sides of the passband. Benefiting from these features, a UWB BPF is then investigated by incorporating this SISLR and two aperture-backed interdigital coupled-lines. Finally, this filter is built and tested. The simulated and measured results are in good agreement with each other, showing good wideband filtering performance with sharp rejection skirts outside the passband.",
"title": ""
},
{
"docid": "b318cfcbe82314cc7fa898f0816dbab8",
"text": "Flow experience is often considered as an important standard of ideal user experience (UX). Till now, flow is mainly measured via self-report questionnaires, which cannot evaluate flow immediately and objectively. In this paper, we constructed a physiological evaluation model to evaluate flow in virtual reality (VR) game. The evaluation model consists of five first-level indicators and their respective second-level indicators. Then, we conducted an empirical experiment to test the effectiveness of partial indicators to predict flow experience. Most results supported the model and revealed that heart rate, interbeat interval, heart rate variability (HRV), low-frequency HRV (LF-HRV), high-frequency HRV (HF-HRV), and respiratory rate are all effective indicators in predicting flow experience. Further research should be conducted to improve the evaluation model and conclude practical implications in UX and VR game design.",
"title": ""
},
{
"docid": "48a3c9d1f41f9b7ed28f8ef46b5c4533",
"text": "We introduce two new methods of deriving the classical PCA in the framework of minimizing the mean square error upon performing a lower-dimensional approximation of the data. These methods are based on two forms of the mean square error function. One of the novelties of the presented methods is that the commonly employed process of subtraction of the mean of the data becomes part of the solution of the optimization problem and not a pre-analysis heuristic. We also derive the optimal basis and the minimum error of approximation in this framework and demonstrate the elegance of our solution in comparison with a recent solution in the framework.",
"title": ""
},
{
"docid": "2f0e767a5d4524ed2fed6b43d4b22a70",
"text": "The cerebellum is involved in learning and memory of sensory motor skills. However, the way this process takes place in local microcircuits is still unclear. The initial proposal, casted into the Motor Learning Theory, suggested that learning had to occur at the parallel fiber–Purkinje cell synapse under supervision of climbing fibers. However, the uniqueness of this mechanism has been questioned, and multiple forms of long-term plasticity have been revealed at various locations in the cerebellar circuit, including synapses and neurons in the granular layer, molecular layer and deep-cerebellar nuclei. At present, more than 15 forms of plasticity have been reported. There has been a long debate on which plasticity is more relevant to specific aspects of learning, but this question turned out to be hard to answer using physiological analysis alone. Recent experiments and models making use of closed-loop robotic simulations are revealing a radically new view: one single form of plasticity is insufficient, while altogether, the different forms of plasticity can explain the multiplicity of properties characterizing cerebellar learning. These include multi-rate acquisition and extinction, reversibility, self-scalability, and generalization. Moreover, when the circuit embeds multiple forms of plasticity, it can easily cope with multiple behaviors endowing therefore the cerebellum with the properties needed to operate as an effective generalized forward controller.",
"title": ""
},
{
"docid": "7120d5acf58f8ec623d65b4f41bef97d",
"text": "BACKGROUND\nThis study analyzes the problems and consequences associated with prolonged use of laparoscopic instruments (dissector and needle holder) and equipments.\n\n\nMETHODS\nA total of 390 questionnaires were sent to the laparoscopic surgeons of the Spanish Health System. Questions were structured on the basis of 4 categories: demographics, assessment of laparoscopic dissector, assessment of needle holder, and other informations.\n\n\nRESULTS\nA response rate of 30.26% was obtained. Among them, handle shape of laparoscopic instruments was identified as the main element that needed to be improved. Furthermore, the type of instrument, electrocautery pedals and height of the operating table were identified as major causes of forced positions during the use of both surgical instruments.\n\n\nCONCLUSIONS\nAs far as we know, this is the largest Spanish survey conducted on this topic. From this survey, some ergonomic drawbacks have been identified in: (a) the instruments' design, (b) the operating tables, and (c) the posture of the surgeons.",
"title": ""
},
{
"docid": "88e72e039de541b00722901a8eff7d19",
"text": "When building agents and synthetic characters, and in order to achieve believability, we must consider the emotional relations established between users and characters, that is, we must consider the issue of \"empathy\". Defined in broad terms as \"An observer reacting emotionally because he perceives that another is experiencing or about to experience an emotion\", empathy is an important element to consider in the creation of relations between humans and agents. In this paper we will focus on the role of empathy in the construction of synthetic characters, providing some requirements for such construction and illustrating the presented concepts with a specific system called FearNot!. FearNot! was developed to address the difficult and often devastating problem of bullying in schools. By using role playing and empathic synthetic characters in a 3D environment, FearNot! allows children from 8 to 12 to experience a virtual scenario where they can witness (in a third-person perspective) bullying situations. To build empathy into FearNot! we have considered the following components: agentýs architecture; the charactersý embodiment and emotional expression; proximity with the user and emotionally charged situations.We will describe how these were implemented in FearNot! and report on the preliminary results we have with it.",
"title": ""
},
{
"docid": "b7bf7d430e4132a4d320df3a155ee74c",
"text": "We present Wave menus, a variant of multi-stroke marking menus designed for improving the novice mode of marking while preserving their efficiency in the expert mode of marking. Focusing on the novice mode, a criteria-based analysis of existing marking menus motivates the design of Wave menus. Moreover a user experiment is presented that compares four hierarchical marking menus in novice mode. Results show that Wave and compound-stroke menus are significantly faster and more accurate than multi-stroke menus in novice mode, while it has been shown that in expert mode the multi-stroke menus and therefore the Wave menus outperform the compound-stroke menus. Wave menus also require significantly less screen space than compound-stroke menus. As a conclusion, Wave menus offer the best performance for both novice and expert modes in comparison with existing multi-level marking menus, while requiring less screen space than compound-stroke menus.",
"title": ""
},
{
"docid": "732aa9623301d4d3cc6fc9d15c6836fe",
"text": "Growing network traffic brings huge pressure to the server cluster. Using load balancing technology in server cluster becomes the choice of most enterprises. Because of many limitations, the development of the traditional load balancing technology has encountered bottlenecks. This has forced companies to find new load balancing method. Software Defined Network (SDN) provides a good method to solve the load balancing problem. In this paper, we implemented two load balancing algorithm that based on the latest SDN network architecture. The first one is a static scheduling algorithm and the second is a dynamic scheduling algorithm. Our experiments show that the performance of the dynamic algorithm is better than the static algorithm.",
"title": ""
},
{
"docid": "f5c60102070450489f7301d089d6fbd4",
"text": "This study presents a new approach to solve the well-known power system Economic Load Dispatch problem (ED) using a hybrid algorithm consisting of Genetic Algorithm (GA), Pattern Search (PS) and Sequential Quadratic Programming (SQP). GA is the main optimizer of this algorithm, whereas PS and SQP are used to fine-tune the results obtained from the GA, thereby increasing solution confidence. To test the effectiveness of this approach it was applied to various test systems. Furthermore, the convergence characteristics and robustness of the proposed method have been explored through comparisons with results reported in literature. The outcome is very encouraging and suggests that the hybrid GA-PS-SQP algorithm is very effective in solving the power system economic load dispatch problem.",
"title": ""
},
{
"docid": "a1306f761e45fdd56ae91d1b48909d74",
"text": "We propose a graphical model for representing networks of stochastic processes, the minimal generative model graph. It is based on reduced factorizations of the joint distribution over time. We show that under appropriate conditions, it is unique and consistent with another type of graphical model, the directed information graph, which is based on a generalization of Granger causality. We demonstrate how directed information quantifies Granger causality in a particular sequential prediction setting. We also develop efficient methods to estimate the topological structure from data that obviate estimating the joint statistics. One algorithm assumes upper bounds on the degrees and uses the minimal dimension statistics necessary. In the event that the upper bounds are not valid, the resulting graph is nonetheless an optimal approximation in terms of Kullback-Leibler (KL) divergence. Another algorithm uses near-minimal dimension statistics when no bounds are known, but the distribution satisfies a certain criterion. Analogous to how structure learning algorithms for undirected graphical models use mutual information estimates, these algorithms use directed information estimates. We characterize the sample-complexity of two plug-in directed information estimators and obtain confidence intervals. For the setting when point estimates are unreliable, we propose an algorithm that uses confidence intervals to identify the best approximation that is robust to estimation error. Last, we demonstrate the effectiveness of the proposed algorithms through the analysis of both synthetic data and real data from the Twitter network. In the latter case, we identify which news sources influence users in the network by merely analyzing tweet times.",
"title": ""
},
{
"docid": "c80dbfc2e1f676a7ffe4a6a4f7460d36",
"text": "Coarse-grained semantic categories such as supersenses have proven useful for a range of downstream tasks such as question answering or machine translation. To date, no effort has been put into integrating the supersenses into distributional word representations. We present a novel joint embedding model of words and supersenses, providing insights into the relationship between words and supersenses in the same vector space. Using these embeddings in a deep neural network model, we demonstrate that the supersense enrichment leads to a significant improvement in a range of downstream classification tasks.",
"title": ""
},
{
"docid": "7363b433f17e1f3dfecc805b58a8706b",
"text": "Mobile Edge Computing (MEC) consists of deploying computing resources (CPU, storage) at the edge of mobile networks; typically near or with eNodeBs. Besides easing the deployment of applications and services requiring low access to the remote server, such as Virtual Reality and Vehicular IoT, MEC will enable the development of context-aware and context-optimized applications, thanks to the Radio API (e.g. information on user channel quality) exposed by eNodeBs. Although ETSI is defining the architecture specifications, solutions to integrate MEC to the current 3GPP architecture are still open. In this paper, we fill this gap by proposing and implementing a Software Defined Networking (SDN)-based MEC framework, compliant with both ETSI and 3GPP architectures. It provides the required data-plane flexibility and programmability, which can on-the-fly improve the latency as a function of the network deployment and conditions. To illustrate the benefit of using SDN concept for the MEC framework, we present the details of software architecture as well as performance evaluations.",
"title": ""
}
] | scidocsrr |
dd6d0043d3f20121ef26a4bb2f3e7e56 | A parallel spatial data analysis infrastructure for the cloud | [
{
"docid": "ee3815cd041ff70bcefd7b3c7accbfa0",
"text": "Prior research shows that database system performance is dominated by off-chip data stalls, resulting in a concerted effort to bring data into on-chip caches. At the same time, high levels of integration have enabled the advent of chip multiprocessors and increasingly large (and slow) on-chip caches. These two trends pose the imminent technical and research challenge of adapting high-performance data management software to a shifting hardware landscape. In this paper we characterize the performance of a commercial database server running on emerging chip multiprocessor technologies. We find that the major bottleneck of current software is data cache stalls, with L2 hit stalls rising from oblivion to become the dominant execution time component in some cases. We analyze the source of this shift and derive a list of features for future database designs to attain maximum",
"title": ""
}
] | [
{
"docid": "a4738508bec1fe5975ce92c2239d30d0",
"text": "The transpalatal arch might be one of the most common intraoral auxiliary fixed appliances used in orthodontics in order to provide dental anchorage. The aim of the present case report is to describe a case in which an adult patient with a tendency to class III, palatal compression, and bilateral posterior crossbite was treated with double transpalatal bars in order to control the torque of both the first and the second molars. Double transpalatal arches on both first and second maxillary molars are a successful appliance in order to control the posterior sectors and improve the torsion of the molars. They allow the professional to gain overbite instead of losing it as may happen with other techniques and avoid enlarging of Wilson curve, obtaining a more stable occlusion without the need for extra help from bone anchorage.",
"title": ""
},
{
"docid": "e9b2f987c4744e509b27cbc2ab1487be",
"text": "Analogy and similarity are often assumed to be distinct psychological processes. In contrast to this position, the authors suggest that both similarity and analogy involve a process of structural alignment and mapping, that is, that similarity is like analogy. In this article, the authors first describe the structure-mapping process as it has been worked out for analogy. Then, this view is extended to similarity, where it is used to generate new predictions. Finally, the authors explore broader implications of structural alignment for psychological processing.",
"title": ""
},
{
"docid": "d09e4f8c58f9ff0760addfe1e313d5f6",
"text": "Currently, color image encryption is important to ensure its confidentiality during its transmission on insecure networks or its storage. The fact that chaotic properties are related with cryptography properties in confusion, diffusion, pseudorandom, etc., researchers around the world have presented several image (gray and color) encryption algorithms based on chaos, but almost all them with serious security problems have been broken with the powerful chosen/known plain image attack. In this work, we present a color image encryption algorithm based on total plain image characteristics (to resist a chosen/known plain image attack), and 1D logistic map with optimized distribution (for fast encryption process) based on Murillo-Escobar's algorithm (Murillo-Escobar et al. (2014) [38]). The security analysis confirms that the RGB image encryption is fast and secure against several known attacks; therefore, it can be implemented in real-time applications where a high security is required. & 2014 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "b50c010e8606de8efb7a9e861ca31059",
"text": "A Software Defined Network (SDN) is a new network architecture that provides central control over the network. Although central control is the major advantage of SDN, it is also a single point of failure if it is made unreachable by a Distributed Denial of Service (DDoS) Attack. To mitigate this threat, this paper proposes to use the central control of SDN for attack detection and introduces a solution that is effective and lightweight in terms of the resources that it uses. More precisely, this paper shows how DDoS attacks can exhaust controller resources and provides a solution to detect such attacks based on the entropy variation of the destination IP address. This method is able to detect DDoS within the first five hundred packets of the attack traffic.",
"title": ""
},
{
"docid": "84b2dbea13df9e6ee70570a05f82049f",
"text": "The main aim of this position paper is to identify and briefly discuss design-related issues commonly encountered with the implementation of both behaviour change techniques and persuasive design principles in physical activity smartphone applications. These overlapping issues highlight a disconnect in the perspectives held between health scientists' focus on the application of behaviour change theories and components of interventions, and the information systems designers' focus on the application of persuasive design principles as software design features intended to motivate, facilitate and support individuals through the behaviour change process. A review of the current status and some examples of these different perspectives is presented, leading to the identification of the main issues associated with this disconnection. The main behaviour change technique issues identified are concerned with: the fragmented integration of techniques, hindrances in successful use, diversity of user needs and preferences, and the informational flow and presentation. The main persuasive design issues identified are associated with: the fragmented application of persuasive design principles, hindrances in successful usage, diversity of user needs and preferences, informational flow and presentation, the lack of pragmatic guidance for application designers, and the maintenance of immersive user interactions and engagements. Given the common overlap across four of the identified issues, it is concluded that a methodological approach for integrating these two perspectives, and their associated issues, into a consolidated framework is necessary to address the apparent disconnect between these two independently-established, yet complementary fields.",
"title": ""
},
{
"docid": "e5691e6bb32f06a34fab7b692539d933",
"text": "Öz Supplier evaluation and selection includes both qualitative and quantitative criteria and it is considered as a complex Multi Criteria Decision Making (MCDM) problem. Uncertainty and impreciseness of data is an integral part of decision making process for a real life application. The fuzzy set theory allows making decisions under uncertain environment. In this paper, a trapezoidal type 2 fuzzy multicriteria decision making methods based on TOPSIS is proposed to select convenient supplier under vague information. The proposed method is applied to the supplier selection process of a textile firm in Turkey. In addition, the same problem is solved with type 1 fuzzy TOPSIS to confirm the findings of type 2 fuzzy TOPSIS. A sensitivity analysis is conducted to observe how the decision changes under different scenarios. Results show that the presented type 2 fuzzy TOPSIS method is more appropriate and effective to handle the supplier selection in uncertain environment. Tedarikçi değerlendirme ve seçimi, nitel ve nicel çok sayıda faktörün değerlendirilmesini gerektiren karmaşık birçok kriterli karar verme problemi olarak görülmektedir. Gerçek hayatta, belirsizlikler ve muğlaklık bir karar verme sürecinin ayrılmaz bir parçası olarak karşımıza çıkmaktadır. Bulanık küme teorisi, belirsizlik durumunda karar vermemize imkân sağlayan metotlardan bir tanesidir. Bu çalışmada, ikizkenar yamuk tip 2 bulanık TOPSIS yöntemi kısaca tanıtılmıştır. Tanıtılan yöntem, Türkiye’de bir tekstil firmasının tedarikçi seçimi problemine uygulanmıştır. Ayrıca, tip 2 bulanık TOPSIS yönteminin sonuçlarını desteklemek için aynı problem tip 1 bulanık TOPSIS ile de çözülmüştür. Duyarlılık analizi yapılarak önerilen çözümler farklı senaryolar altında incelenmiştir. Duyarlılık analizi sonuçlarına göre tip 2 bulanık TOPSIS daha efektif ve uygun çözümler üretmektedir.",
"title": ""
},
{
"docid": "2fe0639b8a1fc6c64bb8e177576ec06e",
"text": "A new approach for ranking fuzzy numbers based on a distance measure is introduced. A new class of distance measures for interval numbers that takes into account all the points in both intervals is developed -rst, and then it is used to formulate the distance measure for fuzzy numbers. The approach is illustrated by numerical examples, showing that it overcomes several shortcomings such as the indiscriminative and counterintuitive behavior of several existing fuzzy ranking approaches. c © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "7bedcb8eb5f458ba238c82249c80657d",
"text": "The spread of antibiotic-resistant bacteria is a growing problem and a public health issue. In recent decades, various genetic mechanisms involved in the spread of resistance genes among bacteria have been identified. Integrons - genetic elements that acquire, exchange, and express genes embedded within gene cassettes (GC) - are one of these mechanisms. Integrons are widely distributed, especially in Gram-negative bacteria; they are carried by mobile genetic elements, plasmids, and transposons, which promote their spread within bacterial communities. Initially studied mainly in the clinical setting for their involvement in antibiotic resistance, their role in the environment is now an increasing focus of attention. The aim of this review is to provide an in-depth analysis of recent studies of antibiotic-resistance integrons in the environment, highlighting their potential involvement in antibiotic-resistance outside the clinical context. We will focus particularly on the impact of human activities (agriculture, industries, wastewater treatment, etc.).",
"title": ""
},
{
"docid": "309f7b25ebf83f27a7f9c120e6e8bd27",
"text": "Human-robotinteractionis becominganincreasinglyimportant researcharea. In this paper , we presentour work on designinga human-robotsystemwith adjustableautonomy anddescribenotonly theprototypeinterfacebut alsothecorrespondingrobot behaviors. In our approach,we grant the humanmeta-level control over the level of robot autonomy, but we allow the robot a varying amountof self-direction with eachlevel. Within this framework of adjustableautonomy, we explore appropriateinterfaceconceptsfor controlling multiple robotsfrom multiple platforms.",
"title": ""
},
{
"docid": "440858614aba25dfa9039b20a1caefc4",
"text": "A natural image usually conveys rich semantic content and can be viewed from different angles. Existing image description methods are largely restricted by small sets of biased visual paragraph annotations, and fail to cover rich underlying semantics. In this paper, we investigate a semi-supervised paragraph generative framework that is able to synthesize diverse and semantically coherent paragraph descriptions by reasoning over local semantic regions and exploiting linguistic knowledge. The proposed Recurrent Topic-Transition Generative Adversarial Network (RTT-GAN) builds an adversarial framework between a structured paragraph generator and multi-level paragraph discriminators. The paragraph generator generates sentences recurrently by incorporating region-based visual and language attention mechanisms at each step. The quality of generated paragraph sentences is assessed by multi-level adversarial discriminators from two aspects, namely, plausibility at sentence level and topic-transition coherence at paragraph level. The joint adversarial training of RTT-GAN drives the model to generate realistic paragraphs with smooth logical transition between sentence topics. Extensive quantitative experiments on image and video paragraph datasets demonstrate the effectiveness of our RTT-GAN in both supervised and semi-supervised settings. Qualitative results on telling diverse stories for an image verify the interpretability of RTT-GAN.",
"title": ""
},
{
"docid": "dc5b6cd087a99d7dd123a69b1991eb3e",
"text": "Current top-N recommendation methods compute the recommendations by taking into account only relations between pairs of items, thus leading to potential unused information when higher-order relations between the items exist. Past attempts to incorporate the higherorder information were done in the context of neighborhood-based methods. However, in many datasets, they did not lead to significant improvements in the recommendation quality. We developed a top-N recommendation method that revisits the issue of higher-order relations, in the context of the model-based Sparse LInear Method (SLIM). The approach followed (Higher-Order Sparse LInear Method, or HOSLIM) learns two sparse aggregation coefficient matrices S and S′ that capture the item-item and itemset-item similarities, respectively. Matrix S′ allows HOSLIM to capture higher-order relations, whose complexity is determined by the length of the itemset. Following the spirit of SLIM, matrices S and S′ are estimated using an elastic net formulation, which promotes model sparsity. We conducted extensive experiments which show that higher-order interactions exist in real datasets and when incorporated in the HOSLIM framework, the recommendations made are improved. The experimental results show that the greater the presence of higher-order relations, the more substantial the improvement in recommendation quality is, over the best existing methods. In addition, our experiments show that the performance of HOSLIM remains good when we select S′ such that its number of nonzeros is comparable to S, which reduces the time required to compute the recommendations.",
"title": ""
},
{
"docid": "0046aca3e98d75f9d3c414a6de42e017",
"text": "Fast Downward is a classical planning system based on heuris tic search. It can deal with general deterministic planning problems encoded in the propos itional fragment of PDDL2.2, including advanced features like ADL conditions and effects and deriv ed predicates (axioms). Like other well-known planners such as HSP and FF, Fast Downward is a pro gression planner, searching the space of world states of a planning task in the forward direct ion. However, unlike other PDDL planning systems, Fast Downward does not use the propositional P DDL representation of a planning task directly. Instead, the input is first translated into an alternative representation called multivalued planning tasks , which makes many of the implicit constraints of a propositi nal planning task explicit. Exploiting this alternative representatio n, Fast Downward uses hierarchical decompositions of planning tasks for computing its heuristic fun ction, called thecausal graph heuristic , which is very different from traditional HSP-like heuristi cs based on ignoring negative interactions of operators. In this article, we give a full account of Fast Downward’s app roach to solving multi-valued planning tasks. We extend our earlier discussion of the caus al graph heuristic to tasks involving axioms and conditional effects and present some novel techn iques for search control that are used within Fast Downward’s best-first search algorithm: preferred operatorstransfer the idea of helpful actions from local search to global best-first search, deferred evaluationof heuristic functions mitigates the negative effect of large branching factors on earch performance, and multi-heuristic best-first searchcombines several heuristic evaluation functions within a s ingle search algorithm in an orthogonal way. We also describe efficient data structu es for fast state expansion ( successor generatorsandaxiom evaluators ) and present a new non-heuristic search algorithm called focused iterative-broadening search , which utilizes the information encoded in causal graphs in a ovel way. Fast Downward has proven remarkably successful: It won the “ classical” (i. e., propositional, non-optimising) track of the 4th International Planning Co mpetition at ICAPS 2004, following in the footsteps of planners such as FF and LPG. Our experiments show that it also performs very well on the benchmarks of the earlier planning competitions a d provide some insights about the usefulness of the new search enhancements.",
"title": ""
},
{
"docid": "f29ed3c9f3de56bd3e8ec7a24860043b",
"text": "Antennas implanted in a human body are largely applicable to hyperthermia and biotelemetry. To make practical use of antennas inside a human body, resonance characteristics of the implanted antennas and their radiation signature outside the body must be evaluated through numerical analysis and measurement setup. Most importantly, the antenna must be designed with an in-depth consideration given to its surrounding environment. In this paper, the spherical dyadic Green's function (DGF) expansions and finite-difference time-domain (FDTD) code are applied to analyze the electromagnetic characteristics of dipole antennas and low-profile patch antennas implanted in the human head and body. All studies to characterize and design the implanted antennas are performed at the biomedical frequency band of 402-405 MHz. By comparing the results from two numerical methodologies, the accuracy of the spherical DGF application for a dipole antenna at the center of the head is evaluated. We also consider how much impact a shoulder has on the performance of the dipole inside the head using FDTD. For the ease of the design of implanted low-profile antennas, simplified planar geometries based on a real human body are proposed. Two types of low-profile antennas, i.e., a spiral microstrip antenna and a planar inverted-F antenna, with superstrate dielectric layers are initially designed for medical devices implanted in the chest of the human body using FDTD simulations. The radiation performances of the designed low-profile antennas are estimated in terms of radiation patterns, radiation efficiency, and specific absorption rate. Maximum available power calculated to characterize the performance of a communication link between the designed antennas and an exterior antenna show how sensitive receivers are required to build a reliable telemetry link.",
"title": ""
},
{
"docid": "890c2b3413760599f9021e83fa6338a4",
"text": "Ensuring the efficient and robust operation of distributed computational infrastructures is critical, given that their scale and overall complexity is growing at an alarming rate and that their management is rapidly exceeding human capability. Clustering analysis can be used to find patterns and trends in system operational data, as well as highlight deviations from these patterns. Such analysis can be essential for verifying the correctness and efficiency of the operation of the system, as well as for discovering specific situations of interest, such as anomalies or faults, that require appropriate management actions.\n This work analyzes the automated application of clustering for online system management, from the point of view of the suitability of different clustering approaches for the online analysis of system data in a distributed environment, with minimal prior knowledge and within a timeframe that allows the timely interpretation of and response to clustering results. For this purpose, we evaluate DOC (Decentralized Online Clustering), a clustering algorithm designed to support data analysis for autonomic management, and compare it to existing and widely used clustering algorithms. The comparative evaluations will show that DOC achieves a good balance in the trade-offs inherent in the challenges for this type of online management.",
"title": ""
},
{
"docid": "8a8acb74a69005a37a0adbb3b6e45746",
"text": "We introduce Similarity Group Proposal Network (SGPN), a simple and intuitive deep learning framework for 3D object instance segmentation on point clouds. SGPN uses a single network to predict point grouping proposals and a corresponding semantic class for each proposal, from which we can directly extract instance segmentation results. Important to the effectiveness of SGPN is its novel representation of 3D instance segmentation results in the form of a similarity matrix that indicates the similarity between each pair of points in embedded feature space, thus producing an accurate grouping proposal for each point. Experimental results on various 3D scenes show the effectiveness of our method on 3D instance segmentation, and we also evaluate the capability of SGPN to improve 3D object detection and semantic segmentation results. We also demonstrate its flexibility by seamlessly incorporating 2D CNN features into the framework to boost performance.",
"title": ""
},
{
"docid": "7e6de21317f08e934ecba93a5a8735d7",
"text": "Robot technology is emerging for applications in disaster prevention with devices such as fire-fighting robots, rescue robots, and surveillance robots. In this paper, we suggest an portable fire evacuation guide robot system that can be thrown into a fire site to gather environmental information, search displaced people, and evacuate them from the fire site. This spool-like small and light mobile robot can be easily carried and remotely controlled by means of a laptop-sized tele-operator. It contains the following functional units: a camera to capture the fire site; sensors to gather temperature data, CO gas, and O2 concentrations; and a microphone with speaker for emergency voice communications between firefighter and victims. The robot's design gives its high-temperature protection, excellent waterproofing, and high impact resistance. Laboratory tests were performed for evaluating the performance of the proposed evacuation guide robot system.",
"title": ""
},
{
"docid": "ae28bc02e9f0891d8338980cd169ada4",
"text": "We investigated the possibility of using a machine-learning scheme in conjunction with commercial wearable EEG-devices for translating listener's subjective experience of music into scores that can be used in popular on-demand music streaming services. Our study resulted into two variants, differing in terms of performance and execution time, and hence, subserving distinct applications in online streaming music platforms. The first method, NeuroPicks, is extremely accurate but slower. It is based on the well-established neuroscientific concepts of brainwave frequency bands, activation asymmetry index and cross frequency coupling (CFC). The second method, NeuroPicksVQ, offers prompt predictions of lower credibility and relies on a custom-built version of vector quantization procedure that facilitates a novel parameterization of the music-modulated brainwaves. Beyond the feature engineering step, both methods exploit the inherent efficiency of extreme learning machines (ELMs) so as to translate, in a personalized fashion, the derived patterns into a listener's score. NeuroPicks method may find applications as an integral part of contemporary music recommendation systems, while NeuroPicksVQ can control the selection of music tracks. Encouraging experimental results, from a pragmatic use of the systems, are presented.",
"title": ""
},
{
"docid": "37ec45b042e70f175f2ef5b09b6a16c1",
"text": "Online participation and content contribution are pillars of the Internet revolution and are core activities for younger generations online. This study investigated participation patterns, users' contributions and gratification mechanisms, as well as the gender differences of Israeli learners in the Scratch online community. The findings showed that: (1) Participation patterns reveal two distinct participation types \"project creators\" and \"social participators\", suggesting different users' needs. (2) Community members gratified \"project creators\" and \"social participators\" for their investment – using several forms of community feedback. Gratification at the user level was given both to \"project creators\" and \"social participators\" – community members added them as friends. The majority of the variance associated with community feedback was explained by seven predictors. However, gratification at the project level was different for the two participation types active \"project creators\" received less feedback on their projects, while active \"social participators\" received more. Project feedback positively correlated with social participation investment, but negatively correlated with project creation investment. A possible explanation is that community members primarily left feedback to their friends. (3) No gender differences were found in participation patterns or in project complexity, suggesting that Scratch provides similar opportunities to both genders in programming, learning, and participation.",
"title": ""
},
{
"docid": "6c2d48d47d3550dc55f7e4d02777f60f",
"text": "The purpose of this study is to analyze three separate constructs (demographics, study habits, and technology familiarity) that can be used to identify university students’ characteristics and the relationship between each of these constructs with student achievement. A survey method was used for the current study, and the participants included 2,949 university students from 11 faculties of a public university in Turkey. A survey was used to collect data, and the data were analyzed using the chi-squared automatic interaction detection (CHAID) algorithm. The results of the study revealed that female students are significantly more successful than male students. In addition, the more introverted students, whether male or female, have higher grade point averages (GPAs) than those students who are more extroverted. Furthermore, male students who use the Internet more than 22 hours per week and use the Internet for up to six different aims have the lowest GPAs among all students, while female students who use the Internet for up to 21 hours per week have the highest GPAs among all students. The implications of these findings are also discussed herein.",
"title": ""
}
] | scidocsrr |
9ed1a87bc398c68eab5380f3b343704e | To catch a chorus: using chroma-based representations for audio thumbnailing | [
{
"docid": "0297b1f3565e4d1a3554137ac4719cfd",
"text": "Systems to automatically provide a representative summary or `Key Phrase' of a piece of music are described. For a `rock' song with `verse' and `chorus' sections, we aim to return the chorus or in any case the most repeated and hence most memorable section. The techniques are less applicable to music with more complicated structure although possibly our general framework could still be used with di erent heuristics. Our process consists of three steps. First we parameterize the song into features. Next we use these features to discover the song structure, either by clustering xed-length segments or by training a hidden Markov model (HMM) for the song. Finally, given this structure, we use heuristics to choose the Key Phrase. Results for summaries of 18 Beatles songs evaluated by ten users show that the technique based on clustering is superior to the HMM approach and to choosing the Key Phrase at random.",
"title": ""
}
] | [
{
"docid": "18f9fff4bd06f28cd39c97ff40467d0f",
"text": "Smart agriculture is an emerging concept, because IOT sensors are capable of providing information about agriculture fields and then act upon based on the user input. In this Paper, it is proposed to develop a Smart agriculture System that uses advantages of cutting edge technologies such as Arduino, IOT and Wireless Sensor Network. The paper aims at making use of evolving technology i.e. IOT and smart agriculture using automation. Monitoring environmental conditions is the major factor to improve yield of the efficient crops. The feature of this paper includes development of a system which can monitor temperature, humidity, moisture and even the movement of animals which may destroy the crops in agricultural field through sensors using Arduino board and in case of any discrepancy send a SMS notification as well as a notification on the application developed for the same to the farmer’s smartphone using Wi-Fi/3G/4G. The system has a duplex communication link based on a cellularInternet interface that allows for data inspection and irrigation scheduling to be programmed through an android application. Because of its energy autonomy and low cost, the system has the potential to be useful in water limited geographically isolated areas.",
"title": ""
},
{
"docid": "2e89bc59f85b14cf40a868399a3ce351",
"text": "CONTEXT\nYouth worldwide play violent video games many hours per week. Previous research suggests that such exposure can increase physical aggression.\n\n\nOBJECTIVE\nWe tested whether high exposure to violent video games increases physical aggression over time in both high- (United States) and low- (Japan) violence cultures. We hypothesized that the amount of exposure to violent video games early in a school year would predict changes in physical aggressiveness assessed later in the school year, even after statistically controlling for gender and previous physical aggressiveness.\n\n\nDESIGN\nIn 3 independent samples, participants' video game habits and physically aggressive behavior tendencies were assessed at 2 points in time, separated by 3 to 6 months.\n\n\nPARTICIPANTS\nOne sample consisted of 181 Japanese junior high students ranging in age from 12 to 15 years. A second Japanese sample consisted of 1050 students ranging in age from 13 to 18 years. The third sample consisted of 364 United States 3rd-, 4th-, and 5th-graders ranging in age from 9 to 12 years. RESULTS. Habitual violent video game play early in the school year predicted later aggression, even after controlling for gender and previous aggressiveness in each sample. Those who played a lot of violent video games became relatively more physically aggressive. Multisample structure equation modeling revealed that this longitudinal effect was of a similar magnitude in the United States and Japan for similar-aged youth and was smaller (but still significant) in the sample that included older youth.\n\n\nCONCLUSIONS\nThese longitudinal results confirm earlier experimental and cross-sectional studies that had suggested that playing violent video games is a significant risk factor for later physically aggressive behavior and that this violent video game effect on youth generalizes across very different cultures. As a whole, the research strongly suggests reducing the exposure of youth to this risk factor.",
"title": ""
},
{
"docid": "1e4daa242bfee88914b084a1feb43212",
"text": "In this paper, we present a novel approach of human activity prediction. Human activity prediction is a probabilistic process of inferring ongoing activities from videos only containing onsets (i.e. the beginning part) of the activities. The goal is to enable early recognition of unfinished activities as opposed to the after-the-fact classification of completed activities. Activity prediction methodologies are particularly necessary for surveillance systems which are required to prevent crimes and dangerous activities from occurring. We probabilistically formulate the activity prediction problem, and introduce new methodologies designed for the prediction. We represent an activity as an integral histogram of spatio-temporal features, efficiently modeling how feature distributions change over time. The new recognition methodology named dynamic bag-of-words is developed, which considers sequential nature of human activities while maintaining advantages of the bag-of-words to handle noisy observations. Our experiments confirm that our approach reliably recognizes ongoing activities from streaming videos with a high accuracy.",
"title": ""
},
{
"docid": "0e068a4e7388ed456de4239326eb9b08",
"text": "The Web so far has been incredibly successful at delivering information to human users. So successful actually, that there is now an urgent need to go beyond a browsing human. Unfortunately, the Web is not yet a well organized repository of nicely structured documents but rather a conglomerate of volatile HTML pages. To address this problem, we present the World Wide Web Wrapper Factory (W4F), a toolkit for the generation of wrappers for Web sources, that offers: (1) an expressive language to specify the extraction of complex structures from HTML pages; (2) a declarative mapping to various data formats like XML; (3) some visual tools to make the engineering of wrappers faster and easier.",
"title": ""
},
{
"docid": "946330bdcc96711090f15dbaf772edf6",
"text": "This paper deals with the estimation of the channel impulse response (CIR) in orthogonal frequency division multiplexed (OFDM) systems. In particular, we focus on two pilot-aided schemes: the maximum likelihood estimator (MLE) and the Bayesian minimum mean square error estimator (MMSEE). The advantage of the former is that it is simpler to implement as it needs no information on the channel statistics. On the other hand, the MMSEE is expected to have better performance as it exploits prior information about the channel. Theoretical analysis and computer simulations are used in the comparisons. At SNR values of practical interest, the two schemes are found to exhibit nearly equal performance, provided that the number of pilot tones is sufficiently greater than the CIRs length. Otherwise, the MMSEE is superior. In any case, the MMSEE is more complex to implement.",
"title": ""
},
{
"docid": "50dc3186ad603ef09be8cca350ff4d77",
"text": "Design iteration time in SoC design flow is reduced through performance exploration at a higher level of abstraction. This paper proposes an accurate and fast performance analysis method in early stage of design process using a behavioral model written in C/C++ language. We made a cycle-accurate but fast and flexible compiled instruction set simulator (ISS) and IP models that represent hardware functionality and performance. System performance analyzer configured by the target communication architecture analyzes the performance utilizing event-traces obtained by running the ISS and IP models. This solution is automated and implemented in the tool, HIPA. We obtain diverse performance profiling results and achieve 95% accuracy using an abstracted C model. We also achieve about 20 times speed-up over corresponding co-simulation tools.",
"title": ""
},
{
"docid": "2c9e17d4c5bfb803ea1ff20ea85fbd10",
"text": "In this paper, we present a new and significant theoretical discovery. If the absolute height difference between base station (BS) antenna and user equipment (UE) antenna is larger than zero, then the network capacity performance in terms of the area spectral efficiency (ASE) will continuously decrease as the BS density increases for ultra-dense (UD) small cell networks (SCNs). This performance behavior has a tremendous impact on the deployment of UD SCNs in the 5th- generation (5G) era. Network operators may invest large amounts of money in deploying more network infrastructure to only obtain an even worse network performance. Our study results reveal that it is a must to lower the SCN BS antenna height to the UE antenna height to fully achieve the capacity gains of UD SCNs in 5G. However, this requires a revolutionized approach of BS architecture and deployment, which is explored in this paper too.",
"title": ""
},
{
"docid": "74afc31d233f76e28b58f019dfc28df4",
"text": "We present a motion planner for autonomous highway driving that adapts the state lattice framework pioneered for planetary rover navigation to the structured environment of public roadways. The main contribution of this paper is a search space representation that allows the search algorithm to systematically and efficiently explore both spatial and temporal dimensions in real time. This allows the low-level trajectory planner to assume greater responsibility in planning to follow a leading vehicle, perform lane changes, and merge between other vehicles. We show that our algorithm can readily be accelerated on a GPU, and demonstrate it on an autonomous passenger vehicle.",
"title": ""
},
{
"docid": "12fb3e47b285dcabe11806aeb7949520",
"text": "This paper presents a differential low-noise highresolution switched-capacitor readout circuit that is intended for capacitive sensors. Amplitude modulation/demodulation and correlated double sampling are used to minimize the adverse effects of the amplifier offset and flicker (1/f) noise and improve the sensitivity of the readout circuit. In order to simulate the response of the readout circuit, a Verilog-A model is used to model the variable sense capacitor. The interface circuit is designed and laid out in a 0.8 µm CMOS process. Postlayout simulation results show that the readout interface is able to linearly resolve sense capacitance variation from 2.8 aF to 0.3 fF with a sensitivity of 7.88 mV/aF from a single 5V supply (the capacitance-to-voltage conversion is approximately linear for capacitance changes from 0.3 fF to~1.2 fF). The power consumption of the circuit is 9.38 mW.",
"title": ""
},
{
"docid": "c7c1bafc295af6ebc899e391daae04c1",
"text": "Non-orthogonal multiple access (NOMA) is expected to be a promising multiple access technique for 5G networks due to its superior spectral efficiency. In this letter, the ergodic capacity maximization problem is first studied for the Rayleigh fading multiple-input multiple-output (MIMO) NOMA systems with statistical channel state information at the transmitter (CSIT). We propose both optimal and low complexity suboptimal power allocation schemes to maximize the ergodic capacity of MIMO NOMA system with total transmit power constraint and minimum rate constraint of the weak user. Numerical results show that the proposed NOMA schemes significantly outperform the traditional orthogonal multiple access scheme.",
"title": ""
},
{
"docid": "a845a36fb352f347224e9902087d9625",
"text": "Electroencephalography (EEG) is the most popular brain activity recording technique used in wide range of applications. One of the commonly faced problems in EEG recordings is the presence of artifacts that come from sources other than brain and contaminate the acquired signals significantly. Therefore, much research over the past 15 years has focused on identifying ways for handling such artifacts in the preprocessing stage. However, this is still an active area of research as no single existing artifact detection/removal method is complete or universal. This article presents an extensive review of the existing state-of-the-art artifact detection and removal methods from scalp EEG for all potential EEG-based applications and analyses the pros and cons of each method. First, a general overview of the different artifact types that are found in scalp EEG and their effect on particular applications are presented. In addition, the methods are compared based on their ability to remove certain types of artifacts and their suitability in relevant applications (only functional comparison is provided not performance evaluation of methods). Finally, the future direction and expected challenges of current research is discussed. Therefore, this review is expected to be helpful for interested researchers who will develop and/or apply artifact handling algorithm/technique in future for their applications as well as for those willing to improve the existing algorithms or propose a new solution in this particular area of research.",
"title": ""
},
{
"docid": "3eebd7a2d8f7ae93a6a70c7e680b4b68",
"text": "BACKGROUND\nThis longitudinal community study assessed the prevalence and development of psychiatric disorders from age 9 through 16 years and examined homotypic and heterotypic continuity.\n\n\nMETHODS\nA representative population sample of 1420 children aged 9 to 13 years at intake were assessed annually for DSM-IV disorders until age 16 years.\n\n\nRESULTS\nAlthough 3-month prevalence of any disorder averaged 13.3% (95% confidence interval [CI], 11.7%-15.0%), during the study period 36.7% of participants (31% of girls and 42% of boys) had at least 1 psychiatric disorder. Some disorders (social anxiety, panic, depression, and substance abuse) increased in prevalence, whereas others, including separation anxiety disorder and attention-deficit/hyperactivity disorder (ADHD), decreased. Lagged analyses showed that children with a history of psychiatric disorder were 3 times more likely than those with no previous disorder to have a diagnosis at any subsequent wave (odds ratio, 3.7; 95% CI, 2.9-4.9; P<.001). Risk from a previous diagnosis was high among both girls and boys, but it was significantly higher among girls. Continuity of the same disorder (homotypic) was significant for all disorders except specific phobias. Continuity from one diagnosis to another (heterotypic) was significant from depression to anxiety and anxiety to depression, from ADHD to oppositional defiant disorder, and from anxiety and conduct disorder to substance abuse. Almost all the heterotypic continuity was seen in girls.\n\n\nCONCLUSIONS\nThe risk of having at least 1 psychiatric disorder by age 16 years is much higher than point estimates would suggest. Concurrent comorbidity and homotypic and heterotypic continuity are more marked in girls than in boys.",
"title": ""
},
{
"docid": "adabd3971fa0abe5c60fcf7a8bb3f80c",
"text": "The present paper describes the development of a query focused multi-document automatic summarization. A graph is constructed, where the nodes are sentences of the documents and edge scores reflect the correlation measure between the nodes. The system clusters similar texts having related topical features from the graph using edge scores. Next, query dependent weights for each sentence are added to the edge score of the sentence and accumulated with the corresponding cluster score. Top ranked sentence of each cluster is identified and compressed using a dependency parser. The compressed sentences are included in the output summary. The inter-document cluster is revisited in order until the length of the summary is less than the maximum limit. The summarizer has been tested on the standard TAC 2008 test data sets of the Update Summarization Track. Evaluation of the summarizer yielded accuracy scores of 0.10317 (ROUGE-2) and 0.13998 (ROUGE–SU-4).",
"title": ""
},
{
"docid": "60182038191a764fd7070e8958185718",
"text": "Shales of very low metamorphic grade from the 2.78 to 2.45 billion-year-old (Ga) Mount Bruce Supergroup, Pilbara Craton, Western Australia, were analyzed for solvent extractable hydrocarbons. Samples were collected from ten drill cores and two mines in a sampling area centered in the Hamersley Basin near Wittenoom and ranging 200 km to the southeast, 100 km to the southwest and 70 km to the northwest. Almost all analyzed kerogenous sedimentary rocks yielded solvent extractable organic matter. Concentrations of total saturated hydrocarbons were commonly in the range of 1 to 20 ppm ( g/g rock) but reached maximum values of 1000 ppm. The abundance of aromatic hydrocarbons was 1 to 30 ppm. Analysis of the extracts by gas chromatography-mass spectrometry (GC-MS) and GC-MS metastable reaction monitoring (MRM) revealed the presence of n-alkanes, midand end-branched monomethylalkanes, -cyclohexylalkanes, acyclic isoprenoids, diamondoids, trito pentacyclic terpanes, steranes, aromatic steroids and polyaromatic hydrocarbons. Neither plant biomarkers nor hydrocarbon distributions indicative of Phanerozoic contamination were detected. The host kerogens of the hydrocarbons were depleted in C by 2 to 21‰ relative ton-alkanes, a pattern typical of, although more extreme than, other Precambrian samples. Acyclic isoprenoids showed carbon isotopic depletion relative to n-alkanes and concentrations of 2 -methylhopanes were relatively high, features rarely observed in the Phanerozoic but characteristic of many other Precambrian bitumens. Molecular parameters, including sterane and hopane ratios at their apparent thermal maxima, condensate-like alkane profiles, high monoand triaromatic steroid maturity parameters, high methyladamantane and methyldiamantane indices and high methylphenanthrene maturity ratios, indicate thermal maturities in the wet-gas generation zone. Additionally, extracts from shales associated with iron ore deposits at Tom Price and Newman have unusual polyaromatic hydrocarbon patterns indicative of pyrolytic dealkylation. The saturated hydrocarbons and biomarkers in bitumens from the Fortescue and Hamersley Groups are characterized as ‘probably syngenetic with their Archean host rock’ based on their typical Precambrian molecular and isotopic composition, extreme maturities that appear consistent with the thermal history of the host sediments, the absence of biomarkers diagnostic of Phanerozoic age, the absence of younger petroleum source rocks in the basin and the wide geographic distribution of the samples. Aromatic hydrocarbons detected in shales associated with iron ore deposits at Mt Tom Price and Mt Whaleback are characterized as ‘clearly Archean’ based on their hypermature composition and covalent bonding to kerogen. Copyright © 2003 Elsevier Ltd",
"title": ""
},
{
"docid": "1d562cc5517fa367a0f807ce7bb1c897",
"text": "Wireless sensor networks for environmental monitoring and agricultural applications often face long-range requirements at low bit-rates together with large numbers of nodes. This paper presents the design and test of a novel wireless sensor network that combines a large radio range with very low power consumption and cost. Our asymmetric sensor network uses ultralow-cost 40 MHz transmitters and a sensitive software defined radio receiver with multichannel capability. Experimental radio range measurements in two different outdoor environments demonstrate a single-hop range of up to 1.8 km. A theoretical model for radio propagation at 40 MHz in outdoor environments is proposed and validated with the experimental measurements. The reliability and fidelity of network communication over longer time periods is evaluated with a deployment for distributed temperature measurements. Our results demonstrate the feasibility of the transmit-only low-frequency system design approach for future environmental sensor networks. Although there have been several papers proposing the theoretical benefits of this approach, to the best of our knowledge this is the first paper to provide experimental validation of such claims.",
"title": ""
},
{
"docid": "440b90f61bc7826c1165a1f3d306bd5e",
"text": "Image descriptors based on activations of Convolutional Neural Networks (CNNs) have become dominant in image retrieval due to their discriminative power, compactness of representation, and search efficiency. Training of CNNs, either from scratch or fine-tuning, requires a large amount of annotated data, where a high quality of annotation is often crucial. In this work, we propose to fine-tune CNNs for image retrieval on a large collection of unordered images in a fully automated manner. Reconstructed 3D models obtained by the state-of-the-art retrieval and structure-from-motion methods guide the selection of the training data. We show that both hard-positive and hard-negative examples, selected by exploiting the geometry and the camera positions available from the 3D models, enhance the performance of particular-object retrieval. CNN descriptor whitening discriminatively learned from the same training data outperforms commonly used PCA whitening. We propose a novel trainable Generalized-Mean (GeM) pooling layer that generalizes max and average pooling and show that it boosts retrieval performance. Applying the proposed method to the VGG network achieves state-of-the-art performance on the standard benchmarks: Oxford Buildings, Paris, and Holidays datasets.",
"title": ""
},
{
"docid": "6fd9793e9f44b726028f8c879157f1f7",
"text": "Modeling, simulation and implementation of Voltage Source Inverter (VSI) fed closed loop control of 3-phase induction motor drive is presented in this paper. A mathematical model of the drive system is developed and is used for the simulation study. Simulation is carried out using Scilab/Scicos, which is free and open source software. The above said drive system is implemented in laboratory using a PC and an add-on card. In this study the air gap flux of the machine is kept constant by maintaining Volt/Hertz (v/f) ratio constant. The experimental transient responses of the drive system obtained for change in speed under no load as well as under load conditions are presented.",
"title": ""
},
{
"docid": "61998885a181e074eadd41a2f067f697",
"text": "Introduction. Opinion mining has been receiving increasing attention from a broad range of scientific communities since early 2000s. The present study aims to systematically investigate the intellectual structure of opinion mining research. Method. Using topic search, citation expansion, and patent search, we collected 5,596 bibliographic records of opinion mining research. Then, intellectual landscapes, emerging trends, and recent developments were identified. We also captured domain-level citation trends, subject category assignment, keyword co-occurrence, document co-citation network, and landmark articles. Analysis. Our study was guided by scientometric approaches implemented in CiteSpace, a visual analytic system based on networks of co-cited documents. We also employed a dual-map overlay technique to investigate epistemological characteristics of the domain. Results. We found that the investigation of algorithmic and linguistic aspects of opinion mining has been of the community’s greatest interest to understand, quantify, and apply the sentiment orientation of texts. Recent thematic trends reveal that practical applications of opinion mining such as the prediction of market value and investigation of social aspects of product feedback have received increasing attention from the community. Conclusion. Opinion mining is fast-growing and still developing, exploring the refinements of related techniques and applications in a variety of domains. We plan to apply the proposed analytics to more diverse domains and comprehensive publication materials to gain more generalized understanding of the true structure of a science.",
"title": ""
},
{
"docid": "dbafea1fbab901ff5a53f752f3bfb4b8",
"text": "Three studies were conducted to test the hypothesis that high trait aggressive individuals are more affected by violent media than are low trait aggressive individuals. In Study 1, participants read film descriptions and then chose a film to watch. High trait aggressive individuals were more likely to choose a violent film to watch than were low trait aggressive individuals. In Study 2, participants reported their mood before and after the showing of a violet or nonviolent videotape. High trait aggressive individuals felt more angry after viewing the violent videotape than did low trait aggressive individuals. In Study 3, participants first viewed either a violent or a nonviolent videotape and then competed with an \"opponent\" on a reaction time task in which the loser received a blast of unpleasant noise. Videotape violence was more likely to increase aggression in high trait aggressive individuals than in low trait aggressive individuals.",
"title": ""
},
{
"docid": "7d8dcb65acd5e0dc70937097ded83013",
"text": "This paper addresses the problem of mapping natural language sentences to lambda–calculus encodings of their meaning. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence. We apply the method to the task of learning natural language interfaces to databases and show that the learned parsers outperform previous methods in two benchmark database domains.",
"title": ""
}
] | scidocsrr |
f6489b25ff7c3f5aa56afd450b184e34 | To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem? | [
{
"docid": "80e4748abbb22d2bfefa5e5cbd78fb86",
"text": "A reimplementation of the UNIX file system is described. The reimplementation provides substantially higher throughput rates by using more flexible allocation policies that allow better locality of reference and can be adapted to a wide range of peripheral and processor characteristics. The new file system clusters data that is sequentially accessed and provides tw o block sizes to allo w fast access to lar ge files while not wasting large amounts of space for small files. File access rates of up to ten times f aster than the traditional UNIX file system are e xperienced. Longneeded enhancements to the programmers’ interface are discussed. These include a mechanism to place advisory locks on files, extensions of the name space across file systems, the ability to use long file names, and provisions for administrati ve control of resource usage. Revised February 18, 1984 CR",
"title": ""
}
] | [
{
"docid": "e603d2a71580691cf6a61f0e892127cc",
"text": "Advances in tourism economics have enabled us to collect massive amounts of travel tour data. If properly analyzed, this data can be a source of rich intelligence for providing real-time decision making and for the provision of travel tour recommendations. However, tour recommendation is quite different from traditional recommendations, because the tourist's choice is directly affected by the travel cost, which includes the financial cost and the time. To that end, in this paper, we provide a focused study of cost-aware tour recommendation. Along this line, we develop two cost-aware latent factor models to recommend travel packages by considering both the travel cost and the tourist's interests. Specifically, we first design a cPMF model, which models the tourist's cost with a 2-dimensional vector. Also, in this cPMF model, the tourist's interests and the travel cost are learnt by exploring travel tour data. Furthermore, in order to model the uncertainty in the travel cost, we further introduce a Gaussian prior into the cPMF model and develop the GcPMF model, where the Gaussian prior is used to express the uncertainty of the travel cost. Finally, experiments on real-world travel tour data show that the cost-aware recommendation models outperform state-of-the-art latent factor models with a significant margin. Also, the GcPMF model with the Gaussian prior can better capture the impact of the uncertainty of the travel cost, and thus performs better than the cPMF model.",
"title": ""
},
{
"docid": "0d733d7f0782bfaf245bf344a46b58b8",
"text": "Smart Cities rely on the use of ICTs for a more efficient and intelligent use of resources, whilst improving citizens' quality of life and reducing the environmental footprint. As far as the livability of cities is concerned, traffic is one of the most frequent and complex factors directly affecting citizens. Particularly, drivers in search of a vacant parking spot are a non-negligible source of atmospheric and acoustic pollution. Although some cities have installed sensor-based vacant parking spot detectors in some neighbourhoods, the cost of this approach makes it unfeasible at large scale. As an approach to implement a sustainable solution to the vacant parking spot detection problem in urban environments, this work advocates fusing the information from small-scale sensor-based detectors with that obtained from exploiting the widely-deployed video surveillance camera networks. In particular, this paper focuses on how video analytics can be exploited as a prior step towards Smart City solutions based on data fusion. Through a set of experiments carefully planned to replicate a real-world scenario, the vacant parking spot detection success rate of the proposed system is evaluated through a critical comparison of local and global visual features (either alone or fused at feature level) and different classifier systems applied to the task. Furthermore, the system is tested under setup scenarios of different complexities, and experimental results show that while local features are best when training with small amounts of highly accurate on-site data, they are outperformed by their global counterparts when training with more samples from an external vehicle database.",
"title": ""
},
{
"docid": "74c386f9d3bc9bbe747a2186542c1fcf",
"text": "Assessment of right ventricular afterload in systolic heart failure seems mandatory as it plays an important role in predicting outcome. The purpose of this study is to estimate pulmonary vascular elastance as a reliable surrogate for right ventricular afterload in systolic heart failure. Forty-two patients with systolic heart failure (ejection fraction <35%) were studied by right heart catheterization. Pulmonary arterial elastance was calculated with three methods: Ea(PV) = (end-systolic pulmonary arterial pressure)/stroke volume; Ea*(PV) = (mean pulmonary arterial pressure - pulmonary capillary wedge pressure)/stroke volume; and PPSV = pulmonary arterial pulse pressure (systolic - diastolic)/stroke volume. These measures were compared with pulmonary vascular resistance ([mean pulmonary arterial pressure - pulmonary capillary wedge pressure]/CO). All estimates of pulmonary vascular elastance were significantly correlated with pulmonary vascular resistance (r=0.772, 0.569, and 0.935 for Ea(PV), Ea*(PV), and PPSV, respectively; P <.001). Pulmonary vascular elastance can easily be estimated by routine right heart catheterization in systolic heart failure and seems promising in assessment of right ventricular afterload.",
"title": ""
},
{
"docid": "8f73870d5e999c0269059c73bb85e05c",
"text": "Placing the DRAM in the same package as a processor enables several times higher memory bandwidth than conventional off-package DRAM. Yet, the latency of in-package DRAM is not appreciably lower than that of off-package DRAM. A promising use of in-package DRAM is as a large cache. Unfortunately, most previous DRAM cache designs optimize mainly for cache hit latency and do not consider bandwidth efficiency as a first-class design constraint. Hence, as we show in this paper, these designs are suboptimal for use with in-package DRAM.\n We propose a new DRAM cache design, Banshee, that optimizes for both in-package and off-package DRAM bandwidth efficiency without degrading access latency. Banshee is based on two key ideas. First, it eliminates the tag lookup overhead by tracking the contents of the DRAM cache using TLBs and page table entries, which is efficiently enabled by a new lightweight TLB coherence protocol we introduce. Second, it reduces unnecessary DRAM cache replacement traffic with a new bandwidth-aware frequency-based replacement policy. Our evaluations show that Banshee significantly improves performance (15% on average) and reduces DRAM traffic (35.8% on average) over the best-previous latency-optimized DRAM cache design.",
"title": ""
},
{
"docid": "c37da50c2d31d262cb903405a7990ea0",
"text": "The automotive industry could be facing a situation of profound change and opportunity in the coming decades. There are a number of influencing factors such as increasing urban and aging populations, self-driving cars, 3D parts printing, energy innovation, and new models of transportation service delivery (Zipcar, Uber). The connected car means that vehicles are now part of the connected world, continuously Internet-connected, generating and transmitting data, which on the one hand can be helpfully integrated into applications, like real-time traffic alerts broadcast to smartwatches, but also raises security and privacy concerns. This paper explores the automotive connected world, and describes five killer QS (Quantified Self)-auto sensor applications that link quantified-self sensors (sensors that measure the personal biometrics of individuals like heart rate) and automotive sensors (sensors that measure driver and passenger biometrics or quantitative automotive performance metrics like speed and braking activity). The applications are fatigue detection, real-time assistance for parking and accidents, anger management and stress reduction, keyless authentication and digital identity verification, and DIY diagnostics. These kinds of applications help to demonstrate the benefit of connected world data streams in the automotive industry and beyond where, more fundamentally for human progress, the automation of both physical and now cognitive tasks is underway.",
"title": ""
},
{
"docid": "e0a2031394922edec46eaac60c473358",
"text": "In-wheel-motor drive electric vehicle (EV) is an innovative configuration, in which each wheel is driven individually by an electric motor. It is possible to use an electronic differential (ED) instead of the heavy mechanical differential because of the fast response time of the motor. A new ED control approach for a two-in-wheel-motor drive EV is devised based on the fuzzy logic control method. The fuzzy logic method employs to estimate the slip rate of each wheel considering the complex and nonlinear of the system. Then, the ED system distributes torque and power to each motor according to requirements. The effectiveness and validation of the proposed control method are evaluated in the Matlab/Simulink environment. Simulation results show that the new ED control system can keep the slip rate within the optimized range, ensuring the stability of the vehicle either in a straight or a curve lane.",
"title": ""
},
{
"docid": "b682d1da4fd31e470aa96244a47f081a",
"text": "With Android being the most widespread mobile platform, protecting it against malicious applications is essential. Android users typically install applications from large remote repositories, which provides ample opportunities for malicious newcomers. In this paper, we propose a simple, and yet highly effective technique for detecting malicious Android applications on a repository level. Our technique performs automatic classification based on tracking system calls while applications are executed in a sandbox environment. We implemented the technique in a tool called MALINE, and performed extensive empirical evaluation on a suite of around 12,000 applications. The evaluation yields an overall detection accuracy of 93% with a 5% benign application classification error, while results are improved to a 96% detection accuracy with up-sampling. This indicates that our technique is viable to be used in practice. Finally, we show that even simplistic feature choices are highly effective, suggesting that more heavyweight approaches should be thoroughly (re)evaluated. Android Malware Detection Based on System Calls Marko Dimjašević, Simone Atzeni, Zvonimir Rakamarić University of Utah, USA {marko,simone,zvonimir}@cs.utah.edu Ivo Ugrina University of Zagreb, Croatia",
"title": ""
},
{
"docid": "48427804f2e704ab6ea15251c624cdf2",
"text": "In this work, we propose Residual Attention Network, a convolutional neural network using attention mechanism which can incorporate with state-of-art feed forward network architecture in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. Inside each Attention Module, bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can be easily scaled up to hundreds of layers. Extensive analyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and ImageNet (4.8% single model and single crop, top-5 error). Note that, our method achieves 0.6% top-1 accuracy improvement with 46% trunk depth and 69% forward FLOPs comparing to ResNet-200. The experiment also demonstrates that our network is robust against noisy labels.",
"title": ""
},
{
"docid": "0c61bfbb7106c5592ecb9677e617f83f",
"text": "BACKGROUND\nAcute exacerbations of chronic obstructive pulmonary disease (COPD) are associated with accelerated decline in lung function, diminished quality of life, and higher mortality. Proactively monitoring patients for early signs of an exacerbation and treating them early could prevent these outcomes. The emergence of affordable wearable technology allows for nearly continuous monitoring of heart rate and physical activity as well as recording of audio which can detect features such as coughing. These signals may be able to be used with predictive analytics to detect early exacerbations. Prior to full development, however, it is important to determine the feasibility of using wearable devices such as smartwatches to intensively monitor patients with COPD.\n\n\nOBJECTIVE\nWe conducted a feasibility study to determine if patients with COPD would wear and maintain a smartwatch consistently and whether they would reliably collect and transmit sensor data.\n\n\nMETHODS\nPatients with COPD were recruited from 3 hospitals and were provided with a smartwatch that recorded audio, heart rate, and accelerations. They were asked to wear and charge it daily for 90 days. They were also asked to complete a daily symptom diary. At the end of the study period, participants were asked what would motivate them to regularly use a wearable for monitoring of their COPD.\n\n\nRESULTS\nOf 28 patients enrolled, 16 participants completed the full 90 days. The average age of participants was 68.5 years, and 36% (10/28) were women. Survey, heart rate, and activity data were available for an average of 64.5, 65.1, and 60.2 days respectively. Technical issues caused heart rate and activity data to be unavailable for approximately 13 and 17 days, respectively. Feedback provided by participants indicated that they wanted to actively engage with the smartwatch and receive feedback about their activity, heart rate, and how to better manage their COPD.\n\n\nCONCLUSIONS\nSome patients with COPD will wear and maintain smartwatches that passively monitor audio, heart rate, and physical activity, and wearables were able to reliably capture near-continuous patient data. Further work is necessary to increase acceptability and improve the patient experience.",
"title": ""
},
{
"docid": "89d91df8511c0b0f424dd5fa20fcd212",
"text": "We present a new fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks which represent a compressed form of background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques.",
"title": ""
},
{
"docid": "8a293b95b931f4f72fe644fdfe30564a",
"text": "Today, the concept of brain connectivity plays a central role in the neuroscience. While functional connectivity is defined as the temporal coherence between the activities of different brain areas, the effective connectivity is defined as the simplest brain circuit that would produce the same temporal relationship as observed experimentally between cortical sites. The most used method to estimate effective connectivity in neuroscience is the structural equation modeling (SEM), typically used on data related to the brain hemodynamic behavior. However, the use of hemodynamic measures limits the temporal resolution on which the brain process can be followed. The present research proposes the use of the SEM approach on the cortical waveforms estimated from the high-resolution EEG data, which exhibits a good spatial resolution and a higher temporal resolution than hemodynamic measures. We performed a simulation study, in which different main factors were systematically manipulated in the generation of test signals, and the errors in the estimated connectivity were evaluated by the analysis of variance (ANOVA). Such factors were the signal-to-noise ratio and the duration of the simulated cortical activity. Since SEM technique is based on the use of a model formulated on the basis of anatomical and physiological constraints, different experimental conditions were analyzed, in order to evaluate the effect of errors made in the a priori model formulation on its performances. The feasibility of the proposed approach has been shown in a human study using high-resolution EEG recordings related to finger tapping movements.",
"title": ""
},
{
"docid": "acba717edc26ae7ba64debc5f0d73ded",
"text": "Previous phase I-II clinical trials have shown that recombinant human erythropoietin (rHuEpo) can ameliorate anemia in a portion of patients with multiple myeloma (MM) and non-Hodgkin's lymphoma (NHL). Therefore, we performed a randomized controlled multicenter study to define the optimal initial dosage and to identify predictors of response to rHuEpo. A total of 146 patients who had hemoglobin (Hb) levels < or = 11 g/dL and who had no need for transfusion at the time of enrollment entered this trial. Patients were randomized to receive 1,000 U (n = 31), 2,000 U (n = 29), 5,000 U (n = 31), or 10,000 U (n = 26) of rHuEpo daily subcutaneously for 8 weeks or to receive no therapy (n = 29). Of the patients, 84 suffered from MM and 62 from low- to intermediate-grade NHL, including chronic lymphocytic leukemia; 116 of 146 (79%) received chemotherapy during the study. The mean baseline Hb level was 9.4 +/- 1.0 g/dL. The median serum Epo level was 32 mU/mL, and endogenous Epo production was found to be defective in 77% of the patients, as judged by a value for the ratio of observed-to-predicted serum Epo levels (O/P ratio) of < or = 0.9. An intention-to-treat analysis was performed to evaluate treatment efficacy. The median average increase in Hb levels per week was 0.04 g/dL in the control group and -0.04 (P = .57), 0.22 (P = .05), 0.43 (P = .01), and 0.58 (P = .0001) g/dL in the 1,000 U, 2,000 U, 5,000 U, and 10,000 U groups, respectively (P values versus control). The probability of response (delta Hb > or = 2 g/dL) increased steadily and, after 8 weeks, reached 31% (2,000 U), 61% (5,000 U), and 62% (10,000 U), respectively. Regression analysis using Cox's proportional hazard model and classification and regression tree analysis showed that serum Epo levels and the O/P ratio were the most important factors predicting response in patients receiving 5,000 or 10,000 U. Approximately three quarters of patients presenting with Epo levels inappropriately low for the degree of anemia responded to rHuEpo, whereas only one quarter of those with adequate Epo levels did so. Classification and regression tree analysis also showed that doses of 2,000 U daily were effective in patients with an average platelet count greater than 150 x 10(9)/L. About 50% of these patients are expected to respond to rHuEpo. Thus, rHuEpo was safe and effective in ameliorating the anemia of MM and NHL patients who showed defective endogenous Epo production. From a practical point of view, we conclude that the decision to use rHuEpo in an individual anemic patient with MM or NHL should be based on serum Epo levels, whereas the choice of the initial dosage should be based on residual marrow function.",
"title": ""
},
{
"docid": "d9e0fd8abb80d6256bd86306b7112f20",
"text": "Visible light LEDs, due to their numerous advantages, are expected to become the dominant indoor lighting technology. These lights can also be switched ON/OFF at high frequency, enabling their additional use for wireless communication and indoor positioning. In this article, visible LED light--based indoor positioning systems are surveyed and classified into two broad categories based on the receiver structure. The basic principle and architecture of each design category, along with various position computation algorithms, are discussed and compared. Finally, several new research, implementation, commercialization, and standardization challenges are identified and highlighted for this relatively novel and interesting indoor localization technology.",
"title": ""
},
{
"docid": "cbed0b87ebae159115277322b21299ca",
"text": "The present work describes a classification schema for irony detection in Greek political tweets. Our hypothesis states that humorous political tweets could predict actual election results. The irony detection concept is based on subjective perceptions, so only relying on human-annotator driven labor might not be the best route. The proposed approach relies on limited labeled training data, thus a semi-supervised approach is followed, where collective-learning algorithms take both labeled and unlabeled data into consideration. We compare the semi-supervised results with the supervised ones from a previous research of ours. The hypothesis is evaluated via a correlation study between the irony that a party receives on Twitter, its respective actual election results during the Greek parliamentary elections of May 2012, and the difference between these results and the ones of the preceding elections of 2009. & 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5c9013c9514dc7deaa0b87fe9cd6db16",
"text": "To predict the uses of new technology, we present an approach grounded in science and technology studies (STS) that examines the social uses of current technology. As part of ongoing research on next-generation mobile imaging applications, we conducted an empirical study of the social uses of personal photography. We identify three: memory, creating and maintaining relationships, and self-expression. The roles of orality and materiality in these uses help us explain the observed resistances to intangible digital images and to assigning metadata and annotations. We conclude that this approach is useful for understanding the potential uses of technology and for design.",
"title": ""
},
{
"docid": "8f1d7499280f94b92044822c1dd4e59d",
"text": "WORK-LIFE BALANCE means bringing work, whether done on the job or at home, and leisure time into balance to live life to its fullest. It doesn’t mean that you spend half of your life working and half of it playing; instead, it means balancing the two to achieve harmony in physical, emotional, and spiritual health. In today’s economy, can nurses achieve work-life balance? Although doing so may be difficult, the consequences to our health can be enormous if we don’t try. This article describes some of the stresses faced by nurses and tips for attaining a healthy balance of work and leisure.",
"title": ""
},
{
"docid": "0f3cb3d8a841e0de31438da1dd99c176",
"text": "In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies.",
"title": ""
},
{
"docid": "459f368625415f80c88da01b69e94258",
"text": "Data visualization and feature selection methods are proposed based on the )oint mutual information and ICA. The visualization methods can find many good 2-D projections for high dimensional data interpretation, which cannot be easily found by the other existing methods. The new variable selection method is found to be better in eliminating redundancy in the inputs than other methods based on simple mutual information. The efficacy of the methods is illustrated on a radar signal analysis problem to find 2-D viewing coordinates for data visualization and to select inputs for a neural network classifier.",
"title": ""
},
{
"docid": "51da4d5923b30db560227155edd0621d",
"text": "The fifth generation wireless 5G development initiative is based upon 4G, which at present is struggling to meet its performance goals. The comparison between 3G and 4G wireless communication systems in relation to its architecture, speed, frequency band, switching design basis and forward error correction is studied, and were discovered that their performances are still unable to solve the unending problems of poor coverage, bad interconnectivity, poor quality of service and flexibility. An ideal 5G model to accommodate the challenges and shortfalls of 3G and 4G deployments is discussed as well as the significant system improvements on the earlier wireless technologies. The radio channel propagation characteristics for 4G and 5G systems is discussed. Major advantages of 5G network in providing myriads of services to end users personalization, terminal and network heterogeneity, intelligence networking and network convergence among other benefits are highlighted.The significance of the study is evaluated for a fast and effective connection and communication of devices like mobile phones and computers, including the capability of supporting and allowing a highly flexible network connectivity.",
"title": ""
},
{
"docid": "a903f9eb225a79ebe963d1905af6d3c8",
"text": "We have developed a multithreaded implementation of breadth-first search (BFS) of a sparse graph using the Cilk++ extensions to C++. Our PBFS program on a single processor runs as quickly as a standar. C++ breadth-first search implementation. PBFS achieves high work-efficiency by using a novel implementation of a multiset data structure, called a \"bag,\" in place of the FIFO queue usually employed in serial breadth-first search algorithms. For a variety of benchmark input graphs whose diameters are significantly smaller than the number of vertices -- a condition met by many real-world graphs -- PBFS demonstrates good speedup with the number of processing cores.\n Since PBFS employs a nonconstant-time \"reducer\" -- \"hyperobject\" feature of Cilk++ -- the work inherent in a PBFS execution depends nondeterministically on how the underlying work-stealing scheduler load-balances the computation. We provide a general method for analyzing nondeterministic programs that use reducers. PBFS also is nondeterministic in that it contains benign races which affect its performance but not its correctness. Fixing these races with mutual-exclusion locks slows down PBFS empirically, but it makes the algorithm amenable to analysis. In particular, we show that for a graph G=(V,E) with diameter D and bounded out-degree, this data-race-free version of PBFS algorithm runs it time O((V+E)/P + Dlg3(V/D)) on P processors, which means that it attains near-perfect linear speedup if P << (V+E)/Dlg3(V/D).",
"title": ""
}
] | scidocsrr |
097ea71b3e7607aaffe383426ecdcfc4 | Two axes orthogonal drive transmission for omnidirectional crawler with surface contact | [
{
"docid": "ac644a44b1e8cfe99e49461d37ff74e6",
"text": "Holonomic omnidirectional mobile robots are useful because of their high level of mobility in narrow or crowded areas, and omnidirectional robots equipped with normal tires are desired for their ability to surmount difference in level as well as their vibration suppression and ride comfort. A caster-drive mechanism using normal tires has been developed to realize a holonomic omnidiredctional robot, but some problems has remain. Here we describe effective systems to control the caster-drive wheels of an omnidirectional mobile robot. We propose a Differential-Drive Steering System (DDSS) using differential gearing to improve the operation ratio of motors. The DDSS generates driving and steering torque effectively from two motors. Simulation and experimental results show that the proposed system is effective for holonomic omnidirectional mobile robots.",
"title": ""
},
{
"docid": "9b646ef8c6054f9a4d85cf25e83d415c",
"text": "In this paper, a mobile robot with a tetrahedral shape for its basic structure is presented as a thrown robot for search and rescue robot application. The Tetrahedral Mobile Robot has its body in the center of the whole structure. The driving parts that produce the propelling force are located at each corner. As a driving wheel mechanism, we have developed the \"Omni-Ball\" with one active and two passive rotational axes, which are explained in detail. An actual prototype model has been developed to illustrate the concept and to perform preliminary motion experiments, through which the basic performance of the Tetrahedral Mobile Robot was confirmed",
"title": ""
}
] | [
{
"docid": "0213b953415a2aa9bab63f9c210c3dcf",
"text": "Purpose – The purpose of this paper is to distinguish and describe knowledge management (KM) technologies according to their support for strategy. Design/methodology/approach – This study employed an ontology development method to describe the relations between technology, KM and strategy, and to categorize available KM technologies according to those relations. Ontologies are formal specifications of concepts in a domain and their inter-relationships, and can be used to facilitate common understanding and knowledge sharing. The study focused particularly on two sub-domains of the KM field: KM strategies and KM technologies. Findings – ’’KM strategy’’ has three meanings in the literature: approach to KM, knowledge strategy, and KM implementation strategy. Also, KM technologies support strategy via KM initiatives based on particular knowledge strategies and approaches to KM. The study distinguishes three types of KM technologies: component technologies, KM applications, and business applications. They all can be described in terms of ’’creation’’ and ’’transfer’’ knowledge strategies, and ’’personalization’’ and ’’codification’’ approaches to KM. Research limitations/implications – The resulting framework suggests that KM technologies can be analyzed better in the context of KM initiatives, instead of the usual approach associating them with knowledge processes. KM initiatives provide the background and contextual elements necessary to explain technology adoption and use. Practical implications – The framework indicates three alternative modes for organizational adoption of KM technologies: custom development of KM systems from available component technologies; purchase of KM-specific applications; or purchase of business-driven applications that embed KM functionality. It also lists adequate technologies and provides criteria for selection in any of the cases. Originality/value – Among the many studies analyzing the role of technology in KM, an association with strategy has been missing. This paper contributes to filling this gap, integrating diverse contributions via a clearer definition of concepts and a visual representation of their relationships. This use of ontologies as a method, instead of an artifact, is also uncommon in the literature.",
"title": ""
},
{
"docid": "7e40c98b9760e1f47a0140afae567b7f",
"text": "Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.",
"title": ""
},
{
"docid": "e58036f93195603cb7dc7265b9adeb25",
"text": "Pseudomonas aeruginosa thrives in many aqueous environments and is an opportunistic pathogen that can cause both acute and chronic infections. Environmental conditions and host defenses cause differing stresses on the bacteria, and to survive in vastly different environments, P. aeruginosa must be able to adapt to its surroundings. One strategy for bacterial adaptation is to self-encapsulate with matrix material, primarily composed of secreted extracellular polysaccharides. P. aeruginosa has the genetic capacity to produce at least three secreted polysaccharides; alginate, Psl, and Pel. These polysaccharides differ in chemical structure and in their biosynthetic mechanisms. Since alginate is often associated with chronic pulmonary infections, its biosynthetic pathway is the best characterized. However, alginate is only produced by a subset of P. aeruginosa strains. Most environmental and other clinical isolates secrete either Pel or Psl. Little information is available on the biosynthesis of these polysaccharides. Here, we review the literature on the alginate biosynthetic pathway, with emphasis on recent findings describing the structure of alginate biosynthetic proteins. This information combined with the characterization of the domain architecture of proteins encoded on the Psl and Pel operons allowed us to make predictive models for the biosynthesis of these two polysaccharides. The results indicate that alginate and Pel share certain features, including some biosynthetic proteins with structurally or functionally similar properties. In contrast, Psl biosynthesis resembles the EPS/CPS capsular biosynthesis pathway of Escherichia coli, where the Psl pentameric subunits are assembled in association with an isoprenoid lipid carrier. These models and the environmental cues that cause the cells to produce predominantly one polysaccharide over the others are subjects of current investigation.",
"title": ""
},
{
"docid": "518b96236ffa2ce0413a0e01d280937a",
"text": "In this paper, we propose a low-rank representation with symmetric constraint (LRRSC) method for robust subspace clustering. Given a collection of data points approximately drawn from multiple subspaces, the proposed technique can simultaneously recover the dimension and members of each subspace. LRRSC extends the original low-rank representation algorithm by integrating a symmetric constraint into the low-rankness property of high-dimensional data representation. The symmetric low-rank representation, which preserves the subspace structures of high-dimensional data, guarantees weight consistency for each pair of data points so that highly correlated data points of subspaces are represented together. Moreover, it can be efficiently calculated by solving a convex optimization problem. We provide a rigorous proof for minimizing the nuclear-norm regularized least square problem with a symmetric constraint. The affinity matrix for spectral clustering can be obtained by further exploiting the angular information of the principal directions of the symmetric low-rank representation. This is a critical step towards evaluating the memberships between data points. Experimental results on benchmark databases demonstrate the effectiveness and robustness of LRRSC compared with several state-of-the-art subspace clustering algorithms.",
"title": ""
},
{
"docid": "d4678cdbc3963b44a905947be836d53d",
"text": "A multimodal network encodes relationships between the same set of nodes in multiple settings, and network alignment is a powerful tool for transferring information and insight between a pair of networks. We propose a method for multimodal network alignment that computes a matrix which indicates the alignment, but produces the result as a low-rank factorization directly. We then propose new methods to compute approximate maximum weight matchings of low-rank matrices to produce an alignment. We evaluate our approach by applying it on synthetic networks and use it to de-anonymize a multimodal transportation network.",
"title": ""
},
{
"docid": "ed39d4d541eb261e41a4f000347b954b",
"text": "In metazoans, gamma-tubulin acts within two main complexes, gamma-tubulin small complexes (gamma-TuSCs) and gamma-tubulin ring complexes (gamma-TuRCs). In higher eukaryotes, it is assumed that microtubule nucleation at the centrosome depends on gamma-TuRCs, but the role of gamma-TuRC components remains undefined. For the first time, we analyzed the function of all four gamma-TuRC-specific subunits in Drosophila melanogaster: Dgrip75, Dgrip128, Dgrip163, and Dgp71WD. Grip-motif proteins, but not Dgp71WD, appear to be required for gamma-TuRC assembly. Individual depletion of gamma-TuRC components, in cultured cells and in vivo, induces mitotic delay and abnormal spindles. Surprisingly, gamma-TuSCs are recruited to the centrosomes. These defects are less severe than those resulting from the inhibition of gamma-TuSC components and do not appear critical for viability. Simultaneous cosilencing of all gamma-TuRC proteins leads to stronger phenotypes and partial recruitment of gamma-TuSC. In conclusion, gamma-TuRCs are required for assembly of fully functional spindles, but we suggest that gamma-TuSC could be targeted to the centrosomes, which is where basic microtubule assembly activities are maintained.",
"title": ""
},
{
"docid": "eb0672f019c82dfe0614b39d3e89be2e",
"text": "The support of medical decisions comes from several sources. These include individual physician experience, pathophysiological constructs, pivotal clinical trials, qualitative reviews of the literature, and, increasingly, meta-analyses. Historically, the first of these four sources of knowledge largely informed medical and dental decision makers. Meta-analysis came on the scene around the 1970s and has received much attention. What is meta-analysis? It is the process of combining the quantitative results of separate (but similar) studies by means of formal statistical methods. Statistically, the purpose is to increase the precision with which the treatment effect of an intervention can be estimated. Stated in another way, one can say that meta-analysis combines the results of several studies with the purpose of addressing a set of related research hypotheses. The underlying studies can come in the form of published literature, raw data from individual clinical studies, or summary statistics in reports or abstracts. More broadly, a meta-analysis arises from a systematic review. There are three major components to a systematic review and meta-analysis. The systematic review starts with the formulation of the research question and hypotheses. Clinical or substantive insight about the particular domain of research often identifies not only the unmet investigative needs, but helps prepare for the systematic review by defining the necessary initial parameters. These include the hypotheses, endpoints, important covariates, and exposures or treatments of interest. Like any basic or clinical research endeavor, a prospectively defined and clear study plan enhances the expected utility and applicability of the final results for ultimately influencing practice or policy. After this foundational preparation, the second component, a systematic review, commences. The systematic review proceeds with an explicit and reproducible protocol to locate and evaluate the available data. The collection, abstraction, and compilation of the data follow a more rigorous and prospectively defined objective process. The definitions, structure, and methodologies of the underlying studies must be critically appraised. Hence, both “the content” and “the infrastructure” of the underlying data are analyzed, evaluated, and systematically recorded. Unlike an informal review of the literature, this systematic disciplined approach is intended to reduce the potential for subjectivity or bias in the subsequent findings. Typically, a literature search of an online database is the starting point for gathering the data. The most common sources are MEDLINE (United States Library of Overview, Strengths, and Limitations of Systematic Reviews and Meta-Analyses",
"title": ""
},
{
"docid": "4f1949af3455bd5741e731a9a60ecdf1",
"text": "BACKGROUND\nGuava leaf tea (GLT), exhibiting a diversity of medicinal bioactivities, has become a popularly consumed daily beverage. To improve the product quality, a new process was recommended to the Ser-Tou Farmers' Association (SFA), who began field production in 2005. The new process comprised simplified steps: one bud-two leaves were plucked at 3:00-6:00 am, in the early dawn period, followed by withering at ambient temperature (25-28 °C), rolling at 50 °C for 50-70 min, with or without fermentation, then drying at 45-50 °C for 70-90 min, and finally sorted.\n\n\nRESULTS\nThe product manufactured by this new process (named herein GLTSF) exhibited higher contents (in mg g(-1), based on dry ethyl acetate fraction/methanolic extract) of polyphenolics (417.9 ± 12.3) and flavonoids (452.5 ± 32.3) containing a compositional profile much simpler than previously found: total quercetins (190.3 ± 9.1), total myricetin (3.3 ± 0.9), total catechins (36.4 ± 5.3), gallic acid (8.8 ± 0.6), ellagic acid (39.1 ± 6.4) and tannins (2.5 ± 9.1).\n\n\nCONCLUSION\nWe have successfully developed a new process for manufacturing GLTSF with a unique polyphenolic profile. Such characteristic compositional distribution can be ascribed to the right harvesting hour in the early dawn and appropriate treatment process at low temperature, avoiding direct sunlight.",
"title": ""
},
{
"docid": "83b50f380f500bf6e140b3178431f0c6",
"text": "Leader election protocols are a fundamental building block for replicated distributed services. They ease the design of leader-based coordination protocols that tolerate failures. In partially synchronous systems, designing a leader election algorithm, that does not permit multiple leaders while the system is unstable, is a complex task. As a result many production systems use third-party distributed coordination services, such as ZooKeeper and Chubby, to provide a reliable leader election service. However, adding a third-party service such as ZooKeeper to a distributed system incurs additional operational costs and complexity. ZooKeeper instances must be kept running on at least three machines to ensure its high availability. In this paper, we present a novel leader election protocol using NewSQL databases for partially synchronous systems, that ensures at most one leader at any given time. The leader election protocol uses the database as distributed shared memory. Our work enables distributed systems that already use NewSQL databases to save the operational overhead of managing an additional third-party service for leader election. Our main contribution is the design, implementation and validation of a practical leader election algorithm, based on NewSQL databases, that has performance comparable to a leader election implementation using a state-of-the-art distributed coordination service, ZooKeeper.",
"title": ""
},
{
"docid": "509fa5630ed7e3e7bd914fb474da5071",
"text": "Languages with rich type systems are beginning to employ a blend of type inference and type checking, so that the type inference engine is guided by programmer-supplied type annotations. In this paper we show, for the first time, how to combine the virtues of two well-established ideas: unification-based inference, and bidi-rectional propagation of type annotations. The result is a type system that conservatively extends Hindley-Milner, and yet supports both higher-rank types and impredicativity.",
"title": ""
},
{
"docid": "5fc6b0e151762560c8f09d0fe6983ca2",
"text": "The increasing popularity of wearable devices that continuously capture video, and the prevalence of third-party applications that utilize these feeds have resulted in a new threat to privacy. In many situations, sensitive objects/regions are maliciously (or accidentally) captured in a video frame by third-party applications. However, current solutions do not allow users to specify and enforce fine grained access control over video feeds.\n In this paper, we describe MarkIt, a computer vision based privacy marker framework, that allows users to specify and enforce fine grained access control over video feeds. We present two example privacy marker systems -- PrivateEye and WaveOff. We conclude with a discussion of the computer vision, privacy and systems challenges in building a comprehensive system for fine grained access control over video feeds.",
"title": ""
},
{
"docid": "98f814584c555baa05a1292e7e14f45a",
"text": "This paper presents two types of dual band (2.4 and 5.8 GHz) wearable planar dipole antennas, one printed on a conventional substrate and the other on a two-dimensional metamaterial surface (Electromagnetic Bandgap (EBG) structure). The operation of both antennas is investigated and compared under different bending conditions (in E and H-planes) around human arm and leg of different radii. A dual band, Electromagnetic Band Gap (EBG) structure on a wearable substrate is used as a high impedance surface to control the Specific Absorption Rate (SAR) as well as to improve the antenna gain up to 4.45 dBi. The EBG inspired antenna has reduced the SAR effects on human body to a safe level (< 2W/Kg). I.e. the SAR is reduced by 83.3% for lower band and 92.8% for higher band as compared to the conventional antenna. The proposed antenna can be used for wearable applications with least health hazard to human body in Industrial, Scientific and Medical (ISM) band (2.4 GHz, 5.2 GHz) applications. The antennas on human body are simulated and analyzed in CST Microwave Studio (CST MWS).",
"title": ""
},
{
"docid": "81aa60b514bb11efb9e137b8d13b92e8",
"text": "Linguistic creativity is a marriage of form and content in which each works together to convey our meanings with concision, resonance and wit. Though form clearly influences and shapes our content, the most deft formal trickery cannot compensate for a lack of real insight. Before computers can be truly creative with language, we must first imbue them with the ability to formulate meanings that are worthy of creative expression. This is especially true of computer-generated poetry. If readers are to recognize a poetic turn-of-phrase as more than a superficial manipulation of words, they must perceive and connect with the meanings and the intent behind the words. So it is not enough for a computer to merely generate poem-shaped texts; poems must be driven by conceits that build an affective worldview. This paper describes a conceit-driven approach to computational poetry, in which metaphors and blends are generated for a given topic and affective slant. Subtle inferences drawn from these metaphors and blends can then drive the process of poetry generation. In the same vein, we consider the problem of generating witty insights from the banal truisms of common-sense knowledge bases. Ode to a Keatsian Turn Poetic licence is much more than a licence to frill. Indeed, it is not so much a licence as a contract, one that allows a speaker to subvert the norms of both language and nature in exchange for communicating real insights about some relevant state of affairs. Of course, poetry has norms and conventions of its own, and these lend poems a range of recognizably “poetic” formal characteristics. When used effectively, formal devices such as alliteration, rhyme and cadence can mold our meanings into resonant and incisive forms. However, even the most poetic devices are just empty frills when used only to disguise the absence of real insight. Computer models of poem generation must model more than the frills of poetry, and must instead make these formal devices serve the larger goal of meaning creation. Nonetheless, is often said that we “eat with our eyes”, so that the stylish presentation of food can subtly influence our sense of taste. So it is with poetry: a pleasing form can do more than enhance our recall and comprehension of a meaning – it can also suggest a lasting and profound truth. Experiments by McGlone & Tofighbakhsh (1999, 2000) lend empirical support to this so-called Keats heuristic, the intuitive belief – named for Keats’ memorable line “Beauty is truth, truth beauty” – that a meaning which is rendered in an aesthetically-pleasing form is much more likely to be perceived as truthful than if it is rendered in a less poetic form. McGlone & Tofighbakhsh demonstrated this effect by searching a book of proverbs for uncommon aphorisms with internal rhyme – such as “woes unite foes” – and by using synonym substitution to generate non-rhyming (and thus less poetic) variants such as “troubles unite enemies”. While no significant differences were observed in subjects’ ease of comprehension for rhyming/non-rhyming forms, subjects did show a marked tendency to view the rhyming variants as more truthful expressions of the human condition than the corresponding non-rhyming forms. So a well-polished poetic form can lend even a modestly interesting observation the lustre of a profound insight. An automated approach to poetry generation can exploit this symbiosis of form and content in a number of useful ways. 
It might harvest interesting perspectives on a given topic from a text corpus, or it might search its stores of commonsense knowledge for modest insights to render in immodest poetic forms. We describe here a system that combines both of these approaches for meaningful poetry generation. As shown in the sections to follow, this system – named Stereotrope – uses corpus analysis to generate affective metaphors for a topic on which it is asked to wax poetic. Stereotrope can be asked to view a topic from a particular affective stance (e.g., view love negatively) or to elaborate on a familiar metaphor (e.g. love is a prison). In doing so, Stereotrope takes account of the feelings that different metaphors are likely to engender in an audience. These metaphors are further integrated to yield tight conceptual blends, which may in turn highlight emergent nuances of a viewpoint that are worthy of poetic expression (see Lakoff and Turner, 1989). Stereotrope uses a knowledge-base of conceptual norms to anchor its understanding of these metaphors and blends. While these norms are the stuff of banal clichés and stereotypes, such as that dogs chase cats and cops eat donuts. we also show how Stereotrope finds and exploits corpus evidence to recast these banalities as witty, incisive and poetic insights. Mutual Knowledge: Norms and Stereotypes Samuel Johnson opined that “Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it.” Traditional approaches to the modelling of metaphor and other figurative devices have typically sought to imbue computers with the former (Fass, 1997). More recently, however, the latter kind has gained traction, with the use of the Web and text corpora to source large amounts of shallow knowledge as it is needed (e.g., Veale & Hao 2007a,b; Shutova 2010; Veale & Li, 2011). But the kind of knowledge demanded by knowledgehungry phenomena such as metaphor and blending is very different to the specialist “book” knowledge so beloved of Johnson. These demand knowledge of the quotidian world that we all tacitly share but rarely articulate in words, not even in the thoughtful definitions of Johnson’s dictionary. Similes open a rare window onto our shared expectations of the world. Thus, the as-as-similes “as hot as an oven”, “as dry as sand” and “as tough as leather” illuminate the expected properties of these objects, while the like-similes “crying like a baby”, “singing like an angel” and “swearing like a sailor” reflect intuitons of how these familiar entities are tacitly expected to behave. Veale & Hao (2007a,b) thus harvest large numbers of as-as-similes from the Web to build a rich stereotypical model of familiar ideas and their salient properties, while Özbal & Stock (2012) apply a similar approach on a smaller scale using Google’s query completion service. Fishelov (1992) argues convincingly that poetic and non-poetic similes are crafted from the same words and ideas. Poetic conceits use familiar ideas in non-obvious combinations, often with the aim of creating semantic tension. The simile-based model used here thus harvests almost 10,000 familiar stereotypes (drawing on a range of ~8,000 features) from both as-as and like-similes. Poems construct affective conceits, but as shown in Veale (2012b), the features of a stereotype can be affectively partitioned as needed into distinct pleasant and unpleasant perspectives. 
We are thus confident that a stereotype-based model of common-sense knowledge is equal to the task of generating and elaborating affective conceits for a poem. A stereotype-based model of common-sense knowledge requires both features and relations, with the latter showing how stereotypes relate to each other. It is not enough then to know that cops are tough and gritty, or that donuts are sweet and soft; our stereotypes of each should include the cliché that cops eat donuts, just as dogs chew bones and cats cough up furballs. Following Veale & Li (2011), we acquire inter-stereotype relationships from the Web, not by mining similes but by mining questions. As in Özbal & Stock (2012), we target query completions from a popular search service (Google), which offers a smaller, public proxy for a larger, zealously-guarded search query log. We harvest questions of the form “Why do Xs <relation> Ys”, and assume that since each relationship is presupposed by the question (so “why do bikers wear leathers” presupposes that everyone knows that bikers wear leathers), the triple of subject/relation/object captures a widely-held norm. In this way we harvest over 40,000 such norms from the Web. Generating Metaphors, N-Gram Style! The Google n-grams (Brants & Franz, 2006) is a rich source of popular metaphors of the form Target is Source, such as “politicians are crooks”, “Apple is a cult”, “racism is a disease” and “Steve Jobs is a god”. Let src(T) denote the set of stereotypes that are commonly used to describe a topic T, where commonality is defined as the presence of the corresponding metaphor in the Google n-grams. To find metaphors for proper-named entities, we also analyse n-grams of the form stereotype First [Middle] Last, such as “tyrant Adolf Hitler” and “boss Bill Gates”. Thus, e.g.: src(racism) = {problem, disease, joke, sin, poison, crime, ideology, weapon} src(Hitler) = {monster, criminal, tyrant, idiot, madman, vegetarian, racist, ...} Let typical(T) denote the set of properties and behaviors harvested for T from Web similes (see previous section), and let srcTypical(T) denote the aggregate set of properties and behaviors ascribable to T via the metaphors in src(T): (1) srcTypical(T) = ⋃_{M ∈ src(T)} typical(M) We can generate conceits for a topic T by considering not just obvious metaphors for T, but metaphors of metaphors: (2) conceits(T) = src(T) ∪ ⋃_{M ∈ src(T)} src(M) The features evoked by the conceit T as M are given by: (3) salient(T,M) = [srcTypical(T) ∪ typical(T)]",
"title": ""
},
{
"docid": "7000ea96562204dfe2c0c23f7cdb6544",
"text": "In this paper, the dynamic modeling of a doubly-fed induction generator-based wind turbine connected to infinite bus (SMIB) system, is carried out in detail. In most of the analysis, the DFIG stator transients and network transients are neglected. In this paper the interfacing problems while considering stator transients and network transients in the modeling of SMIB system are resolved by connecting a resistor across the DFIG terminals. The effect of simplification of shaft system on the controller gains is also discussed. In addition, case studies are presented to demonstrate the effect of mechanical parameters and controller gains on system stability when accounting the two-mass shaft model for the drive train system.",
"title": ""
},
{
"docid": "97f3ac1c69b518436c908ffecfffbd18",
"text": "The study presented in this paper examines the fit of total quality management (TQM) practices in mediating the relationship between organization strategy and organization performance. By examining TQM in relation to organization strategy, the study seeks to advance the understanding of TQM in a broader context. It also resolves some controversies that appear in the literature concerning the relationship between TQM and differentiation and cost leadership strategies as well as quality and innovation performance. The empirical data for this study was drawn from a survey of 194 middle/senior managers from Australian firms. The analysis was conducted using structural equation modeling (SEM) technique by examining two competing models that represent full and partial mediation. The findings indicate that TQM is positively and significantly related to differentiation strategy, and it only partially mediates the relationship between differentiation strategy and three performance measures (product quality, product innovation, and process innovation). The implication is that TQM needs to be complemented by other resources to more effectively realize the strategy in achieving a high level of performance, particularly innovation. 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "8c46f24d8e710c5fb4e25be76fc5b060",
"text": "This paper presents the novel design of a wideband circularly polarized (CP) Radio Frequency Identification (RFID) reader microstrip patch antenna for worldwide Ultra High Frequency (UHF) band which covers 840–960 MHz. The proposed antenna, which consists of a microstrip patch with truncated corners and a cross slot, is placed on a foam substrate (εr = 1.06) above a ground plane and is fed through vias through ground plane holes that extend from the quadrature 3 dB branch line hybrid coupler placed below the ground plane. This helps to separate feed network radiation, from the patch antenna and keeping the CP purity. The prototype antenna was fabricated with a total size of 225 × 250 × 12.8 mm3 which shows a measured impedance matching band of 840–1150MHz (31.2%) as well as measured rotating linear based circularly polarized radiation patterns. The simulated and measured 3 dB Axial Ratio (AR) bandwidth is better than 23% from 840–1050 MHz meeting and exceeding the target worldwide RFID UHF band.",
"title": ""
},
{
"docid": "f1c5f6f2bdff251e91df1dbd1e2302b2",
"text": "In this paper, mathematical models for permutation flow shop scheduling and job shop scheduling problems are proposed. The first problem is based on a mixed integer programming model. As the problem is NP-complete, this model can only be used for smaller instances where an optimal solution can be computed. For large instances, another model is proposed which is suitable for solving the problem by stochastic heuristic methods. For the job shop scheduling problem, a mathematical model and its main representation schemes are presented. Keywords—Flow shop, job shop, mixed integer model, representation scheme.",
"title": ""
},
{
"docid": "8f1d7499280f94b92044822c1dd4e59d",
"text": "WORK-LIFE BALANCE means bringing work, whether done on the job or at home, and leisure time into balance to live life to its fullest. It doesn’t mean that you spend half of your life working and half of it playing; instead, it means balancing the two to achieve harmony in physical, emotional, and spiritual health. In today’s economy, can nurses achieve work-life balance? Although doing so may be difficult, the consequences to our health can be enormous if we don’t try. This article describes some of the stresses faced by nurses and tips for attaining a healthy balance of work and leisure.",
"title": ""
},
{
"docid": "a144b5969c30808f0314218248c48ed6",
"text": "A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for marginal log-likelihood fits observed data and latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence of joint density functions from (i) and (ii), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets.",
"title": ""
},
{
"docid": "de5fd8ae40a2d078101d5bb1859f689b",
"text": "The number and variety of mobile multicast applications are growing at an unprecedented and unanticipated pace. Mobile network providers are in front of a dramatic increase in multicast traffic load, and this growth is forecasted to continue in fifth-generation (5G) networks. The major challenges come from the fact that multicast traffic not only targets groups of end-user devices; it also involves machine-type communications (MTC) for the Internet of Things (IoT). The increase in the MTC load, predicted for 5G, calls into question the effectiveness of the current multimedia broadcast multicast service (MBMS). The aim of this paper is to provide a survey of 5G challenges in the view of effective management of multicast applications, and to identify how to enhance the mobile network architecture to enable multicast applications in future 5G scenarios. By accounting for the presence of both human and machine-related traffic, strengths and weaknesses of the state-of-the-art achievements in multicasting are critically analyzed to provide guidelines for future research on 5G networks and more conscious design choices.",
"title": ""
}
] | scidocsrr |
60c02cef732b387703ca70aac707a40e | Pedestrian Detection: An Evaluation of the State of the Art | [
{
"docid": "13d94a3afd97c4c5f8839652c58ab05f",
"text": "We present an approach for learning to detect objects in still gray images, that is based on a sparse, part-based representation of objects. A vocabulary of information-rich object parts is automatically constructed from a set of sample images of the object class of interest. Images are then represented using parts from this vocabulary, along with spatial relations observed among them. Based on this representation, a feature-efficient learning algorithm is used to learn to detect instances of the object class. The framework developed can be applied to any object with distinguishable parts in a relatively fixed spatial configuration. We report experiments on images of side views of cars. Our experiments show that the method achieves high detection accuracy on a difficult test set of real-world images, and is highly robust to partial occlusion and background variation. In addition, we discuss and offer solutions to several methodological issues that are significant for the research community to be able to evaluate object detection",
"title": ""
},
{
"docid": "1589e72380265787a10288c5ad906670",
"text": "The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images.",
"title": ""
},
{
"docid": "359b6308a6e6e3d6857cb6b4f59fd1bc",
"text": "Significant research has been devoted to detecting people in images and videos. In this paper we describe a human detection method that augments widely used edge-based features with texture and color information, providing us with a much richer descriptor set. This augmentation results in an extremely high-dimensional feature space (more than 170,000 dimensions). In such high-dimensional spaces, classical machine learning algorithms such as SVMs are nearly intractable with respect to training. Furthermore, the number of training samples is much smaller than the dimensionality of the feature space, by at least an order of magnitude. Finally, the extraction of features from a densely sampled grid structure leads to a high degree of multicollinearity. To circumvent these data characteristics, we employ Partial Least Squares (PLS) analysis, an efficient dimensionality reduction technique, one which preserves significant discriminative information, to project the data onto a much lower dimensional subspace (20 dimensions, reduced from the original 170,000). Our human detection system, employing PLS analysis over the enriched descriptor set, is shown to outperform state-of-the-art techniques on three varied datasets including the popular INRIA pedestrian dataset, the low-resolution gray-scale DaimlerChrysler pedestrian dataset, and the ETHZ pedestrian dataset consisting of full-length videos of crowded scenes.",
"title": ""
},
{
"docid": "72bbc123119afa92f652d0a5332671e9",
"text": "Both detection and tracking people are challenging problems, especially in complex real world scenes that commonly involve multiple people, complicated occlusions, and cluttered or even moving backgrounds. People detectors have been shown to be able to locate pedestrians even in complex street scenes, but false positives have remained frequent. The identification of particular individuals has remained challenging as well. Tracking methods are able to find a particular individual in image sequences, but are severely challenged by real-world scenarios such as crowded street scenes. In this paper, we combine the advantages of both detection and tracking in a single framework. The approximate articulation of each person is detected in every frame based on local features that model the appearance of individual body parts. Prior knowledge on possible articulations and temporal coherency within a walking cycle are modeled using a hierarchical Gaussian process latent variable model (hGPLVM). We show how the combination of these results improves hypotheses for position and articulation of each person in several subsequent frames. We present experimental results that demonstrate how this allows to detect and track multiple people in cluttered scenes with reoccurring occlusions.",
"title": ""
}
] | [
{
"docid": "ed15e2118e219cf699c38100a0d124c3",
"text": "Is Facebook becoming a place where people mistakenly think they can literally get away with murder? In a 2011 Facebook murder-for-hire case in Philadelphia, PA, a 19-yearold mother offered $1,000 on Facebook to kill her 22-year-old boyfriend, the father of her 2-year-old daughter. The boyfriend was killed while the only two suspects responding to the mother’s post were in custody, so there is speculation that the murder was drug related. The mother pleaded guilty to conspiracy to commit murder, and was immediately paroled on a 3to 23-month sentence. Other ‘‘Facebook murder’’ perpetrators are being brought to justice, one way or another:",
"title": ""
},
{
"docid": "b140f08d25d5c37c4fa8743333664af2",
"text": " Random walks on an association graph using candidate matches as nodes. Rank candidate matches by stationary distribution Personalized jump for enforcing the matching constraints during the random walks process Matching constraints satisfying reweighting vector is calculated iteratively by inflation and bistochastic normalization Due to object motion or viewpoint change, relationships between two nodes are not exactly same Outlier Noise Deformation Noise",
"title": ""
},
{
"docid": "ca7e7fa988bf2ed1635e957ea6cd810d",
"text": "Knowledge graph (KG) is known to be helpful for the task of question answering (QA), since it provides well-structured relational information between entities, and allows one to further infer indirect facts. However, it is challenging to build QA systems which can learn to reason over knowledge graphs based on question-answer pairs alone. First, when people ask questions, their expressions are noisy (for example, typos in texts, or variations in pronunciations), which is non-trivial for the QA system to match those mentioned entities to the knowledge graph. Second, many questions require multi-hop logic reasoning over the knowledge graph to retrieve the answers. To address these challenges, we propose a novel and unified deep learning architecture, and an end-to-end variational learning algorithm which can handle noise in questions, and learn multi-hop reasoning simultaneously. Our method achieves state-of-the-art performance on a recent benchmark dataset in the literature. We also derive a series of new benchmark datasets, including questions for multi-hop reasoning, questions paraphrased by neural translation model, and questions in human voice. Our method yields very promising results on all these challenging datasets.",
"title": ""
},
{
"docid": "c34b6fac632c05c73daee2f0abce3ae8",
"text": "OBJECTIVES\nUnilateral strength training produces an increase in strength of the contralateral homologous muscle group. This process of strength transfer, known as cross education, is generally attributed to neural adaptations. It has been suggested that unilateral strength training of the free limb may assist in maintaining the functional capacity of an immobilised limb via cross education of strength, potentially enhancing recovery outcomes following injury. Therefore, the purpose of this review is to examine the impact of immobilisation, the mechanisms that may contribute to cross education, and possible implications for the application of unilateral training to maintain strength during immobilisation.\n\n\nDESIGN\nCritical review of literature.\n\n\nMETHODS\nSearch of online databases.\n\n\nRESULTS\nImmobilisation is well known for its detrimental effects on muscular function. Early reductions in strength outweigh atrophy, suggesting a neural contribution to strength loss, however direct evidence for the role of the central nervous system in this process is limited. Similarly, the precise neural mechanisms responsible for cross education strength transfer remain somewhat unknown. Two recent studies demonstrated that unilateral training of the free limb successfully maintained strength in the contralateral immobilised limb, although the role of the nervous system in this process was not quantified.\n\n\nCONCLUSIONS\nCross education provides a unique opportunity for enhancing rehabilitation following injury. By gaining an understanding of the neural adaptations occurring during immobilisation and cross education, future research can utilise the application of unilateral training in clinical musculoskeletal injury rehabilitation.",
"title": ""
},
{
"docid": "9b547f43a345d2acc3a75c80a8b2f064",
"text": "A risk-metric framework that supports Enterprise Risk Management is described. At the heart of the framework is the notion of a risk profile that provides risk measurement for risk elements. By providing a generic template in which metrics can be codified in terms of metric space operators, risk profiles can be used to construct a variety of risk measures for different business contexts. These measures can vary from conventional economic risk calculations to the kinds of metrics that are used by decision support systems, such as those supporting inexact reasoning and which are considered to closely match how humans combine information.",
"title": ""
},
{
"docid": "6ddf8cc094a38ebe47d51303f4792dc6",
"text": "The symmetric travelling salesman problem is a real world combinatorial optimization problem and a well researched domain. When solving combinatorial optimization problems such as the travelling salesman problem a low-level construction heuristic is usually used to create an initial solution, rather than randomly creating a solution, which is further optimized using techniques such as tabu search, simulated annealing and genetic algorithms, amongst others. These heuristics are usually manually derived by humans and this is a time consuming process requiring many man hours. The research presented in this paper forms part of a larger initiative aimed at automating the process of deriving construction heuristics for combinatorial optimization problems.\n The study investigates genetic programming to induce low-level construction heuristics for the symmetric travelling salesman problem. While this has been examined for other combinatorial optimization problems, to the authors' knowledge this is the first attempt at evolving low-level construction heuristics for the travelling salesman problem. In this study a generational genetic programming algorithm randomly creates an initial population of low-level construction heuristics which is iteratively refined over a set number of generations by the processes of fitness evaluation, selection of parents and application of genetic operators.\n The approach is tested on 23 problem instances, of varying problem characteristics, from the TSPLIB and VLSI benchmark sets. The evolved heuristics were found to perform better than the human derived heuristic, namely, the nearest neighbourhood heuristic, generally used to create initial solutions for the travelling salesman problem.",
"title": ""
},
{
"docid": "d56855e068a4524fda44d93ac9763cab",
"text": "greatest cause of mortality from cardiovascular disease, after myocardial infarction and cerebrovascular stroke. From hospital epidemiological data it has been calculated that the incidence of PE in the USA is 1 per 1,000 annually. The real number is likely to be larger, since the condition goes unrecognised in many patients. Mortality due to PE has been estimated to exceed 15% in the first three months after diagnosis. PE is a dramatic and life-threatening complication of deep venous thrombosis (DVT). For this reason, the prevention, diagnosis and treatment of DVT is of special importance, since symptomatic PE occurs in 30% of those affected. If asymptomatic episodes are also included, it is estimated that 50-60% of DVT patients develop PE. DVT and PE are manifestations of the same entity, namely thromboembolic disease. If we extrapolate the epidemiological data from the USA to Greece, which has a population of about ten million, 20,000 new cases of thromboembolic disease may be expected annually. Of these patients, PE will occur in 10,000, of which 6,000 will have symptoms and 900 will die during the first trimester.",
"title": ""
},
{
"docid": "2caaff9258c6b7a429a8d1aa086b73e6",
"text": "Ahstract- For many people suffering from motor disabilities, assistive devices controlled with only brain activity are the only way to interact with their environment [1]. Natural tasks often require different kinds of interactions, involving different controllers the user should be able to select in a self-paced way. We developed a Brain-Computer Interface (BCI) allowing users to switch between four control modes in a self-paced way in real-time. Since the system is devised to be used in domestic environments in a user-friendly way, we selected non-invasive electroencephalographic (EEG) signals and convolutional neural networks (CNNs), known for their ability to find the optimal features in classification tasks. We tested our system using the Cybathlon BCI computer game, which embodies all the challenges inherent to real-time control. Our preliminary results show that an efficient architecture (SmallNet), with only one convolutional layer, can classify 4 mental activities chosen by the user. The BCI system is run and validated online. It is kept up-to-date through the use of newly collected signals along playing, reaching an online accuracy of 47.6% where most approaches only report results obtained offline. We found that models trained with data collected online better predicted the behaviour of the system in real-time. This suggests that similar (CNN based) offline classifying methods found in the literature might experience a drop in performance when applied online. Compared to our previous decoder of physiological signals relying on blinks, we increased by a factor 2 the amount of states among which the user can transit, bringing the opportunity for finer control of specific subtasks composing natural grasping in a self-paced way. Our results are comparable to those showed at the Cybathlon's BCI Race but further improvements on accuracy are required.",
"title": ""
},
{
"docid": "e5fe8cfe50499f0175cd503cdae6138e",
"text": "We aim to detect complex events in long Internet videos that may last for hours. A major challenge in this setting is that only a few shots in a long video are relevant to the event of interest while others are irrelevant or even misleading. Instead of indifferently pooling the shots, we first define a novel notion of semantic saliency that assesses the relevance of each shot with the event of interest. We then prioritize the shots according to their saliency scores since shots that are semantically more salient are expected to contribute more to the final event detector. Next, we propose a new isotonic regularizer that is able to exploit the semantic ordering information. The resulting nearly-isotonic SVM classifier exhibits higher discriminative power. Computationally, we develop an efficient implementation using the proximal gradient algorithm, and we prove new, closed-form proximal steps. We conduct extensive experiments on three real-world video datasets and confirm the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "5a6bfd63fbbe4aea72226c4aa30ac05d",
"text": "Submitted: 1 December 2015 Accepted: 6 April 2016 doi:10.1111/zsc.12190 Sotka, E.E., Bell, T., Hughes, L.E., Lowry, J.K. & Poore, A.G.B. (2016). A molecular phylogeny of marine amphipods in the herbivorous family Ampithoidae. —Zoologica Scripta, 00, 000–000. Ampithoid amphipods dominate invertebrate assemblages associated with shallow-water macroalgae and seagrasses worldwide and represent the most species-rich family of herbivorous amphipod known. To generate the first molecular phylogeny of this family, we sequenced 35 species from 10 genera at two mitochondrial genes [the cytochrome c oxidase subunit I (COI) and the large subunit of 16 s (LSU)] and two nuclear loci [sodium–potassium ATPase (NAK) and elongation factor 1-alpha (EF1)], for a total of 1453 base pairs. All 10 genera are embedded within an apparently monophyletic Ampithoidae (Amphitholina, Ampithoe, Biancolina, Cymadusa, Exampithoe, Paragrubia, Peramphithoe, Pleonexes, Plumithoe, Pseudoamphithoides and Sunamphitoe). Biancolina was previously placed within its own superfamily in another suborder. Within the family, single-locus trees were generally poor at resolving relationships among genera. Combined-locus trees were better at resolving deeper nodes, but complete resolution will require greater taxon sampling of ampithoids and closely related outgroup species, and more molecular characters. Despite these difficulties, our data generally support the monophyly of Ampithoidae, novel evolutionary relationships among genera, several currently accepted genera that will require revisions via alpha taxonomy and the presence of cryptic species. Corresponding author: Erik Sotka, Department of Biology and the College of Charleston Marine Laboratory, 205 Fort Johnson Road, Charleston, SC 29412, USA. E-mail: [email protected] Erik E. Sotka, and Tina Bell, Department of Biology and Grice Marine Laboratory, College of Charleston, 205 Fort Johnson Road, Charleston, SC 29412, USA. E-mails: [email protected], [email protected] Lauren E. Hughes, and James K. Lowry, Australian Museum Research Institute, 6 College Street, Sydney, NSW 2010, Australia. E-mails: [email protected], [email protected] Alistair G. B. Poore, Evolution & Ecology Research Centre, School of Biological, Earth and Environmental Sciences, University of New South Wales, Sydney, NSW 2052, Australia. E-mail: [email protected]",
"title": ""
},
{
"docid": "ec44e814277dd0d45a314c42ef417cbe",
"text": "INTRODUCTION Oxygen support therapy should be given to the patients with acute hypoxic respiratory insufficiency in order to provide oxygenation of the tissues until the underlying pathology improves. The inspiratory flow rate requirement of patients with respiratory insufficiency varies between 30 and 120 L/min. Low flow and high flow conventional oxygen support systems produce a maximum flow rate of 15 L/min, and FiO2 changes depending on the patient’s peak inspiratory flow rate, respiratory pattern, the mask that is used, or the characteristics of the cannula. The inability to provide adequate airflow leads to discomfort in tachypneic patients. With high-flow nasal oxygen (HFNO) cannulas, warmed and humidified air matching the body temperature can be regulated at flow rates of 5–60 L/min, and oxygen delivery varies between 21% and 100%. When HFNO, first used in infants, was reported to increase the risk of infection, its long-term use was stopped. This problem was later eliminated with the use of sterile water, and its use has become a current issue in critical adult patients as well. Studies show that HFNO treatment improves physiological parameters when compared to conventional oxygen systems. Although there are studies indicating successful applications in different patient groups, there are also studies indicating that it does not create any difference in clinical parameters, but patient comfort is better in HFNO when compared with standard oxygen therapy and noninvasive mechanical ventilation (NIMV) (1-6). In this compilation, the physiological effect mechanisms of HFNO treatment and its use in various clinical situations are discussed in the light of current studies.",
"title": ""
},
{
"docid": "fb2ab8efc11c371e7183eacaee707f71",
"text": "Direct current (DC) motors are controlled easily and have very high performance. The speed of the motors could be adjusted within a wide range. Today, classical control techniques (such as Proportional Integral Differential PID) are very commonly used for speed control purposes. However, it is observed that the classical control techniques do not have an adequate performance in the case of nonlinear systems. Thus, instead, a modern technique is preferred: fuzzy logic. In this paper the control system is modelled using MATLAB/Simulink. Using both PID controller and fuzzy logic techniques, the results are compared for different speed values.",
"title": ""
},
{
"docid": "78c3573511176ba63e2cf727e09c7eb4",
"text": "Human aesthetic preference in the visual domain is reviewed from definitional, methodological, empirical, and theoretical perspectives. Aesthetic science is distinguished from the perception of art and from philosophical treatments of aesthetics. The strengths and weaknesses of important behavioral techniques are presented and discussed, including two-alternative forced-choice, rank order, subjective rating, production/adjustment, indirect, and other tasks. Major findings are reviewed about preferences for colors (single colors, color combinations, and color harmony), spatial structure (low-level spatial properties, shape properties, and spatial composition within a frame), and individual differences in both color and spatial structure. Major theoretical accounts of aesthetic response are outlined and evaluated, including explanations in terms of mere exposure effects, arousal dynamics, categorical prototypes, ecological factors, perceptual and conceptual fluency, and the interaction of multiple components. The results of the review support the conclusion that aesthetic response can be studied rigorously and meaningfully within the framework of scientific psychology.",
"title": ""
},
{
"docid": "c1981c3b0ccd26d4c8f02c2aa5e71c7a",
"text": "Functional genomics studies have led to the discovery of a large amount of non-coding RNAs from the human genome; among them are long non-coding RNAs (lncRNAs). Emerging evidence indicates that lncRNAs could have a critical role in the regulation of cellular processes such as cell growth and apoptosis as well as cancer progression and metastasis. As master gene regulators, lncRNAs are capable of forming lncRNA–protein (ribonucleoprotein) complexes to regulate a large number of genes. For example, lincRNA-RoR suppresses p53 in response to DNA damage through interaction with heterogeneous nuclear ribonucleoprotein I (hnRNP I). The present study demonstrates that hnRNP I can also form a functional ribonucleoprotein complex with lncRNA urothelial carcinoma-associated 1 (UCA1) and increase the UCA1 stability. Of interest, the phosphorylated form of hnRNP I, predominantly in the cytoplasm, is responsible for the interaction with UCA1. Moreover, although hnRNP I enhances the translation of p27 (Kip1) through interaction with the 5′-untranslated region (5′-UTR) of p27 mRNAs, the interaction of UCA1 with hnRNP I suppresses the p27 protein level by competitive inhibition. In support of this finding, UCA1 has an oncogenic role in breast cancer both in vitro and in vivo. Finally, we show a negative correlation between p27 and UCA in the breast tumor cancer tissue microarray. Together, our results suggest an important role of UCA1 in breast cancer.",
"title": ""
},
{
"docid": "bcb6ef3082d50038b456af4b942e75eb",
"text": "Vertebral angioma is a common bone tumor. We report a case of L1 vertebral angioma revealed by type A3.2 traumatic pathological fracture of the same vertebra. Management comprised emergency percutaneous osteosynthesis and, after stabilization of the multiple trauma, arterial embolization and percutaneous kyphoplasty.",
"title": ""
},
{
"docid": "8a81d5a3a91fdd0d4e55a8ce477f279a",
"text": "Sex differences are prominent in mood and anxiety disorders and may provide a window into mechanisms of onset and maintenance of affective disturbances in both men and women. With the plethora of sex differences in brain structure, function, and stress responsivity, as well as differences in exposure to reproductive hormones, social expectations and experiences, the challenge is to understand which sex differences are relevant to affective illness. This review will focus on clinical aspects of sex differences in affective disorders including the emergence of sex differences across developmental stages and the impact of reproductive events. Biological, cultural, and experiential factors that may underlie sex differences in the phenomenology of mood and anxiety disorders are discussed.",
"title": ""
},
{
"docid": "f85b08a0e3f38c1471b3c7f05e8a17ba",
"text": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. A state tracking module is primarily meant to act as support for a dialog policy but it can also be used as support for dialog corpus summarization and other kinds of information extraction from transcription of dialogs. From a probabilistic view, this is achieved by maintaining a posterior distribution over hidden dialog states composed, in the simplest case, of a set of context dependent variables. Once a dialog policy is defined, deterministic or learnt, it is in charge of selecting an optimal dialog act given the estimated dialog state and a defined reward function. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset that has been converted for the occasion in order to fit the relaxed assumption of a machine reading formulation where the true state is only provided at the very end of each dialog instead of providing the state updates at the utterance level. We show that the proposed tracker gives encouraging results. Finally, we propose to extend the DSTC-2 dataset with specific reasoning capabilities requirement like counting, list maintenance, yes-no question answering and indefinite knowledge management.",
"title": ""
},
{
"docid": "1ba931d8b32c3e1622c46c7b645608a3",
"text": "The recently introduced method, which was called ldquostretching,rdquo is extended to timed Petri nets which may have both controllable and uncontrollable transitions. Using this method, a new Petri net, called ldquostretched Petri net,rdquo which has only unit firing durations, is obtained to represent a timed-transition Petri net. Using this net, the state of the original timed Petri net can be represented easily. This representation also makes it easy to design a supervisory controller for a timed Petri net for any purpose. In this paper, supervisory controller design to avoid deadlock is considered in particular. Using this method, a controller is first designed for the stretched Petri net. Then, using this controller, a controller for the original timed Petri net is obtained. Algorithms to construct the reachability sets of the stretched and original timed Petri nets, as well as algorithms to obtain the controller for the original timed Petri net are presented. These algorithms are implemented using Matlab. Examples are also presented to illustrate the introduced approach.",
"title": ""
},
{
"docid": "1cfab58b5b57009817a54faceafacd8e",
"text": "Current Web applications are very complex and high sophisticated software products, whose usability can heavily determine their success or failure. Defining methods for ensuring usability is one of the current goals of the Web Engineering research. Also, much attention on usability is currently paid by Industry, which is recognizing the importance of adopting methods for usability evaluation before and after the application deployment. This chapter introduces principles and evaluation methods to be adopted during the whole application lifecycle for promoting usability. For each evaluation method, the main features, as well as the emerging advantages and drawbacks are illustrated, so as to support the choice of an evaluation plan that best fits the goals to be pursued and the available resources. The design and evaluation of a real application is also described for exemplifying the introduced concepts and methods.",
"title": ""
},
{
"docid": "5a777c011d7dbd82653b1b2d0f007607",
"text": "The Factored Language Model (FLM) is a flexible framework for incorporating various information sources, such as morphology and part-of-speech, into language modeling. FLMs have so far been successfully applied to tasks such as speech recognition and machine translation; it has the potential to be used in a wide variety of problems in estimating probability tables from sparse data. This tutorial serves as a comprehensive description of FLMs and related algorithms. We document the FLM functionalities as implemented in the SRI Language Modeling toolkit and provide an introductory walk-through using FLMs on an actual dataset. Our goal is to provide an easy-to-understand tutorial and reference for researchers interested in applying FLMs to their problems. Overview of the Tutorial We first describe the factored language model (Section 1) and generalized backoff (Section 2), two complementary techniques that attempt to improve statistical estimation (i.e., reduce parameter variance) in language models, and that also attempt to better describe the way in which language (and sequences of words) might be produced. Researchers familar with the algorithms behind FLMs may skip to Section 3, which describes the FLM programs and file formats in the publicly-available SRI Language Modeling (SRILM) toolkit.1 Section 4 is a step-by-step walkthrough with several FLM examples on a real language modeling dataset. This may be useful for beginning users of the FLMs. Finally, Section 5 discusses the problem of automatically tuning FLM parameters on real datasets and refers to existing software. This may be of interest to advanced users of FLMs.",
"title": ""
}
] | scidocsrr |
e44674f57cf1f061cb1839768d7ad019 | "How Old Do You Think I Am?" A Study of Language and Age in Twitter | [
{
"docid": "16c9b857bbe8d9f13f078ddb193d7483",
"text": "We present TweetMotif, an exploratory search application for Twitter. Unlike traditional approaches to information retrieval, which present a simple list of messages, TweetMotif groups messages by frequent significant terms — a result set’s subtopics — which facilitate navigation and drilldown through a faceted search interface. The topic extraction system is based on syntactic filtering, language modeling, near-duplicate detection, and set cover heuristics. We have used TweetMotif to deflate rumors, uncover scams, summarize sentiment, and track political protests in real-time. A demo of TweetMotif, plus its source code, is available at http://tweetmotif.com. Introduction and Description On the microblogging service Twitter, users post millions of very short messages every day. Organizing and searching through this large corpus is an exciting research problem. Since messages are so small, we believe microblog search requires summarization across many messages at once. Our system, TweetMotif, responds to user queries, first retrieving several hundred recent matching messages from a simple index; we use the Twitter Search API. Instead of simply showing this result set as a list, TweetMotif extracts a set of themes (topics) to group and summarize these messages. A topic is simultaneously characterized by (1) a 1to 3-word textual label, and (2) a set of messages, whose texts must all contain the label. TweetMotif’s user interface is inspired by faceted search, which has been shown to aid Web search tasks (Hearst et al. 2002). The main screen is a two-column layout. The left column is a list of themes that are related to the current search term, while the right column presents actual tweets, grouped by theme. As themes are selected on the left column, a sample of tweets for that theme appears at the top of the right column, pushing down (but not removing) tweet results for any previously selected related themes. This allows users to explore and compare multiple related themes at once. The set of topics is chosen to try to satisfy several criteria, which often conflict: Copyright c © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Screenshot of TweetMotif. 1. Frequency contrast: Topic label phrases should be frequent in the query subcorpus, but infrequent among general Twitter messages. This ensures relevance to the query while eliminating overly generic terms. 2. Topic diversity: Topics should be chosen such that their messages and label phrases minimally overlap. Overlapping topics repetitively fill the same information niche; only one should be used. 3. Topic size: A topic that includes too few messages is bad; it is overly specific. 4. Small number of topics: Screen real-estate and concomitant user cognitive load are limited resources. The goal is to provide the user a concise summary of themes and variation in the query subcorpus, then allow the user to navigate to individual topics to see their associated messages, and allow recursive drilldown. The approach is related to document clustering (though a message can belong to multiple topics) and text summarization (topic labels are a high-relevance subset of text across messages). We heuristically proceed through several stages of analysis.",
"title": ""
}
] | [
{
"docid": "31ec7ef4e68950919054b59942d4dbfa",
"text": "A promising approach to learn to play board games is to use reinforcement learning algorithms that can learn a game position evaluation function. In this paper we examine and compare three different methods for generating training games: (1) Learning by self-play, (2) Learning by playing against an expert program, and (3) Learning from viewing experts play against themselves. Although the third possibility generates highquality games from the start compared to initial random games generated by self-play, the drawback is that the learning program is never allowed to test moves which it prefers. We compared these three methods using temporal difference methods to learn the game of backgammon. For particular games such as draughts and chess, learning from a large database containing games played by human experts has as a large advantage that during the generation of (useful) training games, no expensive lookahead planning is necessary for move selection. Experimental results in this paper show how useful this method is for learning to play chess and draughts.",
"title": ""
},
{
"docid": "6abb57ab0c62c6a112907f6659864756",
"text": "Rabbani, A, Kargarfard, M, and Twist, C. Reliability and validity of a submaximal warm-up test for monitoring training status in professional soccer players. J Strength Cond Res 32(2): 326-333, 2018-Two studies were conducted to assess the reliability and validity of a submaximal warm-up test (SWT) in professional soccer players. For the reliability study, 12 male players performed an SWT over 3 trials, with 1 week between trials. For the validity study, 14 players of the same team performed an SWT and a 30-15 intermittent fitness test (30-15IFT) 7 days apart. Week-to-week reliability in selected heart rate (HR) responses (exercise heart rate [HRex], heart rate recovery [HRR] expressed as the number of beats recovered within 1 minute [HRR60s], and HRR expressed as the mean HR during 1 minute [HRpost1]) was determined using the intraclass correlation coefficient (ICC) and typical error of measurement expressed as coefficient of variation (CV). The relationships between HR measures derived from the SWT and the maximal speed reached at the 30-15IFT (VIFT) were used to assess validity. The range for ICC and CV values was 0.83-0.95 and 1.4-7.0% in all HR measures, respectively, with the HRex as the most reliable HR measure of the SWT. Inverse large (r = -0.50 and 90% confidence limits [CLs] [-0.78 to -0.06]) and very large (r = -0.76 and CL, -0.90 to -0.45) relationships were observed between HRex and HRpost1 with VIFT in relative (expressed as the % of maximal HR) measures, respectively. The SWT is a reliable and valid submaximal test to monitor high-intensity intermittent running fitness in professional soccer players. In addition, the test's short duration (5 minutes) and simplicity mean that it can be used regularly to assess training status in high-level soccer players.",
"title": ""
},
{
"docid": "7704b6baee77726a546b49bc0376d8cf",
"text": "The increase in high-precision, high-sample-rate telemetry timeseries poses a problem for existing timeseries databases which can neither cope with the throughput demands of these streams nor provide the necessary primitives for effective analysis of them. We present a novel abstraction for telemetry timeseries data and a data structure for providing this abstraction: a timepartitioning version-annotated copy-on-write tree. An implementation in Go is shown to outperform existing solutions, demonstrating a throughput of 53 million inserted values per second and 119 million queried values per second on a four-node cluster. The system achieves a 2.9x compression ratio and satisfies statistical queries spanning a year of data in under 200ms, as demonstrated on a year-long production deployment storing 2.1 trillion data points. The principles and design of this database are generally applicable to a large variety of timeseries types and represent a significant advance in the development of technology for the Internet of Things.",
"title": ""
},
{
"docid": "77d0786af4c5eee510a64790af497e25",
"text": "Mobile computing is a revolutionary technology, born as a result of remarkable advances in computer hardware and wireless communication. Mobile applications have become increasingly popular in recent years. Today, it is not uncommon to see people playing games or reading mails on handphones. With the rapid advances in mobile computing technology, there is an increasing demand for processing realtime transactions in a mobile environment. Hence there is a strong need for efficient transaction management, data access modes and data management, consistency control and other mobile data management issues. This survey paper will cover issues related to concurrency control in mobile database. This paper studies concurrency control problem in mobile database systems, we analyze the features of mobile database and concurrency control techniques. With the increasing number of mobile hosts there are many new solutions and algorithms for concurrency control being proposed and implemented. We wish that our paper has served as a survey of the important solutions in the fields of concurrency control in mobile database. Keywords-component; Distributed Real-time Databases, Mobile Real-time Databases, Concurrency Control, Data Similarity, and Transaction Scheduling.",
"title": ""
},
{
"docid": "d8cd13bcd43052550dbfdc0303ef2bc7",
"text": "We study the Shannon capacity of adaptive transmission techniques in conjunction with diversity combining. This capacity provides an upper bound on spectral efficiency using these techniques. We obtain closed-form solutions for the Rayleigh fading channel capacity under three adaptive policies: optimal power and rate adaptation, constant power with optimal rate adaptation, and channel inversion with fixed rate. Optimal power and rate adaptation yields a small increase in capacity over just rate adaptation, and this increase diminishes as the average received carrier-to-noise ratio (CNR) or the number of diversity branches increases. Channel inversion suffers the largest capacity penalty relative to the optimal technique, however, the penalty diminishes with increased diversity. Although diversity yields large capacity gains for all the techniques, the gain is most pronounced with channel inversion. For example, the capacity using channel inversion with two-branch diversity exceeds that of a single-branch system using optimal rate and power adaptation. Since channel inversion is the least complex scheme to implement, there is a tradeoff between complexity and capacity for the various adaptation methods and diversity-combining techniques.",
"title": ""
},
{
"docid": "b974a8d8b298bfde540abc451f76bf90",
"text": "This chapter provides information on commonly used equipment in industrial mammalian cell culture, with an emphasis on bioreactors. The actual equipment used in the cell culture process can vary from one company to another, but the main steps remain the same. The process involves expansion of cells in seed train and inoculation train processes followed by cultivation of cells in a production bioreactor. Process and equipment options for each stage of the cell culture process are introduced and examples are provided. Finally, the use of disposables during seed train and cell culture production is discussed.",
"title": ""
},
{
"docid": "ad40625ae8500d8724523ae2e663eeae",
"text": "The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by Nature requires a great deal of anatomical detail to be modeled. In this paper, we present a human hand model with underlying anatomical structure. Animation of the hand model is controlled by muscle contraction values. We employ a physically based hybrid muscle model to convert these contraction values into movement of skin and bones. Pseudo muscles directly control the rotation of bones based on anatomical data and mechanical laws, while geometric muscles deform the skin tissue using a mass-spring system. Thus, resulting animations automatically exhibit anatomically and physically correct finger movements and skin deformations. In addition, we present a deformation technique to create individual hand models from photographs. A radial basis warping function is set up from the correspondence of feature points and applied to the complete structure of the reference hand model, making the deformed hand model instantly animatable.",
"title": ""
},
{
"docid": "314ffaaf39e2345f90e85fc5c5fdf354",
"text": "With the fast development pace of deep submicron technology, the size and density of semiconductor memory grows rapidly. However, keeping a high level of yield and reliability for memory products is more and more difficult. Both the redundancy repair and ECC techniques have been widely used for enhancing the yield and reliability of memory chips. Specifically, the redundancy repair and ECC techniques are conventionally used to repair or correct the hard faults and soft errors, respectively. In this paper, we propose an integrated ECC and redundancy repair scheme for memory reliability enhancement. Our approach can identify the hard faults and soft errors during the memory normal operation mode, and repair the hard faults during the memory idle time as long as there are unused redundant elements. We also develop a method for evaluating the memory reliability. Experimental results show that the proposed approach is effective, e.g., the MTTF of a 32K /spl times/ 64 memory is improved by 1.412 hours (7.1%) with our integrated ECC and repair scheme.",
"title": ""
},
{
"docid": "d0bacaa267599486356c175ca5419ede",
"text": "As P4 and its associated compilers move beyond relative immaturity, there is a need for common evaluation criteria. In this paper, we propose Whippersnapper, a set of benchmarks for P4. Rather than simply selecting a set of representative data-plane programs, the benchmark is designed from first principles, identifying and exploring key features and metrics. We believe the benchmark will not only provide a vehicle for comparing implementations and designs, but will also generate discussion within the larger community about the requirements for data-plane languages.",
"title": ""
},
{
"docid": "e8a9dffcb6c061fe720e7536387f5116",
"text": "The diffusion decision model allows detailed explanations of behavior in two-choice discrimination tasks. In this article, the model is reviewed to show how it translates behavioral dataaccuracy, mean response times, and response time distributionsinto components of cognitive processing. Three experiments are used to illustrate experimental manipulations of three components: stimulus difficulty affects the quality of information on which a decision is based; instructions emphasizing either speed or accuracy affect the criterial amounts of information that a subject requires before initiating a response; and the relative proportions of the two stimuli affect biases in drift rate and starting point. The experiments also illustrate the strong constraints that ensure the model is empirically testable and potentially falsifiable. The broad range of applications of the model is also reviewed, including research in the domains of aging and neurophysiology.",
"title": ""
},
{
"docid": "7000ea96562204dfe2c0c23f7cdb6544",
"text": "In this paper, the dynamic modeling of a doubly-fed induction generator-based wind turbine connected to infinite bus (SMIB) system, is carried out in detail. In most of the analysis, the DFIG stator transients and network transients are neglected. In this paper the interfacing problems while considering stator transients and network transients in the modeling of SMIB system are resolved by connecting a resistor across the DFIG terminals. The effect of simplification of shaft system on the controller gains is also discussed. In addition, case studies are presented to demonstrate the effect of mechanical parameters and controller gains on system stability when accounting the two-mass shaft model for the drive train system.",
"title": ""
},
{
"docid": "96a38946e201b7201e874bee0047a34e",
"text": "Nowadays people work on computers for hours and hours they don’t have time to take care of themselves. Due to hectic schedules and consumption of junk food it affects the health of people and mainly heart. So to we are implementing an heart disease prediction system using data mining technique Naïve Bayes and k-means clustering algorithm. It is the combination of both the algorithms. This paper gives an overview for the same. It helps in predicting the heart disease using various attributes and it predicts the output as in the prediction form. For grouping of various attributes it uses k-means algorithm and for predicting it uses naïve bayes algorithm. Index Terms —Data mining, Comma separated files, naïve bayes, k-means algorithm, heart disease.",
"title": ""
},
{
"docid": "66b909528a566662667a3d8c7c749bf4",
"text": "There exists a big demand for innovative secure electronic communications while the expertise level of attackers increases rapidly and that causes even bigger demands and needs for an extreme secure connection. An ideal security protocol should always be protecting the security of connections in many aspects, and leaves no trapdoor for the attackers. Nowadays, one of the popular cryptography protocols is hybrid cryptosystem that uses private and public key cryptography to change secret message. In available cryptography protocol attackers are always aware of transmission of sensitive data. Even non-interested attackers can get interested to break the ciphertext out of curiosity and challenge, when suddenly catches some scrambled data over the network. First of all, we try to explain the roles of innovative approaches in cryptography. After that we discuss about the disadvantages of public key cryptography to exchange secret key. Furthermore, DNA steganography is explained as an innovative paradigm to diminish the usage of public cryptography to exchange session key. In this protocol, session key between a sender and receiver is hidden by novel DNA data hiding technique. Consequently, the attackers are not aware of transmission of session key through unsecure channel. Finally, the strength point of the DNA steganography is discussed.",
"title": ""
},
{
"docid": "ce53aa803d587301a47166c483ecec34",
"text": "Boosting takes on various forms with different programs using different loss functions, different base models, and different optimization schemes. The gbm package takes the approach described in [3] and [4]. Some of the terminology differs, mostly due to an effort to cast boosting terms into more standard statistical terminology (e.g. deviance). In addition, the gbm package implements boosting for models commonly used in statistics but not commonly associated with boosting. The Cox proportional hazard model, for example, is an incredibly useful model and the boosting framework applies quite readily with only slight modification [7]. Also some algorithms implemented in the gbm package differ from the standard implementation. The AdaBoost algorithm [2] has a particular loss function and a particular optimization algorithm associated with it. The gbm implementation of AdaBoost adopts AdaBoost’s exponential loss function (its bound on misclassification rate) but uses Friedman’s gradient descent algorithm rather than the original one proposed. So the main purposes of this document is to spell out in detail what the gbm package implements.",
"title": ""
},
{
"docid": "46ea713c4206d57144350a7871433392",
"text": "In this paper, we use a blog corpus to demonstrate that we can often identify the author of an anonymous text even where there are many thousands of candidate authors. Our approach combines standard information retrieval methods with a text categorization meta-learning scheme that determines when to even venture a guess.",
"title": ""
},
{
"docid": "e75669b68e8736ee6044443108c00eb1",
"text": "UNLABELLED\nThe evolution in adhesive dentistry has broadened the indication of esthetic restorative procedures especially with the use of resin composite material. Depending on the clinical situation, some restorative techniques are best indicated. As an example, indirect adhesive restorations offer many advantages over direct techniques in extended cavities. In general, the indirect technique requires two appointments and a laboratory involvement, or it can be prepared chairside in a single visit either conventionally or by the use of computer-aided design/computer-aided manufacturing systems. In both cases, there will be an extra cost as well as the need of specific materials. This paper describes the clinical procedures for the chairside semidirect technique for composite onlay fabrication without the use of special equipments. The use of this technique combines the advantages of the direct and the indirect restoration.\n\n\nCLINICAL SIGNIFICANCE\nThe semidirect technique for composite onlays offers the advantages of an indirect restoration and low cost, and can be the ideal treatment option for extended cavities in case of financial limitations.",
"title": ""
},
{
"docid": "d2bf01dd261701cae64daa8625f4d2f4",
"text": "Canada has been the world’s leader in e-Government maturity for the last five years. The global average for government website usage by citizens is about 30%. In Canada, this statistic is over 51%. The vast majority of Canadians visit government websites to obtain information, rather than interacting or transacting with the government. It seems that the rate of adoption of e-Government has globally fallen below expectations although some countries are doing better than others. Clearly, a better understanding of why and how citizens use government websites, and their general dispositions towards e-Government is an important research issue. This paper initiates discussion of this issue by proposing a conceptual model of e-Government adoption that places users as the focal point for e-Government adoption strategy.",
"title": ""
},
{
"docid": "6bc5f1f780e96cf19dfd5cdf92b80a36",
"text": "We explore the concept of co-design in the context of neural network verification. Specifically, we aim to train deep neural networks that not only are robust to adversarial perturbations but also whose robustness can be verified more easily. To this end, we identify two properties of network models – weight sparsity and so-called ReLU stability – that turn out to significantly impact the complexity of the corresponding verification task. We demonstrate that improving weight sparsity alone already enables us to turn computationally intractable verification problems into tractable ones. Then, improving ReLU stability leads to an additional 4–13x speedup in verification times. An important feature of our methodology is its “universality,” in the sense that it can be used with a broad range of training procedures and verification approaches.",
"title": ""
},
{
"docid": "5124bfe94345f2abe6f91fe717731945",
"text": "Recently, IT trends such as big data, cloud computing, internet of things (IoT), 3D visualization, network, and so on demand terabyte/s bandwidth computer performance in a graphics card. In order to meet these performance, terabyte/s bandwidth graphics module using 2.5D-IC with high bandwidth memory (HBM) technology has been emerged. Due to the difference in scale of interconnect pitch between GPU or HBM and package substrate, the HBM interposer is certainly required for terabyte/s bandwidth graphics module. In this paper, the electrical performance of the HBM interposer channel in consideration of the manufacturing capabilities is analyzed by simulation both the frequency- and time-domain. Furthermore, although the silicon substrate is most widely employed for the HBM interposer fabrication, the organic and glass substrate are also proposed to replace the high cost and high loss silicon substrate. Therefore, comparison and analysis of the electrical performance of the HBM interposer channel using silicon, organic, and glass substrate are conducted.",
"title": ""
},
{
"docid": "aacfd1e4670044e597f8a321375bdfc1",
"text": "This article presents the main outcome findings from two inter-related randomized trials conducted at four sites to evaluate the effectiveness and cost-effectiveness of five short-term outpatient interventions for adolescents with cannabis use disorders. Trial 1 compared five sessions of Motivational Enhancement Therapy plus Cognitive Behavioral Therapy (MET/CBT) with a 12-session regimen of MET and CBT (MET/CBT12) and another that included family education and therapy components (Family Support Network [FSN]). Trial II compared the five-session MET/CBT with the Adolescent Community Reinforcement Approach (ACRA) and Multidimensional Family Therapy (MDFT). The 600 cannabis users were predominately white males, aged 15-16. All five CYT interventions demonstrated significant pre-post treatment during the 12 months after random assignment to a treatment intervention in the two main outcomes: days of abstinence and the percent of adolescents in recovery (no use or abuse/dependence problems and living in the community). Overall, the clinical outcomes were very similar across sites and conditions; however, after controlling for initial severity, the most cost-effective interventions were MET/CBT5 and MET/CBT12 in Trial 1 and ACRA and MET/CBT5 in Trial 2. It is possible that the similar results occurred because outcomes were driven more by general factors beyond the treatment approaches tested in this study; or because of shared, general helping factors across therapies that help these teens attend to and decrease their connection to cannabis and alcohol.",
"title": ""
}
] | scidocsrr |
17a1ee714654cc10cc65f25e39d2c370 | Right Answer for the Wrong Reason: Discovery and Mitigation | [
{
"docid": "0201a5f0da2430ec392284938d4c8833",
"text": "Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (wordby-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions P against Q and Q against P . In each matching direction, each time step of one sentence is matched against all timesteps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.",
"title": ""
},
{
"docid": "a2673b70bf6c7cf50f2f4c4db2845e19",
"text": "This paper presents a summary of the first Workshop on Building Linguistically Generalizable Natural Language Processing Systems, and the associated Build It Break It, The Language Edition shared task. The goal of this workshop was to bring together researchers in NLP and linguistics with a shared task aimed at testing the generalizability of NLP systems beyond the distributions of their training data. We describe the motivation, setup, and participation of the shared task, provide discussion of some highlighted results, and discuss lessons learned.",
"title": ""
},
{
"docid": "54e2406da46e13870e991ed8a8fb084d",
"text": "Character-based neural machine translation (NMT) models alleviate out-ofvocabulary issues, learn morphology, and move us closer to completely end-toend translation systems. Unfortunately, they are also very brittle and easily falter when presented with noisy data. In this paper, we confront NMT models with synthetic and natural sources of noise. We find that state-of-the-art models fail to translate even moderately noisy texts that humans have no trouble comprehending. We explore two approaches to increase model robustness: structure-invariant word representations and robust training on noisy texts. We find that a model based on a character convolutional neural network is able to simultaneously learn representations robust to multiple kinds of noise.",
"title": ""
},
{
"docid": "71b5c8679979cccfe9cad229d4b7a952",
"text": "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.\n In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.",
"title": ""
}
] | [
{
"docid": "a87cf90c881f1d65fdb76af0cdcf0bfa",
"text": "A Gram-stain-negative, rod-shaped, aerobic, straw yellow, motile strain, designated KNDSW-TSA6T, belonging to the genus Acidovorax, was isolated from a water sample of the river Ganges, downstream of the city of Kanpur, Uttar Pradesh, India. Cells were aerobic, non-endospore-forming and motile with single polar flagella. It differed from its phylogenetically related strains by phenotypic characteristics such as hydrolysis of urea, gelatin, casein and DNA, and the catalase reaction. The major fatty acids were C16 : 1ω7c/C16 : 1ω6c, C16 : 0 and C18 : 1ω7c/C18 : 1ω6c. Phylogenetic analysis based on 16S rRNA and housekeeping genes (gyrb, recA and rpoB gene sequences), confirmed its placement within the genus Acidovorax as a novel species. Strain KNDSW-TSA6T showed highest 16S rRNA sequence similarity to Acidovorax soli BL21T (98.9 %), Acidovorax delafieldii ATCC 17505T (98.8 %), Acidovorax temperans CCUG 11779T (98.2 %), Acidovorax caeni R-24608T (97.9 %) and Acidovorax radicis N35T (97.6 %). The digital DNA-DNA hybridization and average nucleotide identity values calculated from whole genome sequences between strain KNDSW-TSA6T and the two most closely related strains A. soli BL21T and A. delafieldii ATCC 17505T were below the threshold values of 70 and 95 % respectively. Thus, the data from the polyphasic taxonomic analysis clearly indicates that strain KNDSW-TSA6T represents a novel species, for which the name Acidovorax kalamii sp. nov. is proposed. The type strain is Acidovorax kalamii (=MTCC 12652T=KCTC 52819T=VTCC-B-910010T).",
"title": ""
},
{
"docid": "06499372aac4f329e1b96512587ac37d",
"text": "This study focuses on the task of multipassage reading comprehension (RC) where an answer is provided in natural language. Current mainstream approaches treat RC by extracting the answer span from the provided passages and cannot generate an abstractive summary from the given question and passages. Moreover, they cannot utilize and control different styles of answers, such as concise phrases and well-formed sentences, within a model. In this study, we propose a style-controllable Multi-source Abstractive Summarization model for QUEstion answering, called Masque. The model is an end-toend deep neural network that can generate answers conditioned on a given style. Experiments with MS MARCO 2.1 show that our model achieved state-of-the-art performance on two tasks with different answer styles.",
"title": ""
},
{
"docid": "aa52a52f16a14f6d199a57e86f57d49b",
"text": "We recently proposed a structural model for the Si!331\"-!12!1\" surface reconstruction containing silicon pentamers and adatoms as elementary structural building blocks. Using first-principles density functional theory we here investigate the stability of a variety of adatom configurations and determine the lowest-energy configuration. We also present a detailed comparison of the energetics between our model for Si!331\"-!12 !1\" and the adatom-tetramer-interstitial model for Si!110\"-!16!2\", which shares the same structural building blocks.",
"title": ""
},
{
"docid": "41a15d3dcca1ff835b5d983a8bb5343f",
"text": "and is made available as an electronic reprint (preprint) with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. ABSTRACT We describe the architecture and design of a through-the-wall radar. The radar is applied for the detection and localization of people hidden behind obstacles. It implements a new adaptive processing technique for people detection, which is introduced in this article. This processing technique is based on exponential averaging with adopted weighting coefficients. Through-the-wall detection and localization of a moving person is demonstrated by a measurement example. The localization relies on the time-of-flight approach.",
"title": ""
},
{
"docid": "cccb82f06f43f67b0431ee9f1ef8c949",
"text": "This paper presents a versatile solution in an effort of improving the accuracy in monitoring the environmental conditions and reducing manpower for industrial households shrimp farming. A ZigBee-based wireless sensor network (WSN) was used to monitor the critical environmental conditions and all the control processes are done with the help of a series of low-power embedded MSP430 microcontrollers from Texas Instruments. This system is capable of collecting, analyzing and presenting data on a Graphical User Interface (GUI), programmed with LabVIEW. It also allows the user to get the updated sensor information online based on Google Spreadsheets application, via Internet connectivity, or at any time through the SMS gateway service and sends alert message promptly enabling user interventions when needed. Thereby the system minimizes the effects of environmental fluctuations caused by sudden changes and reduces the expended labor power of farms. Because of that, the proposed system saves the cost of hiring labor as well as the electricity usage. The design promotes a versatile, low-cost, and commercial version which will function best for small to medium sized farming operations as it does not require any refitting or reconstruction of the pond.",
"title": ""
},
{
"docid": "96576520817119fa75ff9713ddb0fc3e",
"text": "Unattended ground sensors (UGS) are widely used to monitor human activities, such as pedestrian motion and detection of intruders in a secure region. Efficacy of UGS systems is often limited by high false alarm rates, possibly due to inadequacies of the underlying algorithms and limitations of onboard computation. In this regard, this paper presents a wavelet-based method for target detection and classification. The proposed method has been validated on data sets of seismic and passive infrared sensors for target detection and classification, as well as for payload and movement type identification of the targets. The proposed method has the advantages of fast execution time and low memory requirements and is potentially well-suited for real-time implementation with onboard UGS systems.",
"title": ""
},
{
"docid": "758880e02554dd63b92da065742147d5",
"text": "1Department of Computer Science, Faculty of Science and Technology, Universidade Nova de Lisboa, Lisboa, Portugal 2Center for Biomedical Technology, Universidad Politécnica de Madrid, 28223 Pozuelo de Alarcón, Madrid, Spain 3Data, Networks and Cybersecurity Research Institute, Univ. Rey Juan Carlos, 28028 Madrid, Spain 4Department of Applied Mathematics, Universidad Rey Juan Carlos, 28933 Móstoles, Madrid, Spain 5Center for Computational Simulation, 28223 Pozuelo de Alarcón, Madrid, Spain 6Cyber Security & Digital Trust, BBVA Group, 28050 Madrid, Spain",
"title": ""
},
{
"docid": "c632d3bfb27987e74cc69865627388bf",
"text": "Previous studies and surgeon interviews have shown that most surgeons prefer quality standard de nition (SD)TV 2D scopes to rst generation 3D endoscopes. The use of a telesurgical system has eased many of the design constraints on traditional endoscopes, enabling the design of a high quality SDTV 3D endoscope and an HDTV endoscopic system with outstanding resolution. The purpose of this study was to examine surgeon performance and preference given the choice between these. The study involved two perceptual tasks and four visual-motor tasks using a telesurgical system using the 2D HDTV endoscope and the SDTV endoscope in both 2D and 3D mode. The use of a telesurgical system enabled recording of all the subjects motions for later analysis. Contrary to experience with early 3D scopes and SDTV 2D scopes, this study showed that despite the superior resolution of the HDTV system surgeons performed better with and preferred the SDTV 3D scope.",
"title": ""
},
{
"docid": "b9d78a4f1fc6587557057125343675ab",
"text": "We propose a new computational approach for tracking and detecting statistically significant linguistic shifts in the meaning and usage of words. Such linguistic shifts are especially prevalent on the Internet, where the rapid exchange of ideas can quickly change a word's meaning. Our meta-analysis approach constructs property time series of word usage, and then uses statistically sound change point detection algorithms to identify significant linguistic shifts. We consider and analyze three approaches of increasing complexity to generate such linguistic property time series, the culmination of which uses distributional characteristics inferred from word co-occurrences. Using recently proposed deep neural language models, we first train vector representations of words for each time period. Second, we warp the vector spaces into one unified coordinate system. Finally, we construct a distance-based distributional time series for each word to track its linguistic displacement over time.\n We demonstrate that our approach is scalable by tracking linguistic change across years of micro-blogging using Twitter, a decade of product reviews using a corpus of movie reviews from Amazon, and a century of written books using the Google Book Ngrams. Our analysis reveals interesting patterns of language usage change commensurate with each medium.",
"title": ""
},
{
"docid": "dd6ed8448043868d17ddb015c98a4721",
"text": "Social networking sites, especially Facebook, are an integral part of the lifestyle of contemporary youth. The facilities are increasingly being used by older persons as well. Usage is mainly for social purposes, but the groupand discussion facilities of Facebook hold potential for focused academic use. This paper describes and discusses a venture in which postgraduate distancelearning students joined an optional group for the purpose of discussions on academic, contentrelated topics, largely initiated by the students themselves. Learning and insight were enhanced by these discussions and the students, in their environment of distance learning, are benefiting by contact with fellow students.",
"title": ""
},
{
"docid": "96516274e1eb8b9c53296a935f67ca2a",
"text": "Recurrent neural networks that are <italic>trained</italic> to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidel discriminant function together with the recurrent structure contribute to this instability. We prove that a simple algorithm can <italic>construct</italic> second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal DFA state representations are stable, that is, the constructed network correctly classifies strings of <italic>arbitrary length</italic>. The algorithm is based on encoding strengths of weights directly into the neural network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with <italic>n</italic> state and <italic>m</italic>input alphabet symbols, the constructive algorithm generates a “programmed” neural network with <italic>O</italic>(<italic>n</italic>) neurons and <italic>O</italic>(<italic>mn</italic>) weights. We compare our algorithm to other methods proposed in the literature.",
"title": ""
},
{
"docid": "45f895841ad08bd4473025385e57073a",
"text": "Robust brain magnetic resonance (MR) segmentation algorithms are critical to analyze tissues and diagnose tumor and edema in a quantitative way. In this study, we present a new tissue segmentation algorithm that segments brain MR images into tumor, edema, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The detection of the healthy tissues is performed simultaneously with the diseased tissues because examining the change caused by the spread of tumor and edema on healthy tissues is very important for treatment planning. We used T1, T2, and FLAIR MR images of 20 subjects suffering from glial tumor. We developed an algorithm for stripping the skull before the segmentation process. The segmentation is performed using self-organizing map (SOM) that is trained with unsupervised learning algorithm and fine-tuned with learning vector quantization (LVQ). Unlike other studies, we developed an algorithm for clustering the SOM instead of using an additional network. Input feature vector is constructed with the features obtained from stationary wavelet transform (SWT) coefficients. The results showed that average dice similarity indexes are 91% for WM, 87% for GM, 96% for CSF, 61% for tumor, and 77% for edema.",
"title": ""
},
{
"docid": "2fd708b638a6562b5b5c1cf2f9b156a5",
"text": "A main aspect of the Android platform is Inter-Application Communication (IAC), which enables reuse of functionality across apps and app components via message passing. While a powerful feature, IAC also constitutes a serious attack surface. A malicious app can embed a payload into an IAC message, thereby driving the recipient app into a potentially vulnerable behavior if the message is processed without its fields first being sanitized or validated. We present what to our knowledge is the first comprehensive testing algorithm for Android IAC vulnerabilities. Toward this end, we first describe a catalog, stemming from our field experience, of 8 concrete vulnerability types that can potentially arise due to unsafe handling of incoming IAC messages. We then explain the main challenges that automated discovery of Android IAC vulnerabilities entails, including in particular path coverage and custom data fields, and present simple yet surprisingly effective solutions to these challenges. We have realized our testing approach as the IntentDroid system, which is available as a commercial cloud service. IntentDroid utilizes lightweight platform-level instrumentation, implemented via debug breakpoints (to run atop any Android device without any setup or customization), to recover IAC-relevant app-level behaviors. Evaluation of IntentDroid over a set of 80 top-popular apps has revealed a total 150 IAC vulnerabilities — some already fixed by the developers following our report — with a recall rate of 92% w.r.t. a ground truth established via manual auditing by a security expert.",
"title": ""
},
{
"docid": "a4b123705dda7ae3ac7e9e88a50bd64a",
"text": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.",
"title": ""
},
{
"docid": "a1b20560bbd6124db8fc8b418cd1342c",
"text": "Feature selection is often an essential data processing step prior to applying a learning algorithm The re moval of irrelevant and redundant information often improves the performance of machine learning algo rithms There are two common approaches a wrapper uses the intended learning algorithm itself to evaluate the usefulness of features while a lter evaluates fea tures according to heuristics based on general charac teristics of the data The wrapper approach is generally considered to produce better feature subsets but runs much more slowly than a lter This paper describes a new lter approach to feature selection that uses a correlation based heuristic to evaluate the worth of fea ture subsets When applied as a data preprocessing step for two common machine learning algorithms the new method compares favourably with the wrapper but re quires much less computation",
"title": ""
},
{
"docid": "ae4c9e5df340af3bd35ae5490083c72a",
"text": "The massive technological advancements around the world have created significant challenging competition among companies where each of the companies tries to attract the customers using different techniques. One of the recent techniques is Augmented Reality (AR). The AR is a new technology which is capable of presenting possibilities that are difficult for other technologies to offer and meet. Nowadays, numerous augmented reality applications have been used in the industry of different kinds and disseminated all over the world. AR will really alter the way individuals view the world. The AR is yet in its initial phases of research and development at different colleges and high-tech institutes. Throughout the last years, AR apps became transportable and generally available on various devices. Besides, AR begins to occupy its place in our audio-visual media and to be used in various fields in our life in tangible and exciting ways such as news, sports and is used in many domains in our life such as electronic commerce, promotion, design, and business. In addition, AR is used to facilitate the learning whereas it enables students to access location-specific information provided through various sources. Such growth and spread of AR applications pushes organizations to compete one another, and every one of them exerts its best to gain the customers. This paper provides a comprehensive study of AR including its history, architecture, applications, current challenges and future trends.",
"title": ""
},
{
"docid": "985111380a5eefe2e0e11e9b93941905",
"text": "The trend towards shorter delivery lead-times reduces operational efficiency and increases transportation costs for internet retailers. Mobile technology, however, creates new opportunities to organize the last-mile. In this paper, we study the concept of crowdsourced delivery that aims to use excess capacity on journeys that already take place to make deliveries. We consider a peer-to-peer platform that automatically creates matches between parcel delivery tasks and ad-hoc drivers. The platform also operates a fleet of backup vehicles to serve the tasks that cannot be served by the ad-hoc drivers. The matching of tasks, drivers and backup vehicles gives rise to a new variant of the dynamic pick-up and delivery problem. We propose a rolling horizon framework and develop an exact solution approach to solve the various subproblems. In order to investigate the potential benefit of crowdsourced delivery, we conduct a wide range of computational experiments. The experiments provide insights into the viability of crowdsourced delivery under various assumptions about the environment and the behavior of the ad-hoc drivers. The results suggest that the use of ad-hoc drivers has the potential to make the last-mile more cost-efficient and can reduce the system-wide vehicle-miles.",
"title": ""
},
{
"docid": "0999a01e947019409c75150f85058728",
"text": "We present a robot localization system using biologically inspired vision. Our system models two extensively studied human visual capabilities: (1) extracting the ldquogistrdquo of a scene to produce a coarse localization hypothesis and (2) refining it by locating salient landmark points in the scene. Gist is computed here as a holistic statistical signature of the image, thereby yielding abstract scene classification and layout. Saliency is computed as a measure of interest at every image location, which efficiently directs the time-consuming landmark-identification process toward the most likely candidate locations in the image. The gist features and salient regions are then further processed using a Monte Carlo localization algorithm to allow the robot to generate its position. We test the system in three different outdoor environments-building complex (38.4 m times 54.86 m area, 13 966 testing images), vegetation-filled park (82.3 m times 109.73 m area, 26 397 testing images), and open-field park (137.16 m times 178.31 m area, 34 711 testing images)-each with its own challenges. The system is able to localize, on average, within 0.98, 2.63, and 3.46 m, respectively, even with multiple kidnapped-robot instances.",
"title": ""
},
{
"docid": "c3193ff8079c5e47d85035b483c805f9",
"text": "In order to enhance the scanning range of planar phased arrays, a planar substrate integrated waveguide slot (SIWslot) antenna with wide beamwidth is proposed in this letter. The proposed antenna is fabricated on a single-layer substrate, which is fully covered with a metal ground on the back. The SIW works like a dielectric-filled rectangular waveguide working in the TE10 mode. There are four inclined slots etched on the top metal layer following the rules of rectangular waveguide slot antenna. The electric fields in the slots work as equivalent magnetic currents. As opposed to normal microstrip antennas, the equivalent magnetic currents from the slots over a larger metal ground can radiate with a wide beamwidth. Its operating bandwidth is from 5.4 to 6.45 GHz with a relative bandwidth of 17.7%. Meanwhile, the 3-dB beamwidth in the xz-plane is between 130° and 148° in the whole operating band. Furthermore, the SIW-slot element is employed in a 1 × 8 planar phased array. The measured results show that the main lobe of phased array can obtain a wide-angle scanning from -71° to 73° in the whole operating band.",
"title": ""
},
{
"docid": "e5a69aa4eaf7e38a5372fb3d39571669",
"text": "A widespread folklore for explaining the success of Convolutional Neural Networks (CNNs) is that CNNs use a more compact representation than the Fullyconnected Neural Network (FNN) and thus require fewer training samples to accurately estimate their parameters. We initiate the study of rigorously characterizing the sample complexity of estimating CNNs. We show that for an m-dimensional convolutional filter with linear activation acting on a d-dimensional input, the sample complexity of achieving population prediction error of is r Opm{ q 2, whereas the sample-complexity for its FNN counterpart is lower bounded by Ωpd{ q samples. Since, in typical settings m ! d, this result demonstrates the advantage of using a CNN. We further consider the sample complexity of estimating a onehidden-layer CNN with linear activation where both the m-dimensional convolutional filter and the r-dimensional output weights are unknown. For this model, we show that the sample complexity is r O ` pm` rq{ 2 ̆ when the ratio between the stride size and the filter size is a constant. For both models, we also present lower bounds showing our sample complexities are tight up to logarithmic factors. Our main tools for deriving these results are a localized empirical process analysis and a new lemma characterizing the convolutional structure. We believe that these tools may inspire further developments in understanding CNNs.",
"title": ""
}
] | scidocsrr |
dc61ad88c896f5df31456923867cbb14 | Wide Pulse Combined With Narrow-Pulse Generator for Food Sterilization | [
{
"docid": "19e16c7618b0f1a623f3446e4d84fc08",
"text": "Apoptosis — the regulated destruction of a cell — is a complicated process. The decision to die cannot be taken lightly, and the activity of many genes influence a cell's likelihood of activating its self-destruction programme. Once the decision is taken, proper execution of the apoptotic programme requires the coordinated activation and execution of multiple subprogrammes. Here I review the basic components of the death machinery, describe how they interact to regulate apoptosis in a coordinated manner, and discuss the main pathways that are used to activate cell death.",
"title": ""
}
] | [
{
"docid": "07be6a2df7360ef53d7e6d9cc30f621d",
"text": "Fire accidents can cause numerous casualties and heavy property losses, especially, in petrochemical industry, such accidents are likely to cause secondary disasters. However, common fire drill training would cause loss of resources and pollution. We designed a multi-dimensional interactive somatosensory (MDIS) cloth system based on virtual reality technology to simulate fire accidents in petrochemical industry. It provides a vivid visual and somatosensory experience. A thermal radiation model is built in a virtual environment, and it could predict the destruction radius of a fire. The participant position changes are got from Kinect, and shown in virtual environment synchronously. The somatosensory cloth, which could both heat and refrigerant, provides temperature feedback based on thermal radiation results and actual distance. In this paper, we demonstrate the details of the design, and then verified its basic function. Heating deviation from model target is lower than 3.3 °C and refrigerant efficiency is approximately two times faster than heating efficiency.",
"title": ""
},
{
"docid": "20662e12b45829c00c67434277ab9a26",
"text": "Given the significance of placement in IC physical design, extensive research studies performed over the last 50 years addressed numerous aspects of global and detailed placement. The objectives and the constraints dominant in placement have been revised many times over, and continue to evolve. Additionally, the increasing scale of placement instances affects the algorithms of choice for high-performance tools. We survey the history of placement research, the progress achieved up to now, and outstanding challenges.",
"title": ""
},
{
"docid": "9fa20791d2e847dbd2c7204d00eec965",
"text": "As neurobiological evidence points to the neocortex as the brain region mainly involved in high-level cognitive functions, an innovative model of neocortical information processing has been recently proposed. Based on a simplified model of a neocortical neuron, and inspired by experimental evidence of neocortical organisation, the Hierarchical Temporal Memory (HTM) model attempts at understanding intelligence, but also at building learning machines. This paper focuses on analysing HTM's ability for online, adaptive learning of sequences. In particular, we seek to determine whether the approach is robust to noise in its inputs, and to compare and contrast its performance and attributes to an alternative Hidden Markov Model (HMM) approach. We reproduce a version of a HTM network and apply it to a visual pattern recognition task under various learning conditions. Our first set of experiments explore the HTM network's capability to learn repetitive patterns and sequences of patterns within random data streams. Further experimentation involves assessing the network's learning performance in terms of inference and prediction under different noise conditions. HTM results are compared with those of a HMM trained at the same tasks. Online learning performance results demonstrate the HTM's capacity to make use of context in order to generate stronger predictions, whereas results on robustness to noise reveal an ability to deal with noisy environments. Our comparisons also, however, emphasise a manner in which HTM differs significantly from HMM, which is that HTM generates predicted observations rather than hidden states, and each observation is a sparse distributed representation.",
"title": ""
},
{
"docid": "33b63fe07849be342beaf3b31dc0d6da",
"text": "Infrared sensors are used in Photoplethysmography measurements (PPG) to get blood flow parameters in the vascular system. It is a simple, low-cost non-invasive optical technique that is commonly placed on a finger or toe, to detect blood volume changes in the micro-vascular bed of tissue. The sensor use an infrared source and a photo detector to detect the infrared wave which is not absorbed. The recorded infrared waveform at the detector side is called the PPG signal. This paper reviews the various blood flow parameters that can be extracted from this PPG signal including the existence of an endothelial disfunction as an early detection tool of vascular diseases.",
"title": ""
},
{
"docid": "3f23f5452c53ae5fcc23d95acdcdafd8",
"text": "Metamorphism is a technique that mutates the binary code using different obfuscations and never keeps the same sequence of opcodes in the memory. This stealth technique provides the capability to a malware for evading detection by simple signature-based (such as instruction sequences, byte sequences and string signatures) anti-malware programs. In this paper, we present a new scheme named Annotated Control Flow Graph (ACFG) to efficiently detect such kinds of malware. ACFG is built by annotating CFG of a binary program and is used for graph and pattern matching to analyse and detect metamorphic malware. We also optimize the runtime of malware detection through parallelization and ACFG reduction, maintaining the same accuracy (without ACFG reduction) for malware detection. ACFG proposed in this paper: (i) captures the control flow semantics of a program; (ii) provides a faster matching of ACFGs and can handle malware with smaller CFGs, compared with other such techniques, without compromising the accuracy; (iii) contains more information and hence provides more accuracy than a CFG. Experimental evaluation of the proposed scheme using an existing dataset yields malware detection rate of 98.9% and false positive rate of 4.5%.",
"title": ""
},
{
"docid": "1969bf5a07349cc5a9b498e0437e41fe",
"text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.",
"title": ""
},
{
"docid": "bbb9412a61bb8497e1d8b6e955e0217b",
"text": "There has been great interest in developing methodologies that are capable of dealing with imprecision and uncertainty. The large amount of research currently being carried out in fuzzy and rough sets is representative of this. Many deep relationships have been established, and recent studies have concluded as to the complementary nature of the two methodologies. Therefore, it is desirable to extend and hybridize the underlying concepts to deal with additional aspects of data imperfection. Such developments offer a high degree of flexibility and provide robust solutions and advanced tools for data analysis. Fuzzy-rough set-based feature (FS) selection has been shown to be highly useful at reducing data dimensionality but possesses several problems that render it ineffective for large datasets. This paper proposes three new approaches to fuzzy-rough FS-based on fuzzy similarity relations. In particular, a fuzzy extension to crisp discernibility matrices is proposed and utilized. Initial experimentation shows that the methods greatly reduce dimensionality while preserving classification accuracy.",
"title": ""
},
{
"docid": "cedfccb3fd6433e695082594cf0beb45",
"text": "Among the different existing cryptographic file systems, EncFS has a unique feature that makes it attractive for backup setups involving untrusted (cloud) storage. It is a file-based overlay file system in normal operation (i.e., it maintains a directory hierarchy by storing encrypted representations of files and folders in a specific source folder), but its reverse mode allows to reverse this process: Users can mount deterministic, encrypted views of their local, unencrypted files on the fly, allowing synchronization to untrusted storage using standard tools like rsync without having to store encrypted representations on the local hard drive. So far, EncFS is a single-user solution: All files of a folder are encrypted using the same, static key; file access rights are passed through to the encrypted representation, but not otherwise considered. In this paper, we work out how multi-user support can be integrated into EncFS and its reverse mode in particular. We present an extension that a) stores individual files' owner/group information and permissions in a confidential and authenticated manner, and b) cryptographically enforces thereby specified read rights. For this, we introduce user-specific keys and an appropriate, automatic key management. Given a user's key and a complete encrypted source directory, the extension allows access to exactly those files the user is authorized for according to the corresponding owner/group/permissions information. Just like EncFS, our extension depends only on symmetric cryptographic primitives.",
"title": ""
},
{
"docid": "410a4df5b17ec0c4b160c378ca08bc17",
"text": "We present the results of an investigation into the nature of information needs of software developers who work in projects that are part of larger ecosystems. This work is based on a quantitative survey of 75 professional software developers. We corroborate the results identified in the survey with needs and motivations proposed in a previous survey and discover that tool support for developers working in an ecosystem context is even more meager than we thought: mailing lists and internet search are the most popular tools developers use to satisfy their ecosystem-related information needs.",
"title": ""
},
{
"docid": "9c1f7dae555efd9c05ce7d3a90616c17",
"text": "Shallow trench isolation(STI) is the mainstream CMOS isolation technology for advanced integrated circuits. While STI process gives the isolation benefits due to its scalable characteristics, exploiting the compressive stress exerted by STI wells on device active regions to improve performance of devices has been one of the major industry focuses. However, in the present research of VLSI physical design, there has no yet a global optimization methodology on the whole chip layout to control the size of the STI wells, which affects the stress magnitude along with the size of active region of transistors. In this paper, we present a novel methodology that is capable of determining globally the optimal STI well width following the chip placement stage. The methodology is based on the observation that both of the terms in charge of chip width minimization and transistor channel mobility optimization in the objective function can be modeled as posynomials of the design variables, that is, the width of STI wells. Then, this stress aware placement optimization problem could be solved efficiently as a convex geometric programming (GP) problem. Finally, by a MOSEK GP problem solver, we do our STI width aware placement optimization on the given placements of some GSRC and IBM-PLACE benchmarks. Experiment results demonstrated that our methodology can obtain decent results with an acceptable runtime when satisfy the necessary location constraints from DRC specifications.",
"title": ""
},
{
"docid": "c5113ff741d9e656689786db10484a07",
"text": "Pulmonary administration of drugs presents several advantages in the treatment of many diseases. Considering local and systemic delivery, drug inhalation enables a rapid and predictable onset of action and induces fewer side effects than other routes of administration. Three main inhalation systems have been developed for the aerosolization of drugs; namely, nebulizers, pressurized metered-dose inhalers (MDIs) and dry powder inhalers (DPIs). The latter are currently the most convenient alternative as they are breath-actuated and do not require the use of any propellants. The deposition site in the respiratory tract and the efficiency of inhaled aerosols are critically influenced by the aerodynamic diameter, size distribution, shape and density of particles. In the case of DPIs, since micronized particles are generally very cohesive and exhibit poor flow properties, drug particles are usually blended with coarse and fine carrier particles. This increases particle aerodynamic behavior and flow properties of the drugs and ensures accurate dosage of active ingredients. At present, particles with controlled properties are obtained by milling, spray drying or supercritical fluid techniques. Several excipients such as sugars, lipids, amino acids, surfactants, polymers and absorption enhancers have been tested for their efficacy in improving drug pulmonary administration. The purpose of this article is to describe various observations that have been made in the field of inhalation product development, especially for the dry powder inhalation formulation, and to review the use of various additives, their effectiveness and their potential toxicity for pulmonary administration.",
"title": ""
},
{
"docid": "7c05ef9ac0123a99dd5d47c585be391c",
"text": "Memory access bugs, including buffer overflows and uses of freed heap memory, remain a serious problem for programming languages like C and C++. Many memory error detectors exist, but most of them are either slow or detect a limited set of bugs, or both. This paper presents AddressSanitizer, a new memory error detector. Our tool finds out-of-bounds accesses to heap, stack, and global objects, as well as use-after-free bugs. It employs a specialized memory allocator and code instrumentation that is simple enough to be implemented in any compiler, binary translation system, or even in hardware. AddressSanitizer achieves efficiency without sacrificing comprehensiveness. Its average slowdown is just 73% yet it accurately detects bugs at the point of occurrence. It has found over 300 previously unknown bugs in the Chromium browser and many bugs in other software.",
"title": ""
},
{
"docid": "d7ff935c38f2adad660ba580e6f3bc6c",
"text": "In this report, we provide a comparative analysis of different techniques for user intent classification towards the task of app recommendation. We analyse the performance of different models and architectures for multi-label classification over a dataset with a relative large number of classes and only a handful examples of each class. We focus, in particular, on memory network architectures, and compare how well the different versions perform under the task constraints. Since the classifier is meant to serve as a module in a practical dialog system, it needs to be able to work with limited training data and incorporate new data on the fly. We devise a 1-shot learning task to test the models under the above constraint. We conclude that relatively simple versions of memory networks perform better than other approaches. Although, for tasks with very limited data, simple non-parametric methods perform comparably, without needing the extra training data.",
"title": ""
},
{
"docid": "40bb8660fd02dc402d80e0f5970fa9dc",
"text": "Dengue is the second most common mosquito-borne disease affecting human beings. In 2009, WHO endorsed new guidelines that, for the first time, consider neurological manifestations in the clinical case classification for severe dengue. Dengue can manifest with a wide range of neurological features, which have been noted--depending on the clinical setting--in 0·5-21% of patients with dengue admitted to hospital. Furthermore, dengue was identified in 4-47% of admissions with encephalitis-like illness in endemic areas. Neurological complications can be categorised into dengue encephalopathy (eg, caused by hepatic failure or metabolic disorders), encephalitis (caused by direct virus invasion), neuromuscular complications (eg, Guillain-Barré syndrome or transient muscle dysfunctions), and neuro-ophthalmic involvement. However, overlap of these categories is possible. In endemic countries and after travel to these regions, dengue should be considered in patients presenting with fever and acute neurological manifestations.",
"title": ""
},
{
"docid": "d99d4bdf1af85c14653c7bbde10eca7b",
"text": "Plants endure a variety of abiotic and biotic stresses, all of which cause major limitations to production. Among abiotic stressors, heavy metal contamination represents a global environmental problem endangering humans, animals, and plants. Exposure to heavy metals has been documented to induce changes in the expression of plant proteins. Proteins are macromolecules directly responsible for most biological processes in a living cell, while protein function is directly influenced by posttranslational modifications, which cannot be identified through genome studies. Therefore, it is necessary to conduct proteomic studies, which enable the elucidation of the presence and role of proteins under specific environmental conditions. This review attempts to present current knowledge on proteomic techniques developed with an aim to detect the response of plant to heavy metal stress. Significant contributions to a better understanding of the complex mechanisms of plant acclimation to metal stress are also discussed.",
"title": ""
},
{
"docid": "e95e043f3a783d95cf4f490bdf6cb6e0",
"text": "The fundamental problem of finding a suitable representation of the orientation of 3D surfaces is considered. A representation is regarded suitable if it meets three basic requirements: Uniqueness, Uniformity and Polar separability. A suitable tensor representation is given. At the heart of the problem lies the fact that orientation can only be defined mod 180◦ , i.e the fact that a 180◦ rotation of a line or a plane amounts to no change at all. For this reason representing a plane using its normal vector leads to ambiguity and such a representation is consequently not suitable. The ambiguity can be eliminated by establishing a mapping between R3 and a higherdimensional tensor space. The uniqueness requirement implies a mapping that map all pairs of 3D vectors x and -x onto the same tensor T. Uniformity implies that the mapping implicitly carries a definition of distance between 3D planes (and lines) that is rotation invariant and monotone with the angle between the planes. Polar separability means that the norm of the representing tensor T is rotation invariant. One way to describe the mapping is that it maps a 3D sphere into 6D in such a way that the surface is uniformly stretched and all pairs of antipodal points maps onto the same tensor. It is demonstrated that the above mapping can be realized by sampling the 3D space using a specified class of symmetrically distributed quadrature filters. It is shown that 6 quadrature filters are necessary to realize the desired mapping, the orientations of the filters given by lines trough the vertices of an icosahedron. The desired tensor representation can be obtained by simply performing a weighted summation of the quadrature filter outputs. This situation is indeed satisfying as it implies a simple implementation of the theory and that requirements on computational capacity can be kept within reasonable limits. Noisy neigborhoods and/or linear combinations of tensors produced by the mapping will in general result in a tensor that has no direct counterpart in R3. In an adaptive hierarchical signal processing system, where information is flowing both up (increasing the level of abstraction) and down (for adaptivity and guidance), it is necessary that a meaningful inverse exists for each levelaltering operation. It is shown that the point in R3 that corresponds to the best approximation of a given tensor is given by the largest eigenvalue times the corresponding eigenvector of the tensor.",
"title": ""
},
{
"docid": "65685bafe88b596530d4280e7e75d1c4",
"text": "The supernodal method for sparse Cholesky factorization represents the factor L as a set of supernodes, each consisting of a contiguous set of columns of L with identical nonzero pattern. A conventional supernode is stored as a dense submatrix. While this is suitable for sparse Cholesky factorization where the nonzero pattern of L does not change, it is not suitable for methods that modify a sparse Cholesky factorization after a low-rank change to A (an update/downdate, Ā = A ± WWT). Supernodes merge and split apart during an update/downdate. Dynamic supernodes are introduced which allow a sparse Cholesky update/downdate to obtain performance competitive with conventional supernodal methods. A dynamic supernodal solver is shown to exceed the performance of the conventional (BLAS-based) supernodal method for solving triangular systems. These methods are incorporated into CHOLMOD, a sparse Cholesky factorization and update/downdate package which forms the basis of x = A\\b MATLAB when A is sparse and symmetric positive definite.",
"title": ""
},
{
"docid": "e3316e7fa5a042d0a973c621cec5c3bc",
"text": "Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis with the help of their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs), and uses the wide kernels in the first convolutional layer for extracting features and suppressing high frequency noise. Small convolutional kernels in the preceding layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that currently, the accuracy of CNN applied to fault diagnosis is not very high. WDCNN can not only achieve 100% classification accuracy on normal signals, but also outperform the state-of-the-art DNN model which is based on frequency features under different working load and noisy environment conditions.",
"title": ""
},
{
"docid": "10f5b005960094bdc1676facc4badf10",
"text": "Users with anomalous behaviors in online communication systems (e.g. email and social medial platforms) are potential threats to society. Automated anomaly detection based on advanced machine learning techniques has been developed to combat this issue; challenges remain, though, due to the difficulty of obtaining proper ground truth for model training and evaluation. Therefore, substantial human judgment on the automated analysis results is often required to better adjust the performance of anomaly detection. Unfortunately, techniques that allow users to understand the analysis results more efficiently, to make a confident judgment about anomalies, and to explore data in their context, are still lacking. In this paper, we propose a novel visual analysis system, TargetVue, which detects anomalous users via an unsupervised learning model and visualizes the behaviors of suspicious users in behavior-rich context through novel visualization designs and multiple coordinated contextual views. Particularly, TargetVue incorporates three new ego-centric glyphs to visually summarize a user's behaviors which effectively present the user's communication activities, features, and social interactions. An efficient layout method is proposed to place these glyphs on a triangle grid, which captures similarities among users and facilitates comparisons of behaviors of different users. We demonstrate the power of TargetVue through its application in a social bot detection challenge using Twitter data, a case study based on email records, and an interview with expert users. Our evaluation shows that TargetVue is beneficial to the detection of users with anomalous communication behaviors.",
"title": ""
},
{
"docid": "50708eb1617b59f605b926583d9215bf",
"text": "Due to filmmakers focusing on violence, traumatic events, and hallucinations when depicting characters with schizophrenia, critics have scrutinized the representation of mental disorders in contemporary films for years. This study compared previous research on schizophrenia with the fictional representation of the disease in contemporary films. Through content analysis, this study examined 10 films featuring a schizophrenic protagonist, tallying moments of violence and charting if they fell into four common stereotypes. Results showed a high frequency of violent behavior in films depicting schizophrenic characters, implying that those individuals are overwhelmingly dangerous and to be feared.",
"title": ""
}
] | scidocsrr |
1149bf34849583bfda1a14a163505f1f | Towards Generalization and Simplicity in Continuous Control | [
{
"docid": "05b6f7fd65ae6eee7fb3ae44e98fb2f9",
"text": "We explore learning-based approaches for feedback control of a dexterous five-finger hand performing non-prehensile manipulation. First, we learn local controllers that are able to perform the task starting at a predefined initial state. These controllers are constructed using trajectory optimization with respect to locally-linear time-varying models learned directly from sensor data. In some cases, we initialize the optimizer with human demonstrations collected via teleoperation in a virtual environment. We demonstrate that such controllers can perform the task robustly, both in simulation and on the physical platform, for a limited range of initial conditions around the trained starting state. We then consider two interpolation methods for generalizing to a wider range of initial conditions: deep learning, and nearest neighbors. We find that nearest neighbors achieve higher performance under full observability, while a neural network proves advantages under partial observability: it uses only tactile and proprioceptive feedback but no feedback about the object (i.e. it performs the task blind) and learns a time-invariant policy. In contrast, the nearest neighbors method switches between time-varying local controllers based on the proximity of initial object states sensed via motion capture. While both generalization methods leave room for improvement, our work shows that (i) local trajectory-based controllers for complex non-prehensile manipulation tasks can be constructed from surprisingly small amounts of training data, and (ii) collections of such controllers can be interpolated to form more global controllers. Results are summarized in the supplementary video: https://youtu.be/E0wmO6deqjo",
"title": ""
}
] | [
{
"docid": "c8be0e643c72c7abea1ad758ac2b49a8",
"text": "Visual attention plays an important role to understand images and demonstrates its effectiveness in generating natural language descriptions of images. On the other hand, recent studies show that language associated with an image can steer visual attention in the scene during our cognitive process. Inspired by this, we introduce a text-guided attention model for image captioning, which learns to drive visual attention using associated captions. For this model, we propose an exemplarbased learning approach that retrieves from training data associated captions with each image, and use them to learn attention on visual features. Our attention model enables to describe a detailed state of scenes by distinguishing small or confusable objects effectively. We validate our model on MSCOCO Captioning benchmark and achieve the state-of-theart performance in standard metrics.",
"title": ""
},
{
"docid": "9a973833c640e8a9fe77cd7afdae60f2",
"text": "Metastasis is a characteristic trait of most tumour types and the cause for the majority of cancer deaths. Many tumour types, including melanoma and breast and prostate cancers, first metastasize via lymphatic vessels to their regional lymph nodes. Although the connection between lymph node metastases and shorter survival times of patients was made decades ago, the active involvement of the lymphatic system in cancer, metastasis has been unravelled only recently, after molecular markers of lymphatic vessels were identified. A growing body of evidence indicates that tumour-induced lymphangiogenesis is a predictive indicator of metastasis to lymph nodes and might also be a target for prevention of metastasis. This article reviews the current understanding of lymphangiogenesis in cancer anti-lymphangiogenic strategies for prevention and therapy of metastatic disease, quantification of lymphangiogenesis for the prognosis and diagnosis of metastasis and in vivo imaging technologies for the assessment of lymphatic vessels, drainage and lymph nodes.",
"title": ""
},
{
"docid": "7a2d4032d79659a70ed2f8a6b75c4e71",
"text": "In recent years, transition-based parsers have shown promise in terms of efficiency and accuracy. Though these parsers have been extensively explored for multiple Indian languages, there is still considerable scope for improvement by properly incorporating syntactically relevant information. In this article, we enhance transition-based parsing of Hindi and Urdu by redefining the features and feature extraction procedures that have been previously proposed in the parsing literature of Indian languages. We propose and empirically show that properly incorporating syntactically relevant information like case marking, complex predication and grammatical agreement in an arc-eager parsing model can significantly improve parsing accuracy. Our experiments show an absolute improvement of ∼2% LAS for parsing of both Hindi and Urdu over a competitive baseline which uses rich features like part-of-speech (POS) tags, chunk tags, cluster ids and lemmas. We also propose some heuristics to identify ezafe constructions in Urdu texts which show promising results in parsing these constructions.",
"title": ""
},
{
"docid": "c6dfe01e87a7ec648f0857bf1a74a3ba",
"text": "Received: 12 June 2006 Revised: 10 May 2007 Accepted: 22 July 2007 Abstract Although there is widespread agreement that leadership has important effects on information technology (IT) acceptance and use, relatively little empirical research to date has explored this phenomenon in detail. This paper integrates the unified theory of acceptance and use of technology (UTAUT) with charismatic leadership theory, and examines the role of project champions influencing user adoption. PLS analysis of survey data collected from 209 employees in seven organizations that had engaged in a large-scale IT implementation revealed that project champion charisma was positively associated with increased performance expectancy, effort expectancy, social influence and facilitating condition perceptions of users. Theoretical and managerial implications are discussed, and suggestions for future research in this area are provided. European Journal of Information Systems (2007) 16, 494–510. doi:10.1057/palgrave.ejis.3000682",
"title": ""
},
{
"docid": "f88235f1056d66c5dc188fcf747bf570",
"text": "In this paper, we compare the differences between traditional Kelly Criterion and Vince's optimal f through backtesting actual financial transaction data. We apply a momentum trading strategy to the Taiwan Weighted Index Futures, and analyze its profit-and-loss vectors of Kelly Criterion and Vince's optimal f, respectively. Our numerical experiments demonstrate that there is nearly 90% chance that the difference gap between the bet ratio recommended by Kelly criterion and and Vince's optimal f lies within 2%. Therefore, in the actual transaction, the values from Kelly Criterion could be taken directly as the optimal bet ratio for funds control.",
"title": ""
},
{
"docid": "329a84a4757e7ee595c31d53a4ab84d0",
"text": "Generating a reasonable ending for a given story context, i.e., story ending generation, is a strong indication of story comprehension. This task requires not only to understand the context clues which play an important role in planning the plot, but also to handle implicit knowledge to make a reasonable, coherent story. In this paper, we devise a novel model for story ending generation. The model adopts an incremental encoding scheme to represent context clues which are spanning in the story context. In addition, commonsense knowledge is applied through multi-source attention to facilitate story comprehension, and thus to help generate coherent and reasonable endings. Through building context clues and using implicit knowledge, the model is able to produce reasonable story endings. Automatic and manual evaluation shows that our model can generate more reasonable story endings than state-of-the-art baselines. 1",
"title": ""
},
{
"docid": "438094ef7913de0236b57a85e7d511c2",
"text": "Magnetic resonance (MR) is the best way to assess the new anatomy of the pelvis after male to female (MtF) sex reassignment surgery. The aim of the study was to evaluate the radiological appearance of the small pelvis after MtF surgery and to compare it with the normal women's anatomy. Fifteen patients who underwent MtF surgery were subjected to pelvic MR at least 6 months after surgery. The anthropometric parameters of the small pelvis were measured and compared with those of ten healthy women (control group). Our personal technique (creation of the mons Veneris under the pubic skin) was performed in all patients. In patients who underwent MtF surgery, the mean neovaginal depth was slightly superior than in women (P=0.009). The length of the inferior pelvic aperture and of the inlet of pelvis was higher in the control group (P<0.005). The inclination between the axis of the neovagina and the inferior pelvis aperture, the thickness of the mons Veneris and the thickness of the rectovaginal septum were comparable between the two study groups. MR consents a detailed assessment of the new pelvic anatomy after MtF surgery. The anthropometric parameters measured in our patients were comparable with those of women.",
"title": ""
},
{
"docid": "1d6a5ba2f937caa1df5f6d32ffd3bcb4",
"text": "The objective of this study is to present an offline control of highly non-linear inverted pendulum system moving on a plane inclined at an angle of 10° from horizontal. The stabilisation was achieved using three different soft-computing control techniques i.e. Proportional-integral-derivative (PID), Fuzzy logic and Adaptive neuro fuzzy inference system (ANFIS). A Matlab-Simulink model of the proposed system was initially developed which was further simulated using PID controllers based on trial and error method. The ANFIS controller were trained using data sets generated from simulation results of PID controller. The ANFIS controllers were designed using only three membership functions. A fuzzy logic control of the proposed system is also shown using nine membership functions. The study compares the three techniques in terms of settling time, maximum overshoot and steady state error. The simulation results are shown with the help of graphs and tables which validates the effectiveness of proposed techniques.",
"title": ""
},
{
"docid": "cd2ad7c7243c2b690239f1466b57c0ea",
"text": "In 2001, JPL commissioned four industry teams to make a fresh examination of Mars Sample Return (MSR) mission architectures. As new fiscal realities of a cost-capped Mars Exploration Program unfolded, it was evident that the converged-upon MSR concept did not fit reasonably within a balanced program. Therefore, along with a new MSR Science Steering Group, JPL asked the industry teams plus JPL's Team-X to explore ways to reduce the cost. A paper presented at last year's conference described the emergence of a new, affordable \"Groundbreaking-MSR\" concept (Mattingly et al., 2003). This work addresses the continued evolution of the Groundbreaking MSR concept over the last year. One of the tenets of the low-cost approach is to use substantial heritage from an earlier mission, Mars Science Laboratory (MSL). Recently, the MSL project developed and switched its baseline to a revolutionary landing approach, coined \"skycrane\" where the MSL, which is a rover, would be lowered gently to the Martian surface from a hovering vehicle. MSR has adopted this approach in its mission studies, again continuing to capitalize on the heritage for a significant portion of the new lander. In parallel, a MSR Technology Board was formed to reexamine MSR technology needs and participate in a continuing refinement of architectural trades. While the focused technology program continues to be definitized through the remainder of this year, the current assessment of what technology development is required, is discussed in this paper. In addition, the results of new trade studies and considerations will be discussed. Adopting these changes, the Groundbreaking MSR concept has shifted to that presented in this paper. It remains a project that is affordable and meets the basic science needs defined by the MSR Science Steering Group in 2002.",
"title": ""
},
{
"docid": "020e01f6914b518d77887b1fef1a7be2",
"text": "Scene-agnostic visual inpainting remains very challenging despite progress in patch-based methods. Recently, Pathak et al. [26] have introduced convolutional \"context encoders'' (CEs) for unsupervised feature learning through image completion tasks. With the additional help of adversarial training, CEs turned out to be a promising tool to complete complex structures in real inpainting problems. In the present paper we propose to push further this key ability by relying on perceptual reconstruction losses at training time. We show on a wide variety of visual scenes the merit of the approach forstructural inpainting, and confirm it through a user study. Combined with the optimization-based refinement of [32] with neural patches, our context encoder opens up new opportunities for prior-free visual inpainting.",
"title": ""
},
{
"docid": "ce1d25b3d2e32f903ce29470514abcce",
"text": "We present a method to generate a robot control strategy that maximizes the probability to accomplish a task. The task is given as a Linear Temporal Logic (LTL) formula over a set of properties that can be satisfied at the regions of a partitioned environment. We assume that the probabilities with which the properties are satisfied at the regions are known, and the robot can determine the truth value of a proposition only at the current region. Motivated by several results on partitioned-based abstractions, we assume that the motion is performed on a graph. To account for noisy sensors and actuators, we assume that a control action enables several transitions with known probabilities. We show that this problem can be reduced to the problem of generating a control policy for a Markov Decision Process (MDP) such that the probability of satisfying an LTL formula over its states is maximized. We provide a complete solution for the latter problem that builds on existing results from probabilistic model checking. We include an illustrative case study.",
"title": ""
},
{
"docid": "00b80ec74135b3190a50b4e0d83af17a",
"text": "Many organizations aspire to adopt agile processes to take advantage of the numerous benefits that they offer to an organization. Those benefits include, but are not limited to, quicker return on investment, better software quality, and higher customer satisfaction. To date, however, there is no structured process (at least that is published in the public domain) that guides organizations in adopting agile practices. To address this situation, we present the agile adoption framework and the innovative approach we have used to implement it. The framework consists of two components: an agile measurement index, and a four-stage process, that together guide and assist the agile adoption efforts of organizations. More specifically, the Sidky Agile Measurement Index (SAMI) encompasses five agile levels that are used to identify the agile potential of projects and organizations. The four-stage process, on the other hand, helps determine (a) whether or not organizations are ready for agile adoption, and (b) guided by their potential, what set of agile practices can and should be introduced. To help substantiate the “goodness” of the Agile Adoption Framework, we presented it to various members of the agile community, and elicited responses through questionnaires. The results of that substantiation effort are encouraging, and are also presented in this paper.",
"title": ""
},
{
"docid": "d11d8408649280e26172886fc8341954",
"text": "OBJECTIVE\nSelf-stigma is highly prevalent in schizophrenia and can be seen as an important factor leading to low self-esteem. It is however unclear how psychological factors and actual adverse events contribute to self-stigma. This study empirically examines how symptom severity and the experience of being victimized affect both self-stigma and self-esteem.\n\n\nMETHODS\nPersons with a schizophrenia spectrum disorder (N = 102) were assessed with a battery of self-rating questionnaires and interviews. Structural equation modelling (SEM) was subsequently applied to test the fit of three models: a model with symptoms and victimization as direct predictors of self-stigma and negative self-esteem, a model with an indirect effect for symptoms mediated by victimization and a third model with a direct effect for negative symptoms and an indirect effect for positive symptoms mediated by victimization.\n\n\nRESULTS\nResults showed good model fit for the direct effects of both symptoms and victimization: both lead to an increase of self-stigma and subsequent negative self-esteem. Negative symptoms had a direct association with self-stigma, while the relationship between positive symptoms and self-stigma was mediated by victimization.\n\n\nCONCLUSIONS\nOur findings suggest that symptoms and victimization may contribute to self-stigma, leading to negative self-esteem in individuals with a schizophrenia spectrum disorder. Especially for patients with positive symptoms victimization seems to be an important factor in developing self-stigma. Given the burden of self-stigma on patients and the constraining effects on societal participation and service use, interventions targeting victimization as well as self-stigma are needed.",
"title": ""
},
{
"docid": "00bcce935ca2e4d443941b7e90d644c9",
"text": "Nairovirus, one of five bunyaviral genera, includes seven species. Genomic sequence information is limited for members of the Dera Ghazi Khan, Hughes, Qalyub, Sakhalin, and Thiafora nairovirus species. We used next-generation sequencing and historical virus-culture samples to determine 14 complete and nine coding-complete nairoviral genome sequences to further characterize these species. Previously unsequenced viruses include Abu Mina, Clo Mor, Great Saltee, Hughes, Raza, Sakhalin, Soldado, and Tillamook viruses. In addition, we present genomic sequence information on additional isolates of previously sequenced Avalon, Dugbe, Sapphire II, and Zirqa viruses. Finally, we identify Tunis virus, previously thought to be a phlebovirus, as an isolate of Abu Hammad virus. Phylogenetic analyses indicate the need for reassignment of Sapphire II virus to Dera Ghazi Khan nairovirus and reassignment of Hazara, Tofla, and Nairobi sheep disease viruses to novel species. We also propose new species for the Kasokero group (Kasokero, Leopards Hill, Yogue viruses), the Ketarah group (Gossas, Issyk-kul, Keterah/soft tick viruses) and the Burana group (Wēnzhōu tick virus, Huángpí tick virus 1, Tǎchéng tick virus 1). Our analyses emphasize the sister relationship of nairoviruses and arenaviruses, and indicate that several nairo-like viruses (Shāyáng spider virus 1, Xīnzhōu spider virus, Sānxiá water strider virus 1, South Bay virus, Wǔhàn millipede virus 2) require establishment of novel genera in a larger nairovirus-arenavirus supergroup.",
"title": ""
},
{
"docid": "f03cc92b0bc69845b9f2b6c0c6f3168b",
"text": "Relational database management systems (RDBMSs) are powerful because they are able to optimize and answer queries against any relational database. A natural language interface (NLI) for a database, on the other hand, is tailored to support that specific database. In this work, we introduce a general purpose transfer-learnable NLI with the goal of learning one model that can be used as NLI for any relational database. We adopt the data management principle of separating data and its schema, but with the additional support for the idiosyncrasy and complexity of natural languages. Specifically, we introduce an automatic annotation mechanism that separates the schema and the data, where the schema also covers knowledge about natural language. Furthermore, we propose a customized sequence model that translates annotated natural language queries to SQL statements. We show in experiments that our approach outperforms previous NLI methods on the WikiSQL dataset and the model we learned can be applied to another benchmark dataset OVERNIGHT without retraining.",
"title": ""
},
{
"docid": "36bdc3b5f9ce2fbbff0dd815bf3eee67",
"text": "A patient with upper limb dimelia including a double scapula, humerus, radius, and ulna, 11 metacarpals and digits (5 on the superior side, 6 on the inferior side) was treated with a simple amputation of the inferior limb resulting in cosmetic improvement and maintenance of range of motion in the preserved limb. During the amputation, the 2 limbs were found to be anatomically separate except for the ulnar nerve, which, in the superior limb, bifurcated into the sensory branch of radial nerve in the inferior limb, and the brachial artery, which bifurcated into the radial artery. Each case of this rare anomaly requires its own individually carefully planned surgical procedure.",
"title": ""
},
{
"docid": "21afffc79652f8e6c0f5cdcd74a03672",
"text": "It’s useful to automatically transform an image from its original form to some synthetic form (style, partial contents, etc.), while keeping the original structure or semantics. We define this requirement as the ”image-to-image translation” problem, and propose a general approach to achieve it, based on deep convolutional and conditional generative adversarial networks (GANs), which has gained a phenomenal success to learn mapping images from noise input since 2014. In this work, we develop a two step (unsupervised) learning method to translate images between different domains by using unlabeled images without specifying any correspondence between them, so that to avoid the cost of acquiring labeled data. Compared with prior works, we demonstrated the capacity of generality in our model, by which variance of translations can be conduct by a single type of model. Such capability is desirable in applications like bidirectional translation",
"title": ""
},
{
"docid": "938f8383d25d30b39b6cd9c78d1b3ab5",
"text": "In the last two decades, the Lattice Boltzmann method (LBM) has emerged as a promising tool for modelling the Navier-Stokes equations and simulating complex fluid flows. LBM is based on microscopic models and mesoscopic kinetic equations. In some perspective, it can be viewed as a finite difference method for solving the Boltzmann transport equation. Moreover the Navier-Stokes equations can be recovered by LBM with a proper choice of the collision operator. In Section 2 and 3, we first introduce this method and describe some commonly used boundary conditions. In Section 4, the validity of this method is confirmed by comparing the numerical solution to the exact solution of the steady plane Poiseuille flow and convergence of solution is established. Some interesting numerical simulations, including the lid-driven cavity flow, flow past a circular cylinder and the Rayleigh-Bénard convection for a range of Reynolds numbers, are carried out in Section 5, 6 and 7. In Section 8, we briefly highlight the procedure of recovering the Navier-Stokes equations from LBM. A summary is provided in Section 9.",
"title": ""
},
{
"docid": "d49260a42c4d800963ca8779cf50f1ee",
"text": "Autoencoders learn data representations (codes) in such a way that the input is reproduced at the output of the network. However, it is not always clear what kind of properties of the input data need to be captured by the codes. Kernel machines have experienced great success by operating via inner-products in a theoretically well-defined reproducing kernel Hilbert space, hence capturing topological properties of input data. In this paper, we enhance the autoencoder’s ability to learn effective data representations by aligning inner products between codes with respect to a kernel matrix. By doing so, the proposed kernelized autoencoder allows learning similarity-preserving embeddings of input data, where the notion of similarity is explicitly controlled by the user and encoded in a positive semi-definite kernel matrix. Experiments are performed for evaluating both reconstruction and kernel alignment performance in classification tasks and visualization of high-dimensional data. Additionally, we show that our method is capable to emulate kernel principal component analysis on a denoising task, obtaining competitive results at a much lower computational cost.",
"title": ""
}
] | scidocsrr |
58739370c4538449529104817a3ce640 | Warning traffic sign recognition using a HOG-based K-d tree | [
{
"docid": "1c1775a64703f7276e4843b8afc26117",
"text": "This paper describes a computer vision based system for real-time robust traffic sign detection, tracking, and recognition. Such a framework is of major interest for driver assistance in an intelligent automotive cockpit environment. The proposed approach consists of two components. First, signs are detected using a set of Haar wavelet features obtained from AdaBoost training. Compared to previously published approaches, our solution offers a generic, joint modeling of color and shape information without the need of tuning free parameters. Once detected, objects are efficiently tracked within a temporal information propagation framework. Second, classification is performed using Bayesian generative modeling. Making use of the tracking information, hypotheses are fused over multiple frames. Experiments show high detection and recognition accuracy and a frame rate of approximately 10 frames per second on a standard PC.",
"title": ""
}
] | [
{
"docid": "07de7621bcba13f151b8616f8ef46bb4",
"text": "There is growing evidence that client firms expect outsourcing suppliers to transform their business. Indeed, most outsourcing suppliers have delivered IT operational and business process innovation to client firms; however, achieving strategic innovation through outsourcing has been perceived to be far more challenging. Building on the growing interest in the IS outsourcing literature, this paper seeks to advance our understanding of the role that relational and contractual governance plays in achieving strategic innovation through outsourcing. We hypothesized and tested empirically the relationship between the quality of client-supplier relationships and the likelihood of achieving strategic innovation, and the interaction effect of different contract types, such as fixed-price, time and materials, partnership and their combinations. Results from a pan-European survey of 248 large firms suggest that high-quality relationships between clients and suppliers may indeed help achieve strategic innovation through outsourcing. However, within the spectrum of various outsourcing contracts, only the partnership contract, when included in the client contract portfolio alongside either fixed-price, time and materials or their combination, presents a significant positive effect on relational governance and is likely to strengthen the positive effect of the quality of client-supplier relationships on strategic innovation.",
"title": ""
},
{
"docid": "47afccb5e7bcdade764666f3b5ab042e",
"text": "Social media comprises interactive applications and platforms for creating, sharing and exchange of user-generated contents. The past ten years have brought huge growth in social media, especially online social networking services, and it is changing our ways to organize and communicate. It aggregates opinions and feelings of diverse groups of people at low cost. Mining the attributes and contents of social media gives us an opportunity to discover social structure characteristics, analyze action patterns qualitatively and quantitatively, and sometimes the ability to predict future human related events. In this paper, we firstly discuss the realms which can be predicted with current social media, then overview available predictors and techniques of prediction, and finally discuss challenges and possible future directions.",
"title": ""
},
{
"docid": "800fd3b3b6dfd21838006e643ba92a0d",
"text": "The primary goals in use of half-bridge LLC series-resonant converter (LLC-SRC) are high efficiency, low noise, and wide-range regulation. A voltage-clamped drive circuit for simultaneously driving both primary and secondary switches is proposed to achieve synchronous rectification (SR) at switching frequency higher than the dominant resonant frequency. No high/low-side driver circuit for half-bridge switches of LLC-SRC is required and less circuit complexity is achieved. The SR mode LLC-SRC developed for reducing output rectification losses is described along with steady-state analysis, gate drive strategy, and its experiments. Design consideration is described thoroughly so as to build up a reference for design and realization. A design example of 240W SR LLC-SRC is examined and an average efficiency as high as 95% at full load is achieved. All performances verified by simulation and experiment are close to the theoretical predictions.",
"title": ""
},
{
"docid": "5768212e1fa93a7321fa6c0deff10c88",
"text": "Human research biobanks have rapidly expanded in the past 20 years, in terms of both their complexity and utility. To date there exists no agreement upon classification schema for these biobanks. This is an important issue to address for several reasons: to ensure that the diversity of biobanks is appreciated, to assist researchers in understanding what type of biobank they need access to, and to help institutions/funding bodies appreciate the varying level of support required for different types of biobanks. To capture the degree of complexity, specialization, and diversity that exists among human research biobanks, we propose here a new classification schema achieved using a conceptual classification approach. This schema is based on 4 functional biobank \"elements\" (donor/participant, design, biospecimens, and brand), which we feel are most important to the major stakeholder groups (public/participants, members of the biobank community, health care professionals/researcher users, sponsors/funders, and oversight bodies), and multiple intrinsic features or \"subelements\" (eg, the element \"biospecimens\" could be further classified based on preservation method into fixed, frozen, fresh, live, and desiccated). We further propose that the subelements relating to design (scale, accrual, data format, and data content) and brand (user, leadership, and sponsor) should be specifically recognized by individual biobanks and included in their communications to the broad stakeholder audience.",
"title": ""
},
{
"docid": "b006c534bd688fb2023f56f3952390d1",
"text": "The idea of applying IOT technologies to smart home system is introduced. An original architecture of the integrated system is analyzed with its detailed introduction. This architecture has great scalability. Based on this proposed architecture many applications can be integrated into the system through uniform interface. Agents are proposed to communicate with appliances through RFID tags. Key issues to be solved to promote the development of smart home system are also discussed.",
"title": ""
},
{
"docid": "f712384911f20ce7a475c4fe7d6be35d",
"text": "Weather forecasting provides numerous societal benefits, from extreme weather warnings to agricultural planning. In recent decades, advances in forecasting have been rapid, arising from improved observations and models, and better integration of these through data assimilation and related techniques. Further improvements are not yet constrained by limits on predictability. Better forecasting, in turn, can contribute to a wide range of environmental forecasting, from forest-fire smoke to bird migrations.",
"title": ""
},
{
"docid": "a441f01dae68134b419aa33f1f9588a6",
"text": "In this work we present a technique for using natural language to help reinforcement learning generalize to unseen environments using neural machine translation techniques. These techniques are then integrated into policy shaping to make it more effective at learning in unseen environments. We evaluate this technique using the popular arcade game, Frogger, and show that our modified policy shaping algorithm improves over a Q-learning agent as well as a baseline version of policy shaping.",
"title": ""
},
{
"docid": "408ef85850165cb8ffa97811cb5dc957",
"text": "Inspired by the recent development of deep network-based methods in semantic image segmentation, we introduce an end-to-end trainable model for face mask extraction in video sequence. Comparing to landmark-based sparse face shape representation, our method can produce the segmentation masks of individual facial components, which can better reflect their detailed shape variations. By integrating convolutional LSTM (ConvLSTM) algorithm with fully convolutional networks (FCN), our new ConvLSTM-FCN model works on a per-sequence basis and takes advantage of the temporal correlation in video clips. In addition, we also propose a novel loss function, called segmentation loss, to directly optimise the intersection over union (IoU) performances. In practice, to further increase segmentation accuracy, one primary model and two additional models were trained to focus on the face, eyes, and mouth regions, respectively. Our experiment shows the proposed method has achieved a 16.99% relative improvement (from 54.50 to 63.76% mean IoU) over the baseline FCN model on the 300 Videos in the Wild (300VW) dataset.",
"title": ""
},
{
"docid": "a52673140d86780db6c73787e5f53139",
"text": "Human papillomavirus (HPV) is the most important etiological factor for cervical cancer. A recent study demonstrated that more than 20 HPV types were thought to be oncogenic for uterine cervical cancer. Notably, more than one-half of women show cervical HPV infections soon after their sexual debut, and about 90 % of such infections are cleared within 3 years. Immunity against HPV might be important for elimination of the virus. The innate immune responses involving macrophages, natural killer cells, and natural killer T cells may play a role in the first line of defense against HPV infection. In the second line of defense, adaptive immunity via cytotoxic T lymphocytes (CTLs) targeting HPV16 E2 and E6 proteins appears to eliminate cells infected with HPV16. However, HPV can evade host immune responses. First, HPV does not kill host cells during viral replication and therefore neither presents viral antigen nor induces inflammation. HPV16 E6 and E7 proteins downregulate the expression of type-1 interferons (IFNs) in host cells. The lack of co-stimulatory signals by inflammatory cytokines including IFNs during antigen recognition may induce immune tolerance rather than the appropriate responses. Moreover, HPV16 E5 protein downregulates the expression of HLA-class 1, and it facilitates evasion of CTL attack. These mechanisms of immune evasion may eventually support the establishment of persistent HPV infection, leading to the induction of cervical cancer. Considering such immunological events, prophylactic HPV16 and 18 vaccine appears to be the best way to prevent cervical cancer in women who are immunized in adolescence.",
"title": ""
},
{
"docid": "b134824f6c135a331e503b77d17380c0",
"text": "Social media sites (e.g., Flickr, YouTube, and Facebook) are a popular distribution outlet for users looking to share their experiences and interests on the Web. These sites host substantial amounts of user-contributed materials (e.g., photographs, videos, and textual content) for a wide variety of real-world events of different type and scale. By automatically identifying these events and their associated user-contributed social media documents, which is the focus of this paper, we can enable event browsing and search in state-of-the-art search engines. To address this problem, we exploit the rich \"context\" associated with social media content, including user-provided annotations (e.g., title, tags) and automatically generated information (e.g., content creation time). Using this rich context, which includes both textual and non-textual features, we can define appropriate document similarity metrics to enable online clustering of media to events. As a key contribution of this paper, we explore a variety of techniques for learning multi-feature similarity metrics for social media documents in a principled manner. We evaluate our techniques on large-scale, real-world datasets of event images from Flickr. Our evaluation results suggest that our approach identifies events, and their associated social media documents, more effectively than the state-of-the-art strategies on which we build.",
"title": ""
},
{
"docid": "556c0c1662a64f484aff9d7556b2d0b5",
"text": "In this paper, we investigate the Chinese calligraphy synthesis problem: synthesizing Chinese calligraphy images with specified style from standard font(eg. Hei font) images (Fig. 1(a)). Recent works mostly follow the stroke extraction and assemble pipeline which is complex in the process and limited by the effect of stroke extraction. In this work we treat the calligraphy synthesis problem as an image-to-image translation problem and propose a deep neural network based model which can generate calligraphy images from standard font images directly. Besides, we also construct a large scale benchmark that contains various styles for Chinese calligraphy synthesis. We evaluate our method as well as some baseline methods on the proposed dataset, and the experimental results demonstrate the effectiveness of our proposed model.",
"title": ""
},
{
"docid": "f0ca75d480ca80ab9c3f8ea35819d064",
"text": "Purpose – The purpose of this paper is to evaluate the influence of psychological hardiness, social judgment, and “Big Five” personality dimensions on leader performance in U.S. military academy cadets at West Point. Design/methodology/approach – Army Cadets were studied in two different organizational contexts: (a)summer field training, and (b)during academic semesters. Leader performance was measured with leadership grades (supervisor ratings) aggregated over four years at West Point. Findings After controlling for general intellectual abilities, hierarchical regression results showed leader performance in the summer field training environment is predicted by Big Five Extraversion, and Hardiness, and a trend for Social Judgment. During the academic period context, leader performance is predicted by mental abilities, Big Five Conscientiousness, and Hardiness, with a trend for Social Judgment. Research limitations/implications Results confirm the importance of psychological hardiness, extraversion, and conscientiousness as factors influencing leader effectiveness, and suggest that social judgment aspects of emotional intelligence can also be important. These results also show that different Big Five personality factors may influence leadership in different organizational",
"title": ""
},
{
"docid": "64c6012d2e97a1059161c295ae3b9cdb",
"text": "One of the most popular user activities on the Web is watching videos. Services like YouTube, Vimeo, and Hulu host and stream millions of videos, providing content that is on par with TV. While some of this content is popular all over the globe, some videos might be only watched in a confined, local region.\n In this work we study the relationship between popularity and locality of online YouTube videos. We investigate whether YouTube videos exhibit geographic locality of interest, with views arising from a confined spatial area rather than from a global one. Our analysis is done on a corpus of more than 20 millions YouTube videos, uploaded over one year from different regions. We find that about 50% of the videos have more than 70% of their views in a single region. By relating locality to viralness we show that social sharing generally widens the geographic reach of a video. If, however, a video cannot carry its social impulse over to other means of discovery, it gets stuck in a more confined geographic region. Finally, we analyze how the geographic properties of a video's views evolve on a daily basis during its lifetime, providing new insights on how the geographic reach of a video changes as its popularity peaks and then fades away.\n Our results demonstrate how, despite the global nature of the Web, online video consumption appears constrained by geographic locality of interest: this has a potential impact on a wide range of systems and applications, spanning from delivery networks to recommendation and discovery engines, providing new directions for future research.",
"title": ""
},
{
"docid": "b06fc6126bf086cdef1d5ac289cf5ebe",
"text": "Rhinophyma is a subtype of rosacea characterized by nodular thickening of the skin, sebaceous gland hyperplasia, dilated pores, and in its late stage, fibrosis. Phymatous changes in rosacea are most common on the nose but can also occur on the chin (gnatophyma), ears (otophyma), and eyelids (blepharophyma). In severe cases, phymatous changes result in the loss of normal facial contours, significant disfigurement, and social isolation. Additionally, patients with profound rhinophyma can experience nare obstruction and difficulty breathing due to the weight and bulk of their nose. Treatment options for severe advanced rhinophyma include cryosurgery, partial-thickness decortication with subsequent secondary repithelialization, carbon dioxide (CO2) or erbium-doped yttrium aluminum garnet (Er:YAG) laser ablation, full-thickness resection with graft or flap reconstruction, excision by electrocautery or radio frequency, and sculpting resection using a heated Shaw scalpel. We report a severe case of rhinophyma resulting in marked facial disfigurement and nasal obstruction treated successfully using the Shaw scalpel. Rhinophymectomy using the Shaw scalpel allows for efficient and efficacious treatment of rhinophyma without the need for multiple procedures or general anesthesia and thus should be considered in patients with nare obstruction who require intervention.",
"title": ""
},
{
"docid": "5ff7a82ec704c8fb5c1aa975aec0507c",
"text": "With the increase of an ageing population and chronic diseases, society becomes more health conscious and patients become “health consumers” looking for better health management. People’s perception is shifting towards patient-centered, rather than the classical, hospital–centered health services which has been propelling the evolution of telemedicine research from the classic e-Health to m-Health and now is to ubiquitous healthcare (u-Health). It is expected that mobile & ubiquitous Telemedicine, integrated with Wireless Body Area Network (WBAN), have a great potential in fostering the provision of next-generation u-Health. Despite the recent efforts and achievements, current u-Health proposed solutions still suffer from shortcomings hampering their adoption today. This paper presents a comprehensive review of up-to-date requirements in hardware, communication, and computing for next-generation u-Health systems. It compares new technological and technical trends and discusses how they address expected u-Health requirements. A thorough survey on various worldwide recent system implementations is presented in an attempt to identify shortcomings in state-of-the art solutions. In particular, challenges in WBAN and ubiquitous computing were emphasized. The purpose of this survey is not only to help beginners with a holistic approach toward understanding u-Health systems but also present to researchers new technological trends and design challenges they have to cope with, while designing such systems.",
"title": ""
},
{
"docid": "177db8a6f89528c1e822f52395a34468",
"text": "Design of a low-energy power-ON reset (POR) circuit is proposed to reduce the energy consumed by the stable supply of the dual supply static random access memory (SRAM), as the other supply is ramping up. The proposed POR circuit, when embedded inside dual supply SRAM, removes its ramp-up constraints related to voltage sequencing and pin states. The circuit consumes negligible energy during ramp-up, does not consume dynamic power during operations, and includes hysteresis to improve noise immunity against voltage fluctuations on the power supply. The POR circuit, designed in the 40-nm CMOS technology within 10.6-μm2 area, enabled 27× reduction in the energy consumed by the SRAM array supply during periphery power-up in typical conditions.",
"title": ""
},
{
"docid": "fc421a5ef2556b86c34d6f2bb4dc018e",
"text": "It's been over a decade now. We've forgotten how slow the adoption of consumer Internet commerce has been compared to other Internet growth metrics. And we're surprised when security scares like spyware and phishing result in lurches in consumer use.This paper re-visits an old theme, and finds that consumer marketing is still characterised by aggression and dominance, not sensitivity to customer needs. This conclusion is based on an examination of terms and privacy policy statements, which shows that businesses are confronting the people who buy from them with fixed, unyielding interfaces. Instead of generating trust, marketers prefer to wield power.These hard-headed approaches can work in a number of circumstances. Compelling content is one, but not everyone sells sex, gambling services, short-shelf-life news, and even shorter-shelf-life fashion goods. And, after decades of mass-media-conditioned consumer psychology research and experimentation, it's far from clear that advertising can convert everyone into salivating consumers who 'just have to have' products and services brand-linked to every new trend, especially if what you sell is groceries or handyman supplies.The thesis of this paper is that the one-dimensional, aggressive concept of B2C has long passed its use-by date. Trading is two-way -- consumers' attention, money and loyalty, in return for marketers' products and services, and vice versa.So B2C is conceptually wrong, and needs to be replaced by some buzzphrase that better conveys 'B-with-C' rather than 'to-C' and 'at-C'. Implementations of 'customised' services through 'portals' have to mature beyond data-mining-based manipulation to support two-sided relationships, and customer-managed profiles.It's all been said before, but now it's time to listen.",
"title": ""
},
{
"docid": "0808637a7768609502b63bff5ffda1cb",
"text": "Blur is a key determinant in the perception of image quality. Generally, blur causes spread of edges, which leads to shape changes in images. Discrete orthogonal moments have been widely studied as effective shape descriptors. Intuitively, blur can be represented using discrete moments since noticeable blur affects the magnitudes of moments of an image. With this consideration, this paper presents a blind image blur evaluation algorithm based on discrete Tchebichef moments. The gradient of a blurred image is first computed to account for the shape, which is more effective for blur representation. Then the gradient image is divided into equal-size blocks and the Tchebichef moments are calculated to characterize image shape. The energy of a block is computed as the sum of squared non-DC moment values. Finally, the proposed image blur score is defined as the variance-normalized moment energy, which is computed with the guidance of a visual saliency model to adapt to the characteristic of human visual system. The performance of the proposed method is evaluated on four public image quality databases. The experimental results demonstrate that our method can produce blur scores highly consistent with subjective evaluations. It also outperforms the state-of-the-art image blur metrics and several general-purpose no-reference quality metrics.",
"title": ""
},
{
"docid": "1592e0150e4805a1fab68e5daaed8ed7",
"text": "Knowledge management (KM) has emerged as a tool that allows the creation, use, distribution and transfer of knowledge in organizations. There are different frameworks that propose KM in the scientific literature. The majority of these frameworks are structured based on a strong theoretical background. This study describes a guide for the implementation of KM in a higher education institution (HEI) based on a framework with a clear description on the practical implementation. This framework is based on a technological infrastructure that includes enterprise architecture, business intelligence and educational data mining. Furthermore, a case study which describes the experience of the implementation in a HEI is presented. As a conclusion, the pros and cons on the use of the framework are analyzed.",
"title": ""
},
{
"docid": "b26724af5b086315f219ae63bcd083d1",
"text": "BACKGROUND\nHyperhomocysteinemia arising from impaired methionine metabolism, probably usually due to a deficiency of cystathionine beta-synthase, is associated with premature cerebral, peripheral, and possibly coronary vascular disease. Both the strength of this association and its independence of other risk factors for cardiovascular disease are uncertain. We studied the extent to which the association could be explained by heterozygous cystathionine beta-synthase deficiency.\n\n\nMETHODS\nWe first established a diagnostic criterion for hyperhomocysteinemia by comparing peak serum levels of homocysteine after a standard methionine-loading test in 25 obligate heterozygotes with respect to cystathionine beta-synthase deficiency (whose children were known to be homozygous for homocystinuria due to this enzyme defect) with the levels in 27 unrelated age- and sex-matched normal subjects. A level of 24.0 mumol per liter or more was 92 percent sensitive and 100 percent specific in distinguishing the two groups. The peak serum homocysteine levels in these normal subjects were then compared with those in 123 patients whose vascular disease had been diagnosed before they were 55 years of age.\n\n\nRESULTS\nHyperhomocysteinemia was detected in 16 of 38 patients with cerebrovascular disease (42 percent), 7 of 25 with peripheral vascular disease (28 percent), and 18 of 60 with coronary vascular disease (30 percent), but in none of the 27 normal subjects. After adjustment for the effects of conventional risk factors, the lower 95 percent confidence limit for the odds ratio for vascular disease among the patients with hyperhomocysteinemia, as compared with the normal subjects, was 3.2. The geometric-mean peak serum homocysteine level was 1.33 times higher in the patients with vascular disease than in the normal subjects (P = 0.002). The presence of cystathionine beta-synthase deficiency was confirmed in 18 of 23 patients with vascular disease who had hyperhomocysteinemia.\n\n\nCONCLUSIONS\nHyperhomocysteinemia is an independent risk factor for vascular disease, including coronary disease, and in most instances is probably due to cystathionine beta-synthase deficiency.",
"title": ""
}
] | scidocsrr |
7663e1da0e3460b971249ce724b584d3 | Mid-Curve Recommendation System: a Stacking Approach Through Neural Networks | [
{
"docid": "be692c1251cb1dc73b06951c54037701",
"text": "Can we train the computer to beat experienced traders for financial assert trading? In this paper, we try to address this challenge by introducing a recurrent deep neural network (NN) for real-time financial signal representation and trading. Our model is inspired by two biological-related learning concepts of deep learning (DL) and reinforcement learning (RL). In the framework, the DL part automatically senses the dynamic market condition for informative feature learning. Then, the RL module interacts with deep representations and makes trading decisions to accumulate the ultimate rewards in an unknown environment. The learning system is implemented in a complex NN that exhibits both the deep and recurrent structures. Hence, we propose a task-aware backpropagation through time method to cope with the gradient vanishing issue in deep training. The robustness of the neural system is verified on both the stock and the commodity future markets under broad testing conditions.",
"title": ""
},
{
"docid": "50cc2033252216368c3bf19ea32b8a2c",
"text": "Sometimes you just have to clench your teeth and go for the differential matrix algebra. And the central limit theorems. Together with the maximum likelihood techniques. And the static mean variance portfolio theory. Not forgetting the dynamic asset pricing models. And these are just the tools you need before you can start making empirical inferences in financial economics.” So wrote Ruben Lee, playfully, in a review of The Econometrics of Financial Markets, winner of TIAA-CREF’s Paul A. Samuelson Award. In economist Harry M. Markowitz, who in won the Nobel Prize in Economics, published his landmark thesis “Portfolio Selection” as an article in the Journal of Finance, and financial economics was born. Over the subsequent decades, this young and burgeoning field saw many advances in theory but few in econometric technique or empirical results. Then, nearly four decades later, Campbell, Lo, and MacKinlay’s The Econometrics of Financial Markets made a bold leap forward by integrating theory and empirical work. The three economists combined their own pathbreaking research with a generation of foundational work in modern financial theory and research. The book includes treatment of topics from the predictability of asset returns to the capital asset pricing model and arbitrage pricing theory, from statistical fractals to chaos theory. Read widely in academe as well as in the business world, The Econometrics of Financial Markets has become a new landmark in financial economics, extending and enhancing the Nobel Prize– winning work established by the early trailblazers in this important field.",
"title": ""
}
] | [
{
"docid": "48aff90183293227a99ecf3911c7296a",
"text": "Based on data from a survey (n = 3291) and 14 qualitative interviews among Danish older adults, this study investigated the use of, and attitudes toward, information communications technology (ICT) and the digital delivery of public services. While age, gender, and socioeconomic status were associated with use of ICT, these determinants lost their explanatory power when we controlled for attitudes and experiences. We identified three segments that differed in their use of ICT and attitudes toward digital service delivery. As nonuse of ICT often results from the lack of willingness to use it rather than from material or cognitive deficiencies, policy measures for bridging the digital divide should focus on skills and confidence rather than on access or ability.",
"title": ""
},
{
"docid": "2f9b8ee2f7578c7820eced92fb98c696",
"text": "The Tic tac toe is very popular game having a 3 × 3 grid board and 2 players. A Special Symbol (X or O) is assigned to each player to indicate the slot is covered by the respective player. The winner of the game is the player who first cover a horizontal, vertical and diagonal row of the board having only player's own symbols. This paper presents the design model of Tic tac toe Game using Multi-Tape Turing Machine in which both player choose input randomly and result of the game is declared. The computational Model of Tic tac toe is used to describe it in a formal manner.",
"title": ""
},
{
"docid": "047c486e94c217a9ce84cdd57fc647fe",
"text": "There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems is neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we describe foundational concepts of explainability and show how they can be used to classify existing literature. We discuss why current approaches to explanatory methods especially for deep neural networks are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.",
"title": ""
},
{
"docid": "0afe679d5b022cc31a3ce69b967f8d77",
"text": "Cyber-crime has reached unprecedented proportions in this day and age. In addition, the internet has created a world with seemingly no barriers while making a countless number of tools available to the cyber-criminal. In light of this, Computer Forensic Specialists employ state-of-the-art tools and methodologies in the extraction and analysis of data from storage devices used at the digital crime scene. The focus of this paper is to conduct an investigation into some of these Forensic tools eg.Encase®. This investigation will address commonalities across the Forensic tools, their essential differences and ultimately point out what features need to be improved in these tools to allow for effective autopsies of storage devices.",
"title": ""
},
{
"docid": "a380ee9ea523d1a3a09afcf2fb01a70d",
"text": "Back-translation has become a commonly employed heuristic for semi-supervised neural machine translation. The technique is both straightforward to apply and has led to stateof-the-art results. In this work, we offer a principled interpretation of back-translation as approximate inference in a generative model of bitext and show how the standard implementation of back-translation corresponds to a single iteration of the wake-sleep algorithm in our proposed model. Moreover, this interpretation suggests a natural iterative generalization, which we demonstrate leads to further improvement of up to 1.6 BLEU.",
"title": ""
},
{
"docid": "9c8ab4fa4e6951990c771025cd4cc36c",
"text": "This paper presents a methodology for extracting road edge and lane information for smart and intelligent navigation of vehicles. The range information provided by a fast laser range-measuring device is processed by an extended Kalman filter to extract the road edge or curb information. The resultant road edge information is used to aid in the extraction of the lane boundary from a CCD camera image. Hough Transform (HT) is used to extract the candidate lane boundary edges, and the most probable lane boundary is determined using an Active Line Model based on minimizing an appropriate Energy function. Experimental results are presented to demonstrate the effectiveness of the combined Laser and Vision strategy for road-edge and lane boundary detection.",
"title": ""
},
{
"docid": "99d76fafe2a238a061e67e4c5e5bea52",
"text": "F/OSS software has been described by many as a puzzle. In the past five years, it has stimulated the curiosity of scholars in a variety of fields, including economics, law, psychology, anthropology and computer science, so that the number of contributions on the subject has increased exponentially. The purpose of this paper is to provide a sufficiently comprehensive account of these contributions in order to draw some general conclusions on the state of our understanding of the phenomenon and identify directions for future research. The exercise suggests that what is puzzling about F/OSS is not so much the fact that people freely contribute to a good they make available to all, but rather the complexity of its institutional structure and its ability to organizationally evolve over time. JEL Classification: K11, L22, L23, L86, O31, O34.",
"title": ""
},
{
"docid": "9de00d8cf6b3001f976fa49c42875620",
"text": "This paper is a preliminary report on the efficiency of two strategies of data reduction in a data preprocessing stage. In the first experiment, we apply the Count-Min sketching algorithm, while in the second experiment we discretize our data prior to applying the Count-Min algorithm. By conducting a discretization before sketching, the need for the increased number of buckets in sketching is reduced. This preliminary attempt of combining two methods with the same purpose has shown potential. In our experiments, we use sensor data collected to study the environmental fluctuation and its impact on the quality of fresh peaches and nectarines in cold chain.",
"title": ""
},
{
"docid": "37f4da100d31ad1da1ba21168c95d7e9",
"text": "An AC chopper controller with symmetrical Pulse-Width Modulation (PWM) is proposed to achieve better performance for a single-phase induction motor compared to phase-angle control line-commutated voltage controllers and integral-cycle control of thyristors. Forced commutated device IGBT controlled by a microcontroller was used in the AC chopper which has the advantages of simplicity, ability to control large amounts of power and low waveform distortion. In this paper the simulation and hardware models of a simple single phase IGBT An AC controller has been developed which showed good results.",
"title": ""
},
{
"docid": "6aa1c48fcde6674990a03a1a15b5dc0e",
"text": "A compact multiple-input-multiple-output (MIMO) antenna is presented for ultrawideband (UWB) applications with band-notched function. The proposed antenna is composed of two offset microstrip-fed antenna elements with UWB performance. To achieve high isolation and polarization diversity, the antenna elements are placed perpendicular to each other. A parasitic T-shaped strip between the radiating elements is employed as a decoupling structure to further suppress the mutual coupling. In addition, the notched band at 5.5 GHz is realized by etching a pair of L-shaped slits on the ground. The antenna prototype with a compact size of 38.5 × 38.5 mm2 has been fabricated and measured. Experimental results show that the antenna has an impedance bandwidth of 3.08-11.8 GHz with reflection coefficient less than -10 dB, except the rejection band of 5.03-5.97 GHz. Besides, port isolation, envelope correlation coefficient and radiation characteristics are also investigated. The results indicate that the MIMO antenna is suitable for band-notched UWB applications.",
"title": ""
},
{
"docid": "dee5489accb832615f63623bc445212f",
"text": "In this paper a simulation-based scheduling system is discussed which was developed for a semiconductor Backend facility. Apart from the usual dispatching rules it uses heuristic search strategies for the optimization of the operating sequences. In practice hereby multiple objectives have to be considered, e. g. concurrent minimization of mean cycle time, maximization of throughput and due date compliance. Because the simulation model is very complex and simulation time itself is not negligible, we emphasize to increase the convergence of heuristic optimization methods, consequentially reducing the number of necessary iterations. Several realized strategies are presented.",
"title": ""
},
{
"docid": "311f0668e477dda8ef4716d58ff9cdc8",
"text": "A fundamental aspect of controlling humanoid robots lies in the capability to exploit the whole body to perform tasks. This work introduces a novel whole body control library called OpenSoT. OpenSoT is combined with joint impedance control to create a framework that can effectively generate complex whole body motion behaviors for humanoids according to the needs of the interaction level of the tasks. OpenSoT gives an easy way to implement tasks, constraints, bounds and solvers by providing common interfaces. We present the mathematical foundation of the library and validate it on the compliant humanoid robot COMAN to execute multiple motion tasks under a number of constraints. The framework is able to solve hierarchies of tasks of arbitrary complexity in a robust and reliable way.",
"title": ""
},
{
"docid": "246a4ed0d3a94fead44c1e48cc235a63",
"text": "With the introduction of fully convolutional neural networks, deep learning has raised the benchmark for medical image segmentation on both speed and accuracy, and different networks have been proposed for 2D and 3D segmentation with promising results. Nevertheless, most networks only handle relatively small numbers of labels (<10), and there are very limited works on handling highly unbalanced object sizes especially in 3D segmentation. In this paper, we propose a network architecture and the corresponding loss function which improve segmentation of very small structures. By combining skip connections and deep supervision with respect to the computational feasibility of 3D segmentation, we propose a fast converging and computationally efficient network architecture for accurate segmentation. Furthermore, inspired by the concept of focal loss, we propose an exponential logarithmic loss which balances the labels not only by their relative sizes but also by their segmentation difficulties. We achieve an average Dice coefficient of 82% on brain segmentation with 20 labels, with the ratio of the smallest to largest object sizes as 0.14%. Less than 100 epochs are required to reach such accuracy, and segmenting a 128×128×128 volume only takes around 0.4 s.",
"title": ""
},
{
"docid": "8d29b510fb10f8f7dc4563bca36b9e6d",
"text": "Face images that are captured by surveillance cameras usually have a very low resolution, which significantly limits the performance of face recognition systems. In the past, super-resolution techniques have been proposed to increase the resolution by combining information from multiple images. These techniques use super-resolution as a preprocessing step to obtain a high-resolution image that is later passed to a face recognition system. Considering that most state-of-the-art face recognition systems use an initial dimensionality reduction method, we propose to transfer the super-resolution reconstruction from pixel domain to a lower dimensional face space. Such an approach has the advantage of a significant decrease in the computational complexity of the super-resolution reconstruction. The reconstruction algorithm no longer tries to obtain a visually improved high-quality image, but instead constructs the information required by the recognition system directly in the low dimensional domain without any unnecessary overhead. In addition, we show that face-space super-resolution is more robust to registration errors and noise than pixel-domain super-resolution because of the addition of model-based constraints.",
"title": ""
},
{
"docid": "30a8b93f979f913f92fc8a39ae8d25ab",
"text": "Many of the recent Trajectory Optimization algorithms alternate between local approximation of the dynamics and conservative policy update. However, linearly approximating the dynamics in order to derive the new policy can bias the update and prevent convergence to the optimal policy. In this article, we propose a new model-free algorithm that backpropagates a local quadratic time-dependent Q-Function, allowing the derivation of the policy update in closed form. Our policy update ensures exact KL-constraint satisfaction without simplifying assumptions on the system dynamics demonstrating improved performance in comparison to related Trajectory Optimization algorithms linearizing the dynamics.",
"title": ""
},
{
"docid": "dd16da9d44e47fb0f7fe1a25063daeee",
"text": "The excitation and vibration triggered by the long-term operation of railway vehicles inevitably result in defective states of catenary support devices. With the massive construction of high-speed electrified railways, automatic defect detection of diverse and plentiful fasteners on the catenary support device is of great significance for operation safety and cost reduction. Nowadays, the catenary support devices are periodically captured by the cameras mounted on the inspection vehicles during the night, but the inspection still mostly relies on human visual interpretation. To reduce the human involvement, this paper proposes a novel vision-based method that applies the deep convolutional neural networks (DCNNs) in the defect detection of the fasteners. Our system cascades three DCNN-based detection stages in a coarse-to-fine manner, including two detectors to sequentially localize the cantilever joints and their fasteners and a classifier to diagnose the fasteners’ defects. Extensive experiments and comparisons of the defect detection of catenary support devices along the Wuhan–Guangzhou high-speed railway line indicate that the system can achieve a high detection rate with good adaptation and robustness in complex environments.",
"title": ""
},
{
"docid": "d911ccb1bbb761cbfee3e961b8732534",
"text": "This paper presents a study on SIFT (Scale Invariant Feature transform) which is a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. There are various applications of SIFT that includes object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, individual identification of wildlife and match moving.",
"title": ""
},
{
"docid": "4b8af6dfcaaea4246c10ab840ea03608",
"text": "Mobile cloud computing (MCC) as an emerging and prospective computing paradigm, can significantly enhance computation capability and save energy of smart mobile devices (SMDs) by offloading computation-intensive tasks from resource-constrained SMDs onto the resource-rich cloud. However, how to achieve energy-efficient computation offloading under the hard constraint for application completion time remains a challenge issue. To address such a challenge, in this paper, we provide an energy-efficient dynamic offloading and resource scheduling (eDors) policy to reduce energy consumption and shorten application completion time. We first formulate the eDors problem into the energy-efficiency cost (EEC) minimization problem while satisfying the task-dependency requirements and the completion time deadline constraint. To solve the optimization problem, we then propose a distributed eDors algorithm consisting of three subalgorithms of computation offloading selection, clock frequency control and transmission power allocation. More importantly, we find that the computation offloading selection depends on not only the computing workload of a task, but also the maximum completion time of its immediate predecessors and the clock frequency and transmission power of the mobile device. Finally, our experimental results in a real testbed demonstrate that the eDors algorithm can effectively reduce the EEC by optimally adjusting the CPU clock frequency of SMDs based on the dynamic voltage and frequency scaling (DVFS) technique in local computing, and adapting the transmission power for the wireless channel conditions in cloud computing.",
"title": ""
},
{
"docid": "8cbdd4f368ca9fd7dcf7e4f8c9748412",
"text": "We describe an efficient neural network method to automatically learn sentiment lexicons without relying on any manual resources. The method takes inspiration from the NRC method, which gives the best results in SemEval13 by leveraging emoticons in large tweets, using the PMI between words and tweet sentiments to define the sentiment attributes of words. We show that better lexicons can be learned by using them to predict the tweet sentiment labels. By using a very simple neural network, our method is fast and can take advantage of the same data volume as the NRC method. Experiments show that our lexicons give significantly better accuracies on multiple languages compared to the current best methods.",
"title": ""
},
{
"docid": "2e0e53ff34dccd5412faab5b51a3a2f2",
"text": "This study examines print and online daily newspaper journalists’ perceptions of the credibility of Internet news information, as well as the influence of several factors— most notably, professional role conceptions—on those perceptions. Credibility was measured as a multidimensional construct. The results of a survey of U.S. journalists (N = 655) show that Internet news information was viewed as moderately credible overall and that online newspaper journalists rated Internet news information as significantly more credible than did print newspaper journalists. Hierarchical regression analyses reveal that Internet reliance was a strong positive predictor of credibility. Two professional role conceptions also emerged as significant predictors. The populist mobilizer role conception was a significant positive predictor of online news credibility, while the adversarial role conception was a significant negative predictor. Demographic characteristics of print and online daily newspaper journalists did not influence their perceptions of online news credibility.",
"title": ""
}
] | scidocsrr |
2f2cfa7b5b5b9381ebd764bc0abe0c10 | E-Counterfeit: A Mobile-Server Platform for Document Counterfeit Detection | [
{
"docid": "097879c593aa68602564c176b806a74b",
"text": "We study the recognition of surfaces made from different materials such as concrete, rug, marble, or leather on the basis of their textural appearance. Such natural textures arise from spatial variation of two surface attributes: (1) reflectance and (2) surface normal. In this paper, we provide a unified model to address both these aspects of natural texture. The main idea is to construct a vocabulary of prototype tiny surface patches with associated local geometric and photometric properties. We call these 3D textons. Examples might be ridges, grooves, spots or stripes or combinations thereof. Associated with each texton is an appearance vector, which characterizes the local irradiance distribution, represented as a set of linear Gaussian derivative filter outputs, under different lighting and viewing conditions. Given a large collection of images of different materials, a clustering approach is used to acquire a small (on the order of 100) 3D texton vocabulary. Given a few (1 to 4) images of any material, it can be characterized using these textons. We demonstrate the application of this representation for recognition of the material viewed under novel lighting and viewing conditions. We also illustrate how the 3D texton model can be used to predict the appearance of materials under novel conditions.",
"title": ""
},
{
"docid": "d0c75242aad1230e168122930b078671",
"text": "Combinatorial graph cut algorithms have been successfully applied to a wide range of problems in vision and graphics. This paper focusses on possibly the simplest application of graph-cuts: segmentation of objects in image data. Despite its simplicity, this application epitomizes the best features of combinatorial graph cuts methods in vision: global optima, practical efficiency, numerical robustness, ability to fuse a wide range of visual cues and constraints, unrestricted topological properties of segments, and applicability to N-D problems. Graph cuts based approaches to object extraction have also been shown to have interesting connections with earlier segmentation methods such as snakes, geodesic active contours, and level-sets. The segmentation energies optimized by graph cuts combine boundary regularization with region-based properties in the same fashion as Mumford-Shah style functionals. We present motivation and detailed technical description of the basic combinatorial optimization framework for image segmentation via s/t graph cuts. After the general concept of using binary graph cut algorithms for object segmentation was first proposed and tested in Boykov and Jolly (2001), this idea was widely studied in computer vision and graphics communities. We provide links to a large number of known extensions based on iterative parameter re-estimation and learning, multi-scale or hierarchical approaches, narrow bands, and other techniques for demanding photo, video, and medical applications.",
"title": ""
}
] | [
{
"docid": "88804f285f4d608b81a1cd741dbf2b7e",
"text": "Predicting ad click-through rates (CTR) is a massive-scale learning problem that is central to the multi-billion dollar online advertising industry. We present a selection of case studies and topics drawn from recent experiments in the setting of a deployed CTR prediction system. These include improvements in the context of traditional supervised learning based on an FTRL-Proximal online learning algorithm (which has excellent sparsity and convergence properties) and the use of per-coordinate learning rates.\n We also explore some of the challenges that arise in a real-world system that may appear at first to be outside the domain of traditional machine learning research. These include useful tricks for memory savings, methods for assessing and visualizing performance, practical methods for providing confidence estimates for predicted probabilities, calibration methods, and methods for automated management of features. Finally, we also detail several directions that did not turn out to be beneficial for us, despite promising results elsewhere in the literature. The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of challenges that appear when applying traditional machine learning methods in a complex dynamic system.",
"title": ""
},
{
"docid": "c32af7ce60d3d6eaa09a2876ba5469d3",
"text": "ID: 2423 Y. M. S. Al-Wesabi, Avishek Choudhury, Daehan Won Binghamton University, USA",
"title": ""
},
{
"docid": "13b9fd37b1cf4f15def39175157e12c5",
"text": "Although motorcycle safety helmets are known for preventing head injuries, in many countries, the use of motorcycle helmets is low due to the lack of police power to enforcing helmet laws. This paper presents a system which automatically detect motorcycle riders and determine that they are wearing safety helmets or not. The system extracts moving objects and classifies them as a motorcycle or other moving objects based on features extracted from their region properties using K-Nearest Neighbor (KNN) classifier. The heads of the riders on the recognized motorcycle are then counted and segmented based on projection profiling. The system classifies the head as wearing a helmet or not using KNN based on features derived from 4 sections of segmented head region. Experiment results show an average correct detection rate for near lane, far lane, and both lanes as 84%, 68%, and 74%, respectively.",
"title": ""
},
{
"docid": "7889bd099150ad799461bd0da2896428",
"text": "A systematic method to improve the quality ( ) factor of RF integrated inductors is presented in this paper. The proposed method is based on the layout optimization to minimize the series resistance of the inductor coil, taking into account both ohmic losses, due to conduction currents, and magnetically induced losses, due to Eddy currents. The technique is particularly useful when applied to inductors in which the fabrication process includes integration substrate removal. However, it is also applicable to inductors on low-loss substrates. The method optimizes the width of the metal strip for each turn of the inductor coil, leading to a variable strip-width layout. The optimization procedure has been successfully applied to the design of square spiral inductors in a silicon-based multichip-module technology, complemented with silicon micromachining postprocessing. The obtained experimental results corroborate the validity of the proposed method. A factor of about 17 have been obtained for a 35-nH inductor at 1.5 GHz, with values higher than 40 predicted for a 20-nH inductor working at 3.5 GHz. The latter is up to a 60% better than the best results for a single strip-width inductor working at the same frequency.",
"title": ""
},
{
"docid": "311bccf1c8bf6cbb2c2dbef22a709e8c",
"text": "We present a new video-assisted minimally invasive technique for the treatment of pilonidal disease (E.P.Si.T: endoscopic pilonidal sinus treatment). Between March and November 2012, we operated on 11 patients suffering from pilonidal disease. Surgery is performed under local or spinal anesthesia using the Meinero fistuloscope. The external opening is excised and the fistuloscope is introduced through the small hole. Anatomy is identified, hair and debris are removed and the entire area is ablated under direct vision. There were no significant complications recorded in the patient cohort. The pain experienced during the postoperative period was minimal. At 1 month postoperatively, the external opening(s) were closed in all patients and there were no cases of recurrence at a median follow-up of 6 months. All patients were admitted and discharged on the same day as surgery and commenced work again after a mean time period of 4 days. Aesthetic results were excellent. The key feature of the E.P.Si.T. technique is direct vision, allowing a good definition of the involved area, removal of debris and cauterization of the inflamed tissue.",
"title": ""
},
{
"docid": "23a5152da5142048332c09164bade40f",
"text": "Knowledge bases extracted automatically from the Web present new opportunities for data mining and exploration. Given a large, heterogeneous set of extracted relations, new tools are needed for searching the knowledge and uncovering relationships of interest. We present WikiTables, a Web application that enables users to interactively explore tabular knowledge extracted from Wikipedia.\n In experiments, we show that WikiTables substantially outperforms baselines on the novel task of automatically joining together disparate tables to uncover \"interesting\" relationships between table columns. We find that a \"Semantic Relatedness\" measure that leverages the Wikipedia link structure accounts for a majority of this improvement. Further, on the task of keyword search for tables, we show that WikiTables performs comparably to Google Fusion Tables despite using an order of magnitude fewer tables. Our work also includes the release of a number of public resources, including over 15 million tuples of extracted tabular data, manually annotated evaluation sets, and public APIs.",
"title": ""
},
{
"docid": "5b6a73103e7310de86c37185c729b8d9",
"text": "Motion segmentation is currently an active area of research in computer Vision. The task of comparing different methods of motion segmentation is complicated by the fact that researchers may use subtly different definitions of the problem. Questions such as ”Which objects are moving?”, ”What is background?”, and ”How can we use motion of the camera to segment objects, whether they are static or moving?” are clearly related to each other, but lead to different algorithms, and imply different versions of the ground truth. This report has two goals. The first is to offer a precise definition of motion segmentation so that the intent of an algorithm is as welldefined as possible. The second is to report on new versions of three previously existing data sets that are compatible with this definition. We hope that this more detailed definition, and the three data sets that go with it, will allow more meaningful comparisons of certain motion segmentation methods.",
"title": ""
},
{
"docid": "e54bf7ae1235031c3d62f3206d62a89a",
"text": "The purpose of the study is to explore the factors influencing customer buying decision through Intern et shopping. Several factors such as information quali ty, firm’s reputation, perceived ease of payment, s ites design, benefit of online shopping, and trust that influence customer decision to purchase from e-comm erce sites were analyzed. Factors such as those mention d above, which are commonly considered influencing purhasing decision through online shopping in other countries were hypothesized to be true in the case of Indonesia. A random sample comprised of 171 Indone sia people who have been buying goods/services through e-commerce sites at least once, were collec ted via online questionnaires. To test the hypothes is, the data were examined using Structural Equations Model ing (SEM) which is basically a combination of Confirmatory Factor Analysis (CFA), and linear Regr ession. The results suggest that information qualit y, perceived ease of payment, benefits of online shopp ing, and trust affect online purchase decision significantly. Close attention need to be placed on these factors to increase online sales. The most significant influence comes from trust. Indonesian people still lack of trust toward online commerce, so it is very important to gain customer trust to increase s al s. E-commerce’s business owners are encouraged t o develop sites that can meet the expectation of pote ntial customer, provides ease of payment system, pr ovide detailed and actual information and responsible for customer personal information and transaction reco rds. This paper outlined the key factors influencing onl ine shopping intention in Indonesia and pioneered t he building of an integrated research framework to und erstand how consumers make purchase decision toward online shopping; a relatively new way of shopping i the country.",
"title": ""
},
{
"docid": "0bbb23b9df622f451f7e7f2fd136d9e0",
"text": "The Janus kinase (JAK)-signal transducer of activators of transcription (STAT) pathway is now recognized as an evolutionarily conserved signaling pathway employed by diverse cytokines, interferons, growth factors, and related molecules. This pathway provides an elegant and remarkably straightforward mechanism whereby extracellular factors control gene expression. It thus serves as a fundamental paradigm for how cells sense environmental cues and interpret these signals to regulate cell growth and differentiation. Genetic mutations and polymorphisms are functionally relevant to a variety of human diseases, especially cancer and immune-related conditions. The clinical relevance of the pathway has been confirmed by the emergence of a new class of therapeutics that targets JAKs.",
"title": ""
},
{
"docid": "307dac4f0cc964a539160780abb1c123",
"text": "One of the main current applications of intelligent systems is recommender systems (RS). RS can help users to find relevant items in huge information spaces in a personalized way. Several techniques have been investigated for the development of RS. One of them is evolutionary computational (EC) techniques, which is an emerging trend with various application areas. The increasing interest in using EC for web personalization, information retrieval and RS fostered the publication of survey papers on the subject. However, these surveys have analyzed only a small number of publications, around ten. This study provides a comprehensive review of more than 65 research publications focusing on five aspects we consider relevant for such: the recommendation technique used, the datasets and the evaluation methods adopted in their experimental parts, the baselines employed in the experimental comparison of proposed approaches and the reproducibility of the reported experiments. At the end of this review, we discuss negative and positive aspects of these papers, as well as point out opportunities, challenges and possible future research directions. To the best of our knowledge, this review is the most comprehensive review of various approaches using EC in RS. Thus, we believe this review will be a relevant material for researchers interested in EC and RS.",
"title": ""
},
{
"docid": "4cd0d1040e104b4e317e22760b2ced71",
"text": "Color mapping is an important technique used in visualization to build visual representations of data and information. With output devices such as computer displays providing a large number of colors, developers sometimes tend to build their visualization to be visually appealing, while forgetting the main goal of clear depiction of the underlying data. Visualization researchers have profited from findings in adjoining areas such as human vision and psychophysics which, combined with their own experience, enabled them to establish guidelines that might help practitioners to select appropriate color scales and adjust the associated color maps, for particular applications. This survey presents an overview on the subject of color scales by focusing on important guidelines, experimental research work and tools proposed to help non-expert users. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "105913d67437afafa6147b7c67e8d808",
"text": "This paper proposes to develop an electronic device for obstacle detection in the path of visually impaired people. This device assists a user to walk without colliding with any obstacles in their path. It is a wearable device in the form of a waist belt that has ultrasonic sensors and raspberry pi installed on it. This device detects obstacles around the user up to 500cm in three directions i.e. front, left and right using a network of ultrasonic sensors. These ultrasonic sensors are connected to raspberry pi that receives data signals from these sensors for further data processing. The algorithm running in raspberry pi computes the distance from the obstacle and converts it into text message, which is then converted into speech and conveyed to the user through earphones/speakers. This design is benefitial in terms of it’s portability, low-cost, low power consumption and the fact that neither the user nor the device requires initial training. Keywords—embedded systems; raspberry pi; speech feedback; ultrasonic sensor; visually impaired;",
"title": ""
},
{
"docid": "bee01b9bd3beb41b0ca963c05378a93f",
"text": "Calibration between color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. Since the lengths of adjacent sides are known, we can estimate the vertices of the board as a meeting point of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results.",
"title": ""
},
{
"docid": "31c1427c3682a76528b1cb42036db7c1",
"text": "Fifteen years ago, a panel of experts representing the full spectrum of cardiovascular disease (CVD) research and practice assembled at a workshop to examine the state of knowledge about CVD. The leaders of the workshop generated a hypothesis that framed CVD as a chain of events, initiated by a myriad of related and unrelated risk factors and progressing through numerous physiological pathways and processes to the development of end-stage heart disease (Figure 1).1 They further hypothesized that intervention anywhere along the chain of events leading to CVD could disrupt the pathophysiological process and confer cardioprotection. The workshop participants endorsed this paradigm but also identified the unresolved issues relating to the concept of a CVD continuum. There was limited availability of clinical trial data and pathobiological evidence at that time, and the experts recognized that critical studies at both the mechanistic level and the clinical level were needed to validate the concept of a chain of events leading to end-stage CVD. In the intervening 15 years, new evidence for underlying pathophysiological mechanisms, the development of novel therapeutic agents, and the release of additional landmark clinical trial data have confirmed the concept of a CVD continuum and reinforced the notion that intervention at any point along this chain can modify CVD progression. In addition, the accumulated evidence indicates that the events leading to disease progression overlap and intertwine and do not always occur as a sequence of discrete, tandem incidents. Furthermore, although the original concept focused on risk factors for coronary artery disease (CAD) and its sequelae, the CVD continuum has expanded to include other areas such as cerebrovascular disease, peripheral vascular disease, and renal disease. Since its conception 15 years ago, the CVD continuum has become much in need of an update. Accordingly, this 2-part article will present a critical and comprehensive update of the current evidence for a CVD continuum based on the results of pathophysiological studies and the outcome of a broad range of clinical trials that have been performed in the past 15 years. It is not the intent of the article to include a comprehensive listing of all trials performed as part of the CVD continuum; instead, we have sought to include only those trials that have had the greatest impact. Part I briefly reviews the current understanding of the pathophysiology of CVD and discusses clinical trial data from risk factors for disease through stable CAD. Part II continues the review of clinical trial data beginning with acute coronary syndromes and continuing through extension of the CVD continuum to stroke and renal disease. The article concludes with a discussion of areas in which future research might further clarify our understanding of the CVD continuum.",
"title": ""
},
{
"docid": "458633abcbb030b9e58e432d5b539950",
"text": "In many computer vision tasks, we expect a particular behavior of the output with respect to rotations of the input image. If this relationship is explicitly encoded, instead of treated as any other variation, the complexity of the problem is decreased, leading to a reduction in the size of the required model. In this paper, we propose the Rotation Equivariant Vector Field Networks (RotEqNet), a Convolutional Neural Network (CNN) architecture encoding rotation equivariance, invariance and covariance. Each convolutional filter is applied at multiple orientations and returns a vector field representing magnitude and angle of the highest scoring orientation at every spatial location. We develop a modified convolution operator relying on this representation to obtain deep architectures. We test RotEqNet on several problems requiring different responses with respect to the inputs’ rotation: image classification, biomedical image segmentation, orientation estimation and patch matching. In all cases, we show that RotEqNet offers extremely compact models in terms of number of parameters and provides results in line to those of networks orders of magnitude larger.",
"title": ""
},
{
"docid": "1c1775a64703f7276e4843b8afc26117",
"text": "This paper describes a computer vision based system for real-time robust traffic sign detection, tracking, and recognition. Such a framework is of major interest for driver assistance in an intelligent automotive cockpit environment. The proposed approach consists of two components. First, signs are detected using a set of Haar wavelet features obtained from AdaBoost training. Compared to previously published approaches, our solution offers a generic, joint modeling of color and shape information without the need of tuning free parameters. Once detected, objects are efficiently tracked within a temporal information propagation framework. Second, classification is performed using Bayesian generative modeling. Making use of the tracking information, hypotheses are fused over multiple frames. Experiments show high detection and recognition accuracy and a frame rate of approximately 10 frames per second on a standard PC.",
"title": ""
},
{
"docid": "b6a600ea1c277bc3bf8f2452b8aef3f1",
"text": "Fusion of data from multiple sensors can enable robust navigation in varied environments. However, for optimal performance, the sensors must calibrated relative to one another. Full sensor-to-sensor calibration is a spatiotemporal problem: we require an accurate estimate of the relative timing of measurements for each pair of sensors, in addition to the 6-DOF sensor-to-sensor transform. In this paper, we examine the problem of determining the time delays between multiple proprioceptive and exteroceptive sensor data streams. The primary difficultly is that the correspondences between measurements from different sensors are unknown, and hence the delays cannot be computed directly. We instead formulate temporal calibration as a registration task. Our algorithm operates by aligning curves in a three-dimensional orientation space, and, as such, can be considered as a variant of Iterative Closest Point (ICP). We present results from simulation studies and from experiments with a PR2 robot, which demonstrate accurate calibration of the time delays between measurements from multiple, heterogeneous sensors.",
"title": ""
},
{
"docid": "120e36cc162f4ce602da810c80c18c7d",
"text": "We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.",
"title": ""
},
{
"docid": "032f444d4844c4fa9a3e948cbbc0818a",
"text": "This paper presents a microstrip dual-band bandpass filter (BPF) based on cross-shaped resonator and spurline. It is shown that spurlines added into input/output ports of a cross-shaped resonator generate an additional notch band. Using even and odd-mode analysis the proposed structure is realized and designed. The proposed bandpass filter has dual passband from 1.9 GHz to 2.4 GHz and 9.5 GHz to 11.5 GHz.",
"title": ""
}
] | scidocsrr |
bb2272ae45e3bf89d557a34ffb542d4b | Challenges of Sentiment Analysis for Dynamic Events | [
{
"docid": "f13d3c01729d9f3dcb2b220a0fcce902",
"text": "User generated content on Twitter (produced at an enormous rate of 340 million tweets per day) provides a rich source for gleaning people's emotions, which is necessary for deeper understanding of people's behaviors and actions. Extant studies on emotion identification lack comprehensive coverage of \"emotional situations\" because they use relatively small training datasets. To overcome this bottleneck, we have automatically created a large emotion-labeled dataset (of about 2.5 million tweets) by harnessing emotion-related hash tags available in the tweets. We have applied two different machine learning algorithms for emotion identification, to study the effectiveness of various feature combinations as well as the effect of the size of the training data on the emotion identification task. Our experiments demonstrate that a combination of unigrams, big rams, sentiment/emotion-bearing words, and parts-of-speech information is most effective for gleaning emotions. The highest accuracy (65.57%) is achieved with a training data containing about 2 million tweets.",
"title": ""
}
] | [
{
"docid": "c630b600a0b03e9e3ede1c0132f80264",
"text": "68 AI MAGAZINE Adaptive graphical user interfaces (GUIs) automatically tailor the presentation of functionality to better fit an individual user’s tasks, usage patterns, and abilities. A familiar example of an adaptive interface is the Windows XP start menu, where a small set of applications from the “All Programs” submenu is replicated in the top level of the “Start” menu for easier access, saving users from navigating through multiple levels of the menu hierarchy (figure 1). The potential of adaptive interfaces to reduce visual search time, cognitive load, and motor movement is appealing, and when the adaptation is successful an adaptive interface can be faster and preferred in comparison to a nonadaptive counterpart (for example, Gajos et al. [2006], Greenberg and Witten [1985]). In practice, however, many challenges exist, and, thus far, evaluation results of adaptive interfaces have been mixed. For an adaptive interface to be successful, the benefits of correct adaptations must outweigh the costs, or usability side effects, of incorrect adaptations. Often, an adaptive mechanism designed to improve one aspect of the interaction, typically motor movement or visual search, inadvertently increases effort along another dimension, such as cognitive or perceptual load. The result is that many adaptive designs that were expected to confer a benefit along one of these dimensions have failed in practice. For example, a menu that tracks how frequently each item is used and adaptively reorders itself so that items appear in order from most to least frequently accessed should improve motor performance, but in reality this design can slow users down and reduce satisfaction because of the constantly changing layout (Mitchell and Schneiderman [1989]; for example, figure 2b). Commonly cited issues with adaptive interfaces include the lack of control the user has over the adaptive process and the difficulty that users may have in predicting what the system’s response will be to a user action (Höök 2000). User evaluation of adaptive GUIs is more complex than eval-",
"title": ""
},
{
"docid": "49d533bf41f18bc96c404bb9a8bd12ae",
"text": "A back-cavity shielded bow-tie antenna system working at 900MHz center frequency for ground-coupled GPR application is investigated numerically and experimentally in this paper. Bow-tie geometrical structure is modified for a compact design and back-cavity assembly. A layer of absorber is employed to overcome the back reflection by omni-directional radiation pattern of a bow-tie antenna in H-plane, thus increasing the SNR and improve the isolation between T and R antennas as well. The designed antenna system is applied to a prototype GPR system. Tested data shows that the back-cavity shielded antenna works satisfactorily in the 900MHz GPR system.",
"title": ""
},
{
"docid": "2b288883556821fd61576c7460a81c29",
"text": "Intensive care units (ICUs) are major sites for medical errors and adverse events. Suboptimal outcomes reflect a widespread failure to implement care delivery systems that successfully address the complexity of modern ICUs. Whereas other industries have used information technologies to fundamentally improve operating efficiency and enhance safety, medicine has been slow to implement such strategies. Most ICUs do not even track performance; fewer still have the capability to examine clinical data and use this information to guide quality improvement initiatives. This article describes a technology-enabled care model (electronic ICU, or eICU) that represents a new paradigm for delivery of critical care services. A major component of the model is the use of telemedicine to leverage clinical expertise and facilitate a round-the-clock proactive care by intensivist-led teams of ICU caregivers. Novel data presentation formats, computerized decision support, and smart alarms are used to enhance efficiency, increase effectiveness, and standardize clinical and operating processes. In addition, the technology infrastructure facilitates performance improvement by providing an automated means to measure outcomes, track performance, and monitor resource utilization. The program is designed to support the multidisciplinary intensivist-led team model and incorporates comprehensive ICU re-engineering efforts to change practice behavior. Although this model can transform ICUs into centers of excellence, success will hinge on hospitals accepting the underlying value proposition and physicians being willing to change established practices.",
"title": ""
},
{
"docid": "6875d41e412d71f45d6d4ea43697ed80",
"text": "Context Emergency department visits by older adults are often due to adverse drug events, but the proportion of these visits that are the result of drugs designated as inappropriate for use in this population is unknown. Contribution Analyses of a national surveillance study of adverse drug events and a national outpatient survey estimate that Americans age 65 years or older have more than 175000 emergency department visits for adverse drug events yearly. Three commonly prescribed drugs accounted for more than one third of visits: warfarin, insulin, and digoxin. Caution The study was limited to adverse events in the emergency department. Implication Strategies to decrease adverse drug events among older adults should focus on warfarin, insulin, and digoxin. The Editors Adverse drug events cause clinically significant morbidity and mortality and are associated with large economic costs (15). They are common in older adults, regardless of whether they live in the community, reside in long-term care facilities, or are hospitalized (59). Most physicians recognize that prescribing medications to older patients requires special considerations, but nongeriatricians are typically unfamiliar with the most commonly used measure of medication appropriateness for older patients: the Beers criteria (1012). The Beers criteria are a consensus-based list of medications identified as potentially inappropriate for use in older adults. The criteria were introduced in 1991 to help researchers evaluate prescription quality in nursing homes (10). The Beers criteria were updated in 1997 and 2003 to apply to all persons age 65 years or older, to include new medications judged to be ineffective or to pose unnecessarily high risk, and to rate the severity of adverse outcomes (11, 12). Prescription rates of Beers criteria medications have become a widely used measure of quality of care for older adults in research studies in the United States and elsewhere (1326). The application of the Beers criteria as a measure of health care quality and safety has expanded beyond research studies. The Centers for Medicare & Medicaid Services incorporated the Beers criteria into federal safety regulations for long-term care facilities in 1999 (27). The prescription rate of potentially inappropriate medications is one of the few medication safety measures in the National Healthcare Quality Report (28) and has been introduced as a Health Plan and Employer Data and Information Set quality measure for managed care plans (29). Despite widespread adoption of the Beers criteria to measure prescription quality and safety, as well as proposals to apply these measures to additional settings, such as medication therapy management services under Medicare Part D (30), population-based data on the effect of adverse events from potentially inappropriate medications are sparse and do not compare the risks for adverse events from Beers criteria medications against those from other medications (31, 32). Adverse drug events that lead to emergency department visits are clinically significant adverse events (5) and result in increased health care resource utilization and expense (6). We used nationally representative public health surveillance data to estimate the number of emergency department visits for adverse drug events involving Beers criteria medications and compared the number with that for adverse drug events involving other medications. 
We also estimated the frequency of outpatient prescription of Beers criteria medications and other medications to calculate and compare the risks for emergency department visits for adverse drug events per outpatient prescription visit. Methods Data Sources National estimates of emergency department visits for adverse drug events were based on data from the 58 nonpediatric hospitals participating in the National Electronic Injury Surveillance SystemCooperative Adverse Drug Event Surveillance (NEISS-CADES) System, a nationally representative, size-stratified probability sample of hospitals (excluding psychiatric and penal institutions) in the United States and its territories with a minimum of 6 beds and a 24-hour emergency department (Figure 1) (3335). As described elsewhere (5, 34), trained coders at each hospital reviewed clinical records of every emergency department visit to report physician-diagnosed adverse drug events. Coders reported clinical diagnosis, medication implicated in the adverse event, and narrative descriptions of preceding circumstances. Data collection, management, quality assurance, and analyses were determined to be public health surveillance activities by the Centers for Disease Control and Prevention (CDC) and U.S. Food and Drug Administration human subjects oversight bodies and, therefore, did not require human subject review or institutional review board approval. Figure 1. Data sources and descriptions. NAMCS= National Ambulatory Medical Care Survey (36); NEISS-CADES= National Electronic Injury Surveillance SystemCooperative Adverse Drug Event Surveillance System (5, 3335); NHAMCS = National Hospital Ambulatory Medical Care Survey (37). *The NEISS-CADES is a 63-hospital national probability sample, but 5 pediatric hospitals were not included in this analysis. National estimates of outpatient prescription were based on 2 cross-sectional surveys, the National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS), designed to provide information on outpatient office visits and visits to hospital outpatient clinics and emergency departments (Figure 1) (36, 37). These surveys have been previously used to document the prescription rates of inappropriate medications (17, 3840). Definition of Potentially Inappropriate Medications The most recent iteration of the Beers criteria (12) categorizes 41 medications or medication classes as potentially inappropriate under any circumstances (always potentially inappropriate) and 7 medications or medication classes as potentially inappropriate when used in certain doses, frequencies, or durations (potentially inappropriate in certain circumstances). For example, ferrous sulfate is considered to be potentially inappropriate only when used at dosages greater than 325 mg/d, but not potentially inappropriate if used at lower dosages. For this investigation, we included the Beers criteria medications listed in Table 1. Because medication dose, duration, and frequency were not always available in NEISS-CADES and are not reported in NAMCS and NHAMCS, we included medications regardless of dose, duration, or frequency of use. We excluded 3 medications considered to be potentially inappropriate when used in specific formulations (short-acting nifedipine, short-acting oxybutynin, and desiccated thyroid) because NEISS-CADES, NAMCS, and NHAMCS do not reliably identify these formulations. Table 1. 
Potentially Inappropriate Medications for Individuals Age 65 Years or Older The updated Beers criteria identify additional medications as potentially inappropriate if they are prescribed to patients who have certain preexisting conditions. We did not include these medications because they have rarely been used in previous studies or safety measures and NEISS-CADES, NAMCS, and NHAMCS do not reliably identify preexisting conditions. Identification of Emergency Department Visits for Adverse Drug Events We defined an adverse drug event case as an incident emergency department visit by a patient age 65 years or older, from 1 January 2004 to 31 December 2005, for a condition that the treating physician explicitly attributed to the use of a drug or for a drug-specific effect (5). Adverse events include allergic reactions (immunologically mediated effects) (41), adverse effects (undesirable pharmacologic or idiosyncratic effects at recommended doses) (41), unintentional overdoses (toxic effects linked to excess dose or impaired excretion) (41), or secondary effects (such as falls and choking). We excluded cases of intentional self-harm, therapeutic failures, therapy withdrawal, drug abuse, adverse drug events that occurred as a result of medical treatment received during the emergency department visit, and follow-up visits for a previously diagnosed adverse drug event. We defined an adverse drug event from Beers criteria medications as an emergency department visit in which a medication from Table 1 was implicated. Identification of Outpatient Prescription Visits We used the NAMCS and NHAMCS public use data files for the most recent year available (2004) to identify outpatient prescription visits. We defined an outpatient prescription visit as any outpatient office, hospital clinic, or emergency department visit at which treatment with a medication of interest was either started or continued. We identified medications by generic name for those with a single active ingredient and by individual active ingredients for combination products. We categorized visits with at least 1 medication identified in Table 1 as involving Beers criteria medications. Statistical Analysis Each NEISS-CADES, NAMCS, and NHAMCS case is assigned a sample weight on the basis of the inverse probability of selection (33, 42–44). We calculated national estimates of emergency department visits and prescription visits by summing the corresponding sample weights, and we calculated 95% CIs by using the SURVEYMEANS procedure in SAS, version 9.1 (SAS Institute, Cary, North Carolina), to account for the sampling strata and clustering by site. To obtain annual estimates of visits for adverse events, we divided NEISS-CADES estimates for 2004–2005 and corresponding 95% CI end points by 2. Estimates based on small numbers of cases (<20 cases for NEISS-CADES and <30 cases for NAMCS and NHAMCS) or with a coefficient of variation greater than 30% are considered statistically unstable and are identified in the tables. To estimate the risk for adverse events relative to outpatient prescription",
"title": ""
},
{
"docid": "0a627cbe37cbe1da8f3edf7fff354314",
"text": "Robust and reliable vehicle detection from images acquired by a moving vehicle (i.e., on-road vehicle detection) is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this work is on the issues of feature extraction and classification for rear-view vehicle detection. Specifically, by treating the problem of vehicle detection as a two-class classification problem, we have investigated several different feature extraction methods such as principal component analysis, wavelets, and Gabor filters. To evaluate the extracted features, we have experimented with two popular classifiers, neural networks and support vector machines (SVMs). Based on our evaluation results, we have developed an on-board real-time monocular vehicle detection system that is capable of acquiring grey-scale images, using Ford's proprietary low-light camera, achieving an average detection rate of 10 Hz. Our vehicle detection algorithm consists of two main steps: a multiscale driven hypothesis generation step and an appearance-based hypothesis verification step. During the hypothesis generation step, image locations where vehicles might be present are extracted. This step uses multiscale techniques not only to speed up detection, but also to improve system robustness. The appearance-based hypothesis verification step verifies the hypotheses using Gabor features and SVMs. The system has been tested in Ford's concept vehicle under different traffic conditions (e.g., structured highway, complex urban streets, and varying weather conditions), illustrating good performance.",
"title": ""
},
{
"docid": "ca550339bd91ba8e431f1e82fbaf5a99",
"text": "In several previous papers and particularly in [3] we presented the use of logic equations and their solution using ternary vectors and set-theoretic considerations as well as binary codings and bit-parallel vector operations. In this paper we introduce a new and elegant model for the game of Sudoku that uses the same approach and solves this problem without any search always finding all solutions (including no solutions or several solutions). It can also be extended to larger Sudokus and to a whole class of similar discrete problems, such as Queens’ problems on the chessboard, graph-coloring problems etc. Disadvantages of known SAT approaches for such problems were overcome by our new method.",
"title": ""
},
{
"docid": "7699f4fa25a47fca0de320b8bbe6ff00",
"text": "Homeland Security (HS) is a growing field of study in the U.S. today, generally covering risk management, terrorism studies, policy development, and other topics related to the broad field. Information security threats to both the public and private sectors are growing in intensity, frequency, and severity, and are a very real threat to the security of the nation. While there are many models for information security education at all levels of higher education, these programs are invariably offered as a technical course of study, these curricula are generally not well suited to HS students. As a result, information systems and cyber security principles are under represented in the typical HS program. The authors propose a course of study in cyber security designed to capitalize on the intellectual strengths of students in this discipline and that are consistent with the broad suite of professional needs in this discipline.",
"title": ""
},
{
"docid": "b7babfd34b47420f85aae434ce72b84d",
"text": "The use of Building Information Modeling (BIM) in the construction industry is on the rise. It is widely acknowledged that adoption of BIM would cause a seismic shift in the business processes within the construction industry and related fields. Cost estimation is a key aspect in the workflow of a construction project. Processes within estimating, such as quantity survey and pricing, may be automated by using existing BIM software in combination with existing estimating software. The adoption of this combination of technologies is not as widely seen as might be expected. Researchers conducted a survey of construction practitioners to determine the extent to which estimating processes were automated in the conjunction industry, with the data from a BIM model. Survey participants were asked questions about how BIM was used within their organization and how it was used in the various tasks involved in construction cost estimating. The results of the survey data revealed that while most contractors were using BIM, only a small minority were using it to automate estimating processes. Most organizations reported that employees skilled in BIM did not have the estimating experience to produce working estimates from BIM models and vice-versa. The results of the survey are presented and analyzed to determine conditions that would improve the adoption of these new business processes in the construction estimating field.",
"title": ""
},
{
"docid": "70950eef662a1bcbc899c9d065d8cd1f",
"text": "We present a novel approach to efficiently learn a label tree for large scale classification with many classes. The key contribution of the approach is a technique to simultaneously determine the structure of the tree and learn the classifiers for each node in the tree. This approach also allows fine grained control over the efficiency vs accuracy trade-off in designing a label tree, leading to more balanced trees. Experiments are performed on large scale image classification with 10184 classes and 9 million images. We demonstrate significant improvements in test accuracy and efficiency with less training time and more balanced trees compared to the previous state of the art by Bengio et al.",
"title": ""
},
{
"docid": "ff4c069ab63ced5979cf6718eec30654",
"text": "Dowser is a ‘guided’ fuzzer that combines taint tracking, program analysis and symbolic execution to find buffer overflow and underflow vulnerabilities buried deep in a program’s logic. The key idea is that analysis of a program lets us pinpoint the right areas in the program code to probe and the appropriate inputs to do so. Intuitively, for typical buffer overflows, we need consider only the code that accesses an array in a loop, rather than all possible instructions in the program. After finding all such candidate sets of instructions, we rank them according to an estimation of how likely they are to contain interesting vulnerabilities. We then subject the most promising sets to further testing. Specifically, we first use taint analysis to determine which input bytes influence the array index and then execute the program symbolically, making only this set of inputs symbolic. By constantly steering the symbolic execution along branch outcomes most likely to lead to overflows, we were able to detect deep bugs in real programs (like the nginx webserver, the inspircd IRC server, and the ffmpeg videoplayer). Two of the bugs we found were previously undocumented buffer overflows in ffmpeg and the poppler PDF rendering library.",
"title": ""
},
{
"docid": "d242ef5126dfb2db12b54c15be61367e",
"text": "RankNet is one of the widely adopted ranking models for web search tasks. However, adapting a generic RankNet for personalized search is little studied. In this paper, we first continue-trained a variety of RankNets with different number of hidden layers and network structures over a previously trained global RankNet model, and observed that a deep neural network with five hidden layers gives the best performance. To further improve the performance of adaptation, we propose a set of novel methods categorized into two groups. In the first group, three methods are proposed to properly assess the usefulness of each adaptation instance and only leverage the most informative instances to adapt a user-specific RankNet model. These assessments are based on KL-divergence, click entropy or a heuristic to ignore top clicks in adaptation queries. In the second group, two methods are proposed to regularize the training of the neural network in RankNet: one of these methods regularize the error back-propagation via a truncated gradient approach, while the other method limits the depth of the back propagation when adapting the neural network. We empirically evaluate our approaches using a large-scale real-world data set. Experimental results exhibit that our methods all give significant improvements over a strong baseline ranking system, and the truncated gradient approach gives the best performance, significantly better than all others.",
"title": ""
},
{
"docid": "4e86bc8fc24b6ada4c8eaf6d50e32f26",
"text": "We formulate dependency parsing as a graphical model with the novel ingredient of global constraints. We show how to apply loopy belief propagation (BP), a simple and effective tool for approximate learning and inference. As a parsing algorithm, BP is both asymptotically and empirically efficient. Even with second-order features or latent variables, which would make exact parsing considerably slower or NP-hard, BP needs only O(n) time with a small constant factor. Furthermore, such features significantly improve parse accuracy over exact first-order methods. Incorporating additional features would increase the runtime additively rather than multiplicatively.",
"title": ""
},
{
"docid": "7b9bc654a170d143a64bdae4c421053e",
"text": "Analysis on a developed dynamic model of the dish-Stirling (DS) system shows that maximum solar energy harness can be realized through controlling the Stirling engine speed. Toward this end, a control scheme is proposed for the doubly fed induction generator coupled to the DS system, as a means to achieve maximum power point tracking as the solar insolation level varies. Furthermore, the adopted fuzzy supervisory control technique is shown to be effective in controlling the temperature of the receiver in the DS system as the speed changes. Simulation results and experimental measurements validate the maximum energy harness ability of the proposed variable-speed DS solar-thermal system.",
"title": ""
},
{
"docid": "bf6a5ff65a60da049c6024375e2effb6",
"text": "This document updates RFC 4944, \"Transmission of IPv6 Packets over IEEE 802.15.4 Networks\". This document specifies an IPv6 header compression format for IPv6 packet delivery in Low Power Wireless Personal Area Networks (6LoWPANs). The compression format relies on shared context to allow compression of arbitrary prefixes. How the information is maintained in that shared context is out of scope. This document specifies compression of multicast addresses and a framework for compressing next headers. UDP header compression is specified within this framework. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.",
"title": ""
},
{
"docid": "2b0534f3d659e8eaea4d5b53af4617db",
"text": "Many organisations are currently involved in implementing Sustainable Supply Chain Management (SSCM) initiatives to address societal expectations and government regulations. Implementation of these initiatives has in turn created complexity due to the involvement of collection, management, control, and monitoring of a wide range of additional information exchanges among trading partners, which was not necessary in the past. Organisations thus would rely more on meaningful support from their IT function to help them implement and operate SSCM practices. Given the growing global recognition of the importance of sustainable supply chain (SSC) practices, existing corporate IT strategy and plans need to be revisited for IT to remain supportive and aligned with new sustainability aspirations of their organisations. Towards this goal, in this paper we report on the development of an IT maturity model specifically designed for SSCM context. The model is built based on four dimensions derived from software process maturity and IS/IT planning literatures. Our proposed model defines four progressive IT maturity stages for corporate IT function to support SSCM implementation initiatives. Some implications of the study finding and several challenges that may potentially hinder acceptance of the model by organisations are discussed.",
"title": ""
},
{
"docid": "2ba78aa333d2239b8069f45180946a21",
"text": "Face frontalization refers to the process of synthesizing the frontal view of a face from a given profile. Due to self-occlusion and appearance distortion in the wild, it is extremely challenging to recover faithful results and preserve texture details in a high-resolution. This paper proposes a High Fidelity Pose Invariant Model (HF-PIM) to produce photographic and identity-preserving results. HF-PIM frontalizes the profiles through a novel texture warping procedure and leverages a dense correspondence field to bind the 2D and 3D surface spaces. We decompose the prerequisite of warping into dense correspondence field estimation and facial texture map recovering, which are both well addressed by deep networks. Different from those reconstruction methods relying on 3D data, we also propose Adversarial Residual Dictionary Learning (ARDL) to supervise facial texture map recovering with only monocular images. Exhaustive experiments on both controlled and uncontrolled environments demonstrate that the proposed method not only boosts the performance of pose-invariant face recognition but also dramatically improves high-resolution frontalization appearances.",
"title": ""
},
{
"docid": "34641057a037740ec28581a798c96f05",
"text": "Vehicles are becoming complex software systems with many components and services that need to be coordinated. Service oriented architectures can be used in this domain to support intra-vehicle, inter-vehicles, and vehicle-environment services. Such architectures can be deployed on different platforms, using different communication and coordination paradigms. We argue that practical solutions should be hybrid: they should integrate and support interoperability of different paradigms. We demonstrate the concept by integrating Jini, the service-oriented technology we used within the vehicle, and JXTA, the peer to peer infrastructure we used to support interaction with the environment through a gateway service, called J2J. Initial experience with J2J is illustrated.",
"title": ""
},
{
"docid": "10959ca4eaa8d8a44629255e98e104da",
"text": "Millimeter-wave (mm-wave) wireless local area networks (WLANs) are expected to provide multi-Gbps connectivity by exploiting the large amount of unoccupied spectrum in e.g. the unlicensed 60 GHz band. However, to overcome the high path loss inherent at these high frequencies, mm-wave networks must employ highly directional beamforming antennas, which makes link establishment and maintenance much more challenging than in traditional omnidirectional networks. In particular, maintaining connectivity under node mobility necessitates frequent re-steering of the transmit and receive antenna beams to re-establish a directional mm-wave link. A simple exhaustive sequential scanning to search for new feasible antenna sector pairs may introduce excessive delay, potentially disrupting communication and lowering the QoS. In this paper, we propose a smart beam steering algorithm for fast 60 GHz link re-establishment under node mobility, which uses knowledge of previous feasible sector pairs to narrow the sector search space, thereby reducing the associated latency overhead. We evaluate the performance of our algorithm in several representative indoor scenarios, based on detailed simulations of signal propagation in a 60 GHz WLAN in WinProp with realistic building materials. We study the effect of indoor layout, antenna sector beamwidth, node mobility pattern, and device orientation awareness. Our results show that the smart beam steering algorithm achieves a 7-fold reduction of the sector search space on average, which directly translates into lower 60 GHz link re-establishment latency. Our results also show that our fast search algorithm selects the near-optimal antenna sector pair for link re-establishment.",
"title": ""
},
{
"docid": "8a3031bb351b3a285bbb7b90db407801",
"text": "Koch-shaped dipoles are introduced for the first time in a wideband antenna design and evolve the traditional Euclidean log-periodic dipole array into the log-periodic Koch-dipole array (LPKDA). Antenna size can be reduced while maintaining its overall performance characteristics. Observations and characteristics of both antennas are discussed. Advantages and disadvantages of the proposed LPKDA are validated through a fabricated proof-of-concept prototype that exhibited approximately 12% size reduction with minimal degradation in the impedance and pattern bandwidths. This is the first application of Koch prefractal elements in a miniaturized wideband antenna design.",
"title": ""
},
{
"docid": "cd37d9ab471d99a82ae3ba324695f5ac",
"text": "Recently, a supervised dictionary learning (SDL) approach based on the Hilbert-Schmidt independence criterion (HSIC) has been proposed that learns the dictionary and the corresponding sparse coefficients in a space where the dependency between the data and the corresponding labels is maximized. In this paper, two multiview dictionary learning techniques are proposed based on this HSIC-based SDL. While one of these two techniques learns one dictionary and the corresponding coefficients in the space of fused features in all views, the other learns one dictionary in each view and subsequently fuses the sparse coefficients in the spaces of learned dictionaries. The effectiveness of the proposed multiview learning techniques in using the complementary information of single views is demonstrated in the application of speech emotion recognition (SER). The fully-continuous sub-challenge (FCSC) of the AVEC 2012 dataset is used in two different views: baseline and spectral energy distribution (SED) feature sets. Four dimensional affects, i.e., arousal, expectation, power, and valence are predicted using the proposed multiview methods as the continuous response variables. The results are compared with the single views, AVEC 2012 baseline system, and also other supervised and unsupervised multiview learning approaches in the literature. Using correlation coefficient as the performance measure in predicting the continuous dimensional affects, it is shown that the proposed approach achieves the highest performance among the rivals. The relative performance of the two proposed multiview techniques and their relationship are also discussed. Particularly, it is shown that by providing an additional constraint on the dictionary of one of these approaches, it becomes the same as the other.",
"title": ""
}
] | scidocsrr |
2ded977da124ff15c126f89368f3889b | Hierarchical load forecasting : Gradient boosting machines and Gaussian processes | [
{
"docid": "65c38bb314856c1b5b79ad6473ec9121",
"text": "Despite its importance, choosing the structural form of the kernel in nonparametric regression remains a black art. We define a space of kernel structures which are built compositionally by adding and multiplying a small number of base kernels. We present a method for searching over this space of structures which mirrors the scientific discovery process. The learned structures can often decompose functions into interpretable components and enable long-range extrapolation on time-series datasets. Our structure search method outperforms many widely used kernels and kernel combination methods on a variety of prediction tasks.",
"title": ""
}
] | [
{
"docid": "e3393edb6166e225907f86b5187534ea",
"text": "Th is book supports an emerging trend toward emphasizing the plurality of digital literacy; recognizing the advantages of understanding digital literacy as digital literacies. In the book world this trend is still marginal. In December 2007, Allan Martin and Dan Madigan’s collection Digital Literacies for Learning (2006) was the only English-language book with “digital literacies” in the title to show up in a search on Amazon.com. Th e plural form fares better among English-language journal articles (e.g., Anderson & Henderson, 2004; Ba, Tally, & Tsikalas, 2002; Bawden, 2001; Doering et al., 2007; Myers, 2006; Snyder, 1999; Th omas, 2004) and conference presentations (e.g., Erstad, 2007; Lin & Lo, 2004; Steinkeuhler, 2005), however, and is now reasonably common in talk on blogs and wikis (e.g., Couros, 2007; Davies, 2007). Nonetheless, talk of digital literacy, in the singular, remains the default mode. Th e authors invited to contribute to this book were chosen in light of three reasons we (the editors) identify as important grounds for promoting the idea of digital literacies in the plural. Th is, of course, does not mean the contributing authors would necessarily subscribe to some or all of these reasons. Th at was",
"title": ""
},
{
"docid": "98fec87d72f6247e1a8baa1a07a41c70",
"text": "As multicast applications are deployed for mainstream use, the need to secure multicast communications will become critical. Multicast, however, does not fit the point-to-point model of most network security protocols which were designed with unicast communications in mind. As we will show, securing multicast (or group) communications is fundamentally different from securing unicast (or paired) communications. In turn, these differences can result in scalability problems for many typical applications.In this paper, we examine and model the differences between unicast and multicast security and then propose Iolus: a novel framework for scalable secure multicasting. Protocols based on Iolus can be used to achieve a variety of security objectives and may be used either to directly secure multicast communications or to provide a separate group key management service to other \"security-aware\" applications. We describe the architecture and operation of Iolus in detail and also describe our experience with a protocol based on the Iolus framework.",
"title": ""
},
{
"docid": "f013f58d995693a79cd986a028faff38",
"text": "We present the design and implementation of a system for axiomatic programming, and its application to mathematical software construction. Key novelties include a direct support for user-defined axioms establishing local equalities between types, and overload resolution based on equational theories and user-defined local axioms. We illustrate uses of axioms, and their organization into concepts, in structured generic programming as practiced in computational mathematical systems.",
"title": ""
},
{
"docid": "af3b0fb6b2babe8393b2e715f92a2c97",
"text": "Collaboration is the “mutual engagement of participants in a coordinated effort to solve a problem together.” Collaborative interactions are characterized by shared goals, symmetry of structure, and a high degree of negotiation, interactivity, and interdependence. Interactions producing elaborated explanations are particularly valuable for improving student learning. Nonresponsive feedback, on the other hand, can be detrimental to student learning in collaborative situations. Collaboration can have powerful effects on student learning, particularly for low-achieving students. However, a number of factors may moderate the impact of collaboration on student learning, including student characteristics, group composition, and task characteristics. Although historical frameworks offer some guidance as to when and how children acquire and develop collaboration skills, there is scant empirical evidence to support such predictions. However, because many researchers appear to believe children can be taught to collaborate, they urge educators to provide explicit instruction that encourages development of skills such as coordination, communication, conflict resolution, decision-making, problemsolving, and negotiation. Such training should also emphasize desirable qualities of interaction, such as providing elaborated explanations, asking direct and specific questions, and responding appropriately to the requests of others. Teachers should structure tasks in ways that will support the goals of collaboration, specify “ground rules” for interaction, and regulate such interactions. There are a number of challenges in using group-based tasks to assess collaboration. Several suggestions for assessing collaboration skills are made.",
"title": ""
},
{
"docid": "263f58a9cf856e66a5570e666ad1cec9",
"text": "This paper presents an approach for online estimation of the extrinsic calibration parameters of a multi-camera rig. Given a coarse initial estimate of the parameters, the relative poses between cameras are refined through recursive filtering. The approach is purely vision based and relies on plane induced homographies between successive frames. Overlapping fields of view are not required. Instead, the ground plane serves as a natural reference object. In contrast to other approaches, motion, relative camera poses, and the ground plane are estimated simultaneously using a single iterated extended Kalman filter. This reduces not only the number of parameters but also the computational complexity. Furthermore, an arbitrary number of cameras can be incorporated. Several experiments on synthetic as well as real data were conducted using a setup of four synchronized wide angle fisheye cameras, mounted on a moving platform. Results were obtained, using both, a planar and a general motion model with full six degrees of freedom. Additionally, the effects of uncertain intrinsic parameters and nonplanar ground were evaluated experimentally.",
"title": ""
},
{
"docid": "bfd1ec5a23731185b5ef2d24d3c63d9a",
"text": "Taurine is a natural amino acid present as free form in many mammalian tissues and in particular in skeletal muscle. Taurine exerts many physiological functions, including membrane stabilization, osmoregulation and cytoprotective effects, antioxidant and anti-inflammatory actions as well as modulation of intracellular calcium concentration and ion channel function. In addition taurine may control muscle metabolism and gene expression, through yet unclear mechanisms. This review summarizes the effects of taurine on specific muscle targets and pathways as well as its therapeutic potential to restore skeletal muscle function and performance in various pathological conditions. Evidences support the link between alteration of intracellular taurine level in skeletal muscle and different pathophysiological conditions, such as disuse-induced muscle atrophy, muscular dystrophy and/or senescence, reinforcing the interest towards its exogenous supplementation. In addition, taurine treatment can be beneficial to reduce sarcolemmal hyper-excitability in myotonia-related syndromes. Although further studies are necessary to fill the gaps between animals and humans, the benefit of the amino acid appears to be due to its multiple actions on cellular functions while toxicity seems relatively low. Human clinical trials using taurine in various pathologies such as diabetes, cardiovascular and neurological disorders have been performed and may represent a guide-line for designing specific studies in patients of neuromuscular diseases.",
"title": ""
},
{
"docid": "4e8eed4acd7251432042428054e8fb68",
"text": "Designing a practical test automation architecture provides a solid foundation for a successful automation effort. This paper describes key elements of automated testing that need to be considered, models for testing that can be used for designing a test automation architecture, and considerations for successfully combining the elements to form an automated test environment. The paper first develops a general framework for discussion of software testing and test automation. This includes a definition of test automation, a model for software tests, and a discussion of test oracles. The remainder of the paper focuses on using the framework to plan for a test automation architecture that addresses the requirements for the specific software under test (SUT).",
"title": ""
},
{
"docid": "be20cb4f75ff0d4d1637095d5928b005",
"text": "Ensemble learning has been proved to improve the generalization ability effectively in both theory and practice. In this paper, we briefly outline the current status of research on it first. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.",
"title": ""
},
{
"docid": "560cadfecdf5207851d333b4a122a06d",
"text": "Over the past years, state-of-the-art information extraction (IE) systems such as NELL [5] and ReVerb [9] have achieved impressive results by producing very large knowledge resources at web scale with minimal supervision. However, these resources lack the schema information, exhibit a high degree of ambiguity, and are difficult even for humans to interpret. Working with such resources becomes easier if there is a structured information base to which the resources can be linked. In this paper, we introduce the integration of open information extraction projects with Wikipedia-based IE projects that maintain a logical schema, as an important challenge for the NLP, semantic web, and machine learning communities. We describe the problem, present a gold-standard benchmark, and take the first steps towards a data-driven solution to the problem. This is especially promising, since NELL and ReVerb typically achieve a very large coverage, but still still lack a fullfledged clean ontological structure which, on the other hand, could be provided by large-scale ontologies like DBpedia [2] or YAGO [13].",
"title": ""
},
{
"docid": "ad23230c4ee2ed2216378a3ab833d3eb",
"text": "We present a framework for precomputed volume radiance transfer that achieves real-time rendering of global illumination effects for volume data sets such as multiple scattering, volumetric shadows, and so on. Our approach incorporates the volumetric photon mapping method into the classical precomputed radiance transfer pipeline. We contribute several techniques for light approximation, radiance transfer precomputation, and real-time radiance estimation, which are essential to make the approach practical and to achieve high frame rates. For light approximation, we propose a new discrete spherical function that has better performance for construction and evaluation when compared with existing rotational invariant spherical functions such as spherical harmonics and spherical radial basis functions. In addition, we present a fast splatting-based radiance transfer precomputation method and an early evaluation technique for real-time radiance estimation in the clustered principal component analysis space. Our techniques are validated through comprehensive evaluations and rendering tests. We also apply our rendering approach to volume visualization.",
"title": ""
},
{
"docid": "dad1c5e4aa43b9fc2b3592799f9a3a69",
"text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.05.068 ⇑ Tel.: +886 7 3814526. E-mail address: [email protected] Due to the explosive growth of social-media applications, enhancing event-awareness by social mining has become extremely important. The contents of microblogs preserve valuable information associated with past disastrous events and stories. To learn the experiences from past events for tackling emerging real-world events, in this work we utilize the social-media messages to characterize real-world events through mining their contents and extracting essential features for relatedness analysis. On one hand, we established an online clustering approach on Twitter microblogs for detecting emerging events, and meanwhile we performed event relatedness evaluation using an unsupervised clustering approach. On the other hand, we developed a supervised learning model to create extensible measure metrics for offline evaluation of event relatedness. By means of supervised learning, our developed measure metrics are able to compute relatedness of various historical events, allowing the event impacts on specified domains to be quantitatively measured for event comparison. By combining the strengths of both methods, the experimental results showed that the combined framework in our system is sensible for discovering more unknown knowledge about event impacts and enhancing event awareness. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0626c39604a1dde16a5d27de1c4cef24",
"text": "Two dimensional (2D) materials with a monolayer of atoms represent an ultimate control of material dimension in the vertical direction. Molybdenum sulfide (MoS2) monolayers, with a direct bandgap of 1.8 eV, offer an unprecedented prospect of miniaturizing semiconductor science and technology down to a truly atomic scale. Recent studies have indeed demonstrated the promise of 2D MoS2 in fields including field effect transistors, low power switches, optoelectronics, and spintronics. However, device development with 2D MoS2 has been delayed by the lack of capabilities to produce large-area, uniform, and high-quality MoS2 monolayers. Here we present a self-limiting approach that can grow high quality monolayer and few-layer MoS2 films over an area of centimeters with unprecedented uniformity and controllability. This approach is compatible with the standard fabrication process in semiconductor industry. It paves the way for the development of practical devices with 2D MoS2 and opens up new avenues for fundamental research.",
"title": ""
},
{
"docid": "017d1bb9180e5d1f8a01604630ebc40d",
"text": "This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for a low-frequency band and a high-frequency band, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than its preexistences.",
"title": ""
},
{
"docid": "9e3263866208bbc6a9019b3c859d2a66",
"text": "A residual network (or ResNet) is a standard deep neural net architecture, with stateof-the-art performance across numerous applications. The main premise of ResNets is that they allow the training of each layer to focus on fitting just the residual of the previous layer’s output and the target output. Thus, we should expect that the trained network is no worse than what we can obtain if we remove the residual layers and train a shallower network instead. However, due to the non-convexity of the optimization problem, it is not at all clear that ResNets indeed achieve this behavior, rather than getting stuck at some arbitrarily poor local minimum. In this paper, we rigorously prove that arbitrarily deep, nonlinear residual units indeed exhibit this behavior, in the sense that the optimization landscape contains no local minima with value above what can be obtained with a linear predictor (namely a 1-layer network). Notably, we show this under minimal or no assumptions on the precise network architecture, data distribution, or loss function used. We also provide a quantitative analysis of approximate stationary points for this problem. Finally, we show that with a certain tweak to the architecture, training the network with standard stochastic gradient descent achieves an objective value close or better than any linear predictor.",
"title": ""
},
{
"docid": "b311ce7a34d3bdb21678ed765bcd0f0b",
"text": "This paper focuses on the micro-blogging service Twitter, looking at source credibility for information shared in relation to the Fukushima Daiichi nuclear power plant disaster in Japan. We look at the sources, credibility, and between-language differences in information shared in the month following the disaster. Messages were categorized by user, location, language, type, and credibility of information source. Tweets with reference to third-party information made up the bulk of messages sent, and it was also found that a majority of those sources were highly credible, including established institutions, traditional media outlets, and highly credible individuals. In general, profile anonymity proved to be correlated with a higher propensity to share information from low credibility sources. However, Japanese-language tweeters, while more likely to have anonymous profiles, referenced lowcredibility sources less often than non-Japanese tweeters, suggesting proximity to the disaster mediating the degree of credibility of shared content.",
"title": ""
},
{
"docid": "c35619bf5830f6415a1c2f80cbaea31b",
"text": "Thumbnail images provide users of image retrieval and browsing systems with a method for quickly scanning large numbers of images. Recognizing the objects in an image is important in many retrieval tasks, but thumbnails generated by shrinking the original image often render objects illegible. We study the ability of computer vision systems to detect key components of images so that automated cropping, prior to shrinking, can render objects more recognizable. We evaluate automatic cropping techniques 1) based on a general method that detects salient portions of images, and 2) based on automatic face detection. Our user study shows that these methods result in small thumbnails that are substantially more recognizable and easier to find in the context of visual search.",
"title": ""
},
{
"docid": "b07ae3888b52faa598893bbfbf04eae2",
"text": "This paper presents a compliant locomotion framework for torque-controlled humanoids using model-based whole-body control. In order to stabilize the centroidal dynamics during locomotion, we compute linear momentum rate of change objectives using a novel time-varying controller for the Divergent Component of Motion (DCM). Task-space objectives, including the desired momentum rate of change, are tracked using an efficient quadratic program formulation that computes optimal joint torque setpoints given frictional contact constraints and joint position / torque limits. In order to validate the effectiveness of the proposed approach, we demonstrate push recovery and compliant walking using THOR, a 34 DOF humanoid with series elastic actuation. We discuss details leading to the successful implementation of optimization-based whole-body control on our hardware platform, including the design of a “simple” joint impedance controller that introduces inner-loop velocity feedback into the actuator force controller.",
"title": ""
},
{
"docid": "155938bc107c7e7cfca22758937f4d32",
"text": "A general theory of addictions is proposed, using the compulsive gambler as the prototype. Addiction is defined as a dependent state acquired over time to relieve stress. Two interrelated sets of factors predispose persons to addictions: an abnormal physiological resting state, and childhood experiences producing a deep sense of inadequacy. All addictions are hypothesized to follow a similar three-stage course. A matrix strategy is outlined to collect similar information from different kinds of addicts and normals. The ultimate objective is to identify high risk youth and prevent the development of addictions.",
"title": ""
},
{
"docid": "de96ac151e5a3a2b38f2fa309862faee",
"text": "Venue recommendation is an important application for Location-Based Social Networks (LBSNs), such as Yelp, and has been extensively studied in recent years. Matrix Factorisation (MF) is a popular Collaborative Filtering (CF) technique that can suggest relevant venues to users based on an assumption that similar users are likely to visit similar venues. In recent years, deep neural networks have been successfully applied to tasks such as speech recognition, computer vision and natural language processing. Building upon this momentum, various approaches for recommendation have been proposed in the literature to enhance the effectiveness of MF-based approaches by exploiting neural network models such as: word embeddings to incorporate auxiliary information (e.g. textual content of comments); and Recurrent Neural Networks (RNN) to capture sequential properties of observed user-venue interactions. However, such approaches rely on the traditional inner product of the latent factors of users and venues to capture the concept of collaborative filtering, which may not be sufficient to capture the complex structure of user-venue interactions. In this paper, we propose a Deep Recurrent Collaborative Filtering framework (DRCF) with a pairwise ranking function that aims to capture user-venue interactions in a CF manner from sequences of observed feedback by leveraging Multi-Layer Perception and Recurrent Neural Network architectures. Our proposed framework consists of two components: namely Generalised Recurrent Matrix Factorisation (GRMF) and Multi-Level Recurrent Perceptron (MLRP) models. In particular, GRMF and MLRP learn to model complex structures of user-venue interactions using element-wise and dot products as well as the concatenation of latent factors. In addition, we propose a novel sequence-based negative sampling approach that accounts for the sequential properties of observed feedback and geographical location of venues to enhance the quality of venue suggestions, as well as alleviate the cold-start users problem. Experiments on three large checkin and rating datasets show the effectiveness of our proposed framework by outperforming various state-of-the-art approaches.",
"title": ""
},
{
"docid": "e0a08bac6769382c3168922bdee1939d",
"text": "This paper presents the state of art research progress on multilingual multi-document summarization. Our method utilizes hLDA (hierarchical Latent Dirichlet Allocation) algorithm to model the documents firstly. A new feature is proposed from the hLDA modeling results, which can reflect semantic information to some extent. Then it combines this new feature with different other features to perform sentence scoring. According to the results of sentence score, it extracts candidate summary sentences from the documents to generate a summary. We have also attempted to verify the effectiveness and robustness of the new feature through experiments. After the comparison with other summarization methods, our method reveals better performance in some respects.",
"title": ""
}
] | scidocsrr |
71d797de968480d5b70ea2b8cdb7ca0d | Coming of Age (Digitally): An Ecological View of Social Media Use among College Students | [
{
"docid": "7e4c00d8f17166cbfb3bdac8d5e5ad09",
"text": "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). We propose design improvements for displaying social search results so as to better convey credibility.",
"title": ""
},
{
"docid": "d06a7c8379ba991385af5dc986537360",
"text": "Though social network site use is often treated as a monolithic activity, in which all time is equally social and its impact the same for all users, we examine how Facebook affects social capital depending upon: (1) types of site activities, contrasting one-on-one communication, broadcasts to wider audiences, and passive consumption of social news, and (2) individual differences among users, including social communication skill and self-esteem. Longitudinal surveys matched to server logs from 415 Facebook users reveal that receiving messages from friends is associated with increases in bridging social capital, but that other uses are not. However, using the site to passively consume news assists those with lower social fluency draw value from their connections. The results inform site designers seeking to increase social connectedness and the value of those connections.",
"title": ""
},
{
"docid": "be6ce39ba9565f4d28dfeb29528a5046",
"text": "The negative aspects of smartphone overuse on young adults, such as sleep deprivation and attention deficits, are being increasingly recognized recently. This emerging issue motivated us to analyze the usage patterns related to smartphone overuse. We investigate smartphone usage for 95 college students using surveys, logged data, and interviews. We first divide the participants into risk and non-risk groups based on self-reported rating scale for smartphone overuse. We then analyze the usage data to identify between-group usage differences, which ranged from the overall usage patterns to app-specific usage patterns. Compared with the non-risk group, our results show that the risk group has longer usage time per day and different diurnal usage patterns. Also, the risk group users are more susceptible to push notifications, and tend to consume more online content. We characterize the overall relationship between usage features and smartphone overuse using analytic modeling and provide detailed illustrations of problematic usage behaviors based on interview data.",
"title": ""
},
{
"docid": "b8f1c6553cd97fab63eae159ae01797e",
"text": "0747-5632/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.chb.2010.02.004 * Corresponding author. E-mail address: [email protected] (M. Using computers with friends either in person or online has become ubiquitous in the life of most adolescents; however, little is known about the complex relation between this activity and friendship quality. This study examined direct support for the social compensation and rich-get-richer hypotheses among adolescent girls and boys by including social anxiety as a moderating factor. A sample of 1050 adolescents completed a survey in grade 9 and then again in grades 11 and 12. For girls, there was a main effect of using computers with friends on friendship quality; providing support for both hypotheses. For adolescent boys, however, social anxiety moderated this relation, supporting the social compensation hypothesis. These findings were identical for online communication and were stable throughout adolescence. Furthermore, participating in organized sports did not compensate for social anxiety for either adolescent girls or boys. Therefore, characteristics associated with using computers with friends may create a comfortable environment for socially anxious adolescents to interact with their peers which may be distinct from other more traditional adolescent activities. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "55631b81d46fc3dcaad8375176cb1c68",
"text": "UNLABELLED\nThe need for long-term retention to prevent post-treatment tooth movement is now widely accepted by orthodontists. This may be achieved with removable retainers or permanent bonded retainers. This article aims to provide simple guidance for the dentist on how to maintain and repair both removable and fixed retainers.\n\n\nCLINICAL RELEVANCE\nThe general dental practitioner is more likely to review patients over time and needs to be aware of the need for long-term retention and how to maintain and repair the retainers.",
"title": ""
},
{
"docid": "1c16eec32b941af1646843bb81d16b5f",
"text": "Facebook is rapidly gaining recognition as a powerful research tool for the social sciences. It constitutes a large and diverse pool of participants, who can be selectively recruited for both online and offline studies. Additionally, it facilitates data collection by storing detailed records of its users' demographic profiles, social interactions, and behaviors. With participants' consent, these data can be recorded retrospectively in a convenient, accurate, and inexpensive way. Based on our experience in designing, implementing, and maintaining multiple Facebook-based psychological studies that attracted over 10 million participants, we demonstrate how to recruit participants using Facebook, incentivize them effectively, and maximize their engagement. We also outline the most important opportunities and challenges associated with using Facebook for research, provide several practical guidelines on how to successfully implement studies on Facebook, and finally, discuss ethical considerations.",
"title": ""
},
{
"docid": "e8c7f00d775254bd6b8c5393397d05a6",
"text": "PURPOSE\nVirtual reality devices, including virtual reality head-mounted displays, are becoming increasingly accessible to the general public as technological advances lead to reduced costs. However, there are numerous reports that adverse effects such as ocular discomfort and headache are associated with these devices. To investigate these adverse effects, questionnaires that have been specifically designed for other purposes such as investigating motion sickness have often been used. The primary purpose of this study was to develop a standard questionnaire for use in investigating symptoms that result from virtual reality viewing. In addition, symptom duration and whether priming subjects elevates symptom ratings were also investigated.\n\n\nMETHODS\nA list of the most frequently reported symptoms following virtual reality viewing was determined from previously published studies and used as the basis for a pilot questionnaire. The pilot questionnaire, which consisted of 12 nonocular and 11 ocular symptoms, was administered to two groups of eight subjects. One group was primed by having them complete the questionnaire before immersion; the other group completed the questionnaire postviewing only. Postviewing testing was carried out immediately after viewing and then at 2-min intervals for a further 10 min.\n\n\nRESULTS\nPriming subjects did not elevate symptom ratings; therefore, the data were pooled and 16 symptoms were found to increase significantly. The majority of symptoms dissipated rapidly, within 6 min after viewing. Frequency of endorsement data showed that approximately half of the symptoms on the pilot questionnaire could be discarded because <20% of subjects experienced them.\n\n\nCONCLUSIONS\nSymptom questionnaires to investigate virtual reality viewing can be administered before viewing, without biasing the findings, allowing calculation of the amount of change from pre- to postviewing. However, symptoms dissipate rapidly and assessment of symptoms needs to occur in the first 5 min postviewing. Thirteen symptom questions, eight nonocular and five ocular, were determined to be useful for a questionnaire specifically related to virtual reality viewing using a head-mounted display.",
"title": ""
},
{
"docid": "8b552849d9c41d82171de2e87967836c",
"text": "The need for building robots with soft materials emerged recently from considerations of the limitations of service robots in negotiating natural environments, from observation of the role of compliance in animals and plants [1], and even from the role attributed to the physical body in movement control and intelligence, in the so-called embodied intelligence or morphological computation paradigm [2]-[4]. The wide spread of soft robotics relies on numerous investigations of diverse materials and technologies for actuation and sensing, and on research of control techniques, all of which can serve the purpose of building robots with high deformability and compliance. But the core challenge of soft robotics research is, in fact, the variability and controllability of such deformability and compliance.",
"title": ""
},
{
"docid": "0be3de2b6f0dd5d3158cc7a98286d571",
"text": "The use of tablet PCs is spreading rapidly, and accordingly users browsing and inputting personal information in public spaces can often be seen by third parties. Unlike conventional mobile phones and notebook PCs equipped with distinct input devices (e.g., keyboards), tablet PCs have touchscreen keyboards for data input. Such integration of display and input device increases the potential for harm when the display is captured by malicious attackers. This paper presents the description of reconstructing tablet PC displays via measurement of electromagnetic (EM) emanation. In conventional studies, such EM display capture has been achieved by using non-portable setups. Those studies also assumed that a large amount of time was available in advance of capture to obtain the electrical parameters of the target display. In contrast, this paper demonstrates that such EM display capture is feasible in real time by a setup that fits in an attaché case. The screen image reconstruction is achieved by performing a prior course profiling and a complemental signal processing instead of the conventional fine parameter tuning. Such complemental processing can eliminate the differences of leakage parameters among individuals and therefore correct the distortions of images. The attack distance, 2 m, makes this method a practical threat to general tablet PCs in public places. This paper discusses possible attack scenarios based on the setup described above. In addition, we describe a mechanism of EM emanation from tablet PCs and a countermeasure against such EM display capture.",
"title": ""
},
{
"docid": "4df7857714e8b5149e315666fd4badd2",
"text": "Visual place recognition and loop closure is critical for the global accuracy of visual Simultaneous Localization and Mapping (SLAM) systems. We present a place recognition algorithm which operates by matching local query image sequences to a database of image sequences. To match sequences, we calculate a matrix of low-resolution, contrast-enhanced image similarity probability values. The optimal sequence alignment, which can be viewed as a discontinuous path through the matrix, is found using a Hidden Markov Model (HMM) framework reminiscent of Dynamic Time Warping from speech recognition. The state transitions enforce local velocity constraints and the most likely path sequence is recovered efficiently using the Viterbi algorithm. A rank reduction on the similarity probability matrix is used to provide additional robustness in challenging conditions when scoring sequence matches. We evaluate our approach on seven outdoor vision datasets and show improved precision-recall performance against the recently published seqSLAM algorithm.",
"title": ""
},
{
"docid": "0e238250d980c944ed7046448d2681fa",
"text": "Analysing the behaviour of student performance in classroom education is an active area in educational research. Early prediction of student performance may be helpful for both teacher and the student. However, the influencing factors of the student performance need to be identified first to build up such early prediction model. The existing data mining literature on student performance primarily focuses on student-related factors, though it may be influenced by many external factors also. Superior teaching acts as a catalyst which improves the knowledge dissemination process from teacher to the student. It also motivates the student to put more effort on the study. However, the research question, how the performance or grade correlates with teaching, is still relevant in present days. In this work, we propose a quantifiable measure of improvement with respect to the expected performance of a student. Furthermore, this study analyses the impact of teaching on performance improvement in theoretical courses of classroom-based education. It explores nearly 0.2 million academic records collected from an online system of an academic institute of national importance in India. The association mining approach has been adopted here and the result shows that confidence of both non-negative and positive improvements increase with superior teaching. This result indeed establishes the fact that teaching has a positive impact on student performance. To be more specific, the growing confidence of non-negative and positive improvements indicate that superior teaching facilitates more students to obtain either expected or better than expected grade.",
"title": ""
},
{
"docid": "d46af3854769569a631fab2c3c7fa8f3",
"text": "Existing vector space models typically map synonyms and antonyms to similar word vectors, and thus fail to represent antonymy. We introduce a new vector space representation where antonyms lie on opposite sides of a sphere: in the word vector space, synonyms have cosine similarities close to one, while antonyms are close to minus one. We derive this representation with the aid of a thesaurus and latent semantic analysis (LSA). Each entry in the thesaurus – a word sense along with its synonyms and antonyms – is treated as a “document,” and the resulting document collection is subjected to LSA. The key contribution of this work is to show how to assign signs to the entries in the co-occurrence matrix on which LSA operates, so as to induce a subspace with the desired property. We evaluate this procedure with the Graduate Record Examination questions of (Mohammed et al., 2008) and find that the method improves on the results of that study. Further improvements result from refining the subspace representation with discriminative training, and augmenting the training data with general newspaper text. Altogether, we improve on the best previous results by 11 points absolute in F measure.",
"title": ""
},
{
"docid": "1e56ff2af1b76571823d54d1f7523b49",
"text": "Open-source intelligence offers value in information security decision making through knowledge of threats and malicious activities that potentially impact business. Open-source intelligence using the internet is common, however, using the darknet is less common for the typical cybersecurity analyst. The challenges to using the darknet for open-source intelligence includes using specialized collection, processing, and analysis tools. While researchers share techniques, there are few publicly shared tools; therefore, this paper explores an open-source intelligence automation toolset that scans across the darknet connecting, collecting, processing, and analyzing. It describes and shares the tools and processes to build a secure darknet connection, and then how to collect, process, store, and analyze data. Providing tools and processes serves as an on-ramp for cybersecurity intelligence analysts to search for threats. Future studies may refine, expand, and deepen this paper's toolset framework. © 2 01 7 T he SA NS In sti tut e, Au tho r R eta ins Fu ll R igh ts © 2017 The SANS Institute Author retains full rights. Data Mining in the Dark 2 Nafziger, Brian",
"title": ""
},
{
"docid": "9e0cbbe8d95298313fd929a7eb2bfea9",
"text": "We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality re search efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.",
"title": ""
},
{
"docid": "c2c8efe7f626899f1a160aaa0112c80a",
"text": "The genome of a cancer cell carries somatic mutations that are the cumulative consequences of the DNA damage and repair processes operative during the cellular lineage between the fertilized egg and the cancer cell. Remarkably, these mutational processes are poorly characterized. Global sequencing initiatives are yielding catalogs of somatic mutations from thousands of cancers, thus providing the unique opportunity to decipher the signatures of mutational processes operative in human cancer. However, until now there have been no theoretical models describing the signatures of mutational processes operative in cancer genomes and no systematic computational approaches are available to decipher these mutational signatures. Here, by modeling mutational processes as a blind source separation problem, we introduce a computational framework that effectively addresses these questions. Our approach provides a basis for characterizing mutational signatures from cancer-derived somatic mutational catalogs, paving the way to insights into the pathogenetic mechanism underlying all cancers.",
"title": ""
},
{
"docid": "72462dd37b9d83f240778c794ddf0162",
"text": "A new record conversion efficiency of 24.7% was attained at the research level by using a heterojunction with intrinsic thin-layer structure of practical size (101.8 cm2, total area) at a 98-μm thickness. This is a world height record for any crystalline silicon-based solar cell of practical size (100 cm2 and above). Since we announced our former record of 23.7%, we have continued to reduce recombination losses at the hetero interface between a-Si and c-Si along with cutting down resistive losses by improving the silver paste with lower resistivity and optimization of the thicknesses in a-Si layers. Using a new technology that enables the formation of a-Si layer of even higher quality on the c-Si substrate, while limiting damage to the surface of the substrate, the Voc has been improved from 0.745 to 0.750 V. We also succeeded in improving the fill factor from 0.809 to 0.832.",
"title": ""
},
{
"docid": "61406f27199acc5f034c2721d66cda89",
"text": "Fischler PER •Sequence of tokens mapped to word embeddings. •Bidirectional LSTM builds context-dependent representations for each word. •A small feedforward layer encourages generalisation. •Conditional Random Field (CRF) at the top outputs the most optimal label sequence for the sentence. •Using character-based dynamic embeddings (Rei et al., 2016) to capture morphological patterns and unseen words.",
"title": ""
},
{
"docid": "65eb604a2d45f29923ba24976130adc1",
"text": "The recognition of boundaries, e.g., between chorus and verse, is an important task in music structure analysis. The goal is to automatically detect such boundaries in audio signals so that the results are close to human annotation. In this work, we apply Convolutional Neural Networks to the task, trained directly on mel-scaled magnitude spectrograms. On a representative subset of the SALAMI structural annotation dataset, our method outperforms current techniques in terms of boundary retrieval F -measure at different temporal tolerances: We advance the state-of-the-art from 0.33 to 0.46 for tolerances of±0.5 seconds, and from 0.52 to 0.62 for tolerances of ±3 seconds. As the algorithm is trained on annotated audio data without the need of expert knowledge, we expect it to be easily adaptable to changed annotation guidelines and also to related tasks such as the detection of song transitions.",
"title": ""
},
{
"docid": "a1fed0bcce198ad333b45bfc5e0efa12",
"text": "Contemporary games are making significant strides towards offering complex, immersive experiences for players. We can now explore sprawling 3D virtual environments populated by beautifully rendered characters and objects with autonomous behavior, engage in highly visceral action-oriented experiences offering a variety of missions with multiple solutions, and interact in ever-expanding online worlds teeming with physically customizable player avatars.",
"title": ""
},
{
"docid": "5054ad32c33dc2650c1dcee640961cd5",
"text": "Benchmarks have played a vital role in the advancement of visual object recognition and other fields of computer vision (LeCun et al., 1998; Deng et al., 2009; ). The challenges posed by these standard datasets have helped identify and overcome the shortcomings of existing approaches, and have led to great advances of the state of the art. Even the recent massive increase of interest in deep learning methods can be attributed to their success in difficult benchmarks such as ImageNet (Krizhevsky et al., 2012; LeCun et al., 2015). Neuromorphic vision uses silicon retina sensors such as the dynamic vision sensor (DVS; Lichtsteiner et al., 2008). These sensors and their DAVIS (Dynamic and Activepixel Vision Sensor) and ATIS (Asynchronous Time-based Image Sensor) derivatives (Brandli et al., 2014; Posch et al., 2014) are inspired by biological vision by generating streams of asynchronous events indicating local log-intensity brightness changes. They thereby greatly reduce the amount of data to be processed, and their dynamic nature makes them a good fit for domains such as optical flow, object tracking, action recognition, or dynamic scene understanding. Compared to classical computer vision, neuromorphic vision is a younger and much smaller field of research, and lacks benchmarks, which impedes the progress of the field. To address this we introduce the largest event-based vision benchmark dataset published to date, hoping to satisfy a growing demand and stimulate challenges for the community. In particular, the availability of such benchmarks should help the development of algorithms processing event-based vision input, allowing a direct fair comparison of different approaches. We have explicitly chosen mostly dynamic vision tasks such as action recognition or tracking, which could benefit from the strengths of neuromorphic vision sensors, although algorithms that exploit these features are largely missing. A major reason for the lack of benchmarks is that currently neuromorphic vision sensors are only available as R&D prototypes. Nonetheless, there are several datasets already available; see Tan et al. (2015) for an informative review. Unlabeled DVS data was made available around 2007 in the jAER project1 and was used for development of spike timing-based unsupervised feature learning e.g., in Bichler et al. (2012). The first labeled and published event-based neuromorphic vision sensor benchmarks were created from the MNIST digit recognition dataset by jiggling the image on the screen (see Serrano-Gotarredona and Linares-Barranco, 2015 for an informative history) and later to reduce frame artifacts by jiggling the camera view with a pan-tilt unit (Orchard et al., 2015). These datasets automated the scene movement necessary to generate DVS output from the static images, and will be an important step forward for evaluating neuromorphic object recognition systems such as spiking deep networks (Pérez-Carrasco et al., 2013; O’Connor et al., 2013; Cao et al., 2014; Diehl et al., 2015), which so far have been tested mostly on static image datasets converted",
"title": ""
},
{
"docid": "b6cc88bc123a081d580c9430c0ad0207",
"text": "This paper presents a comparative survey of research activities and emerging technologies of solid-state fault current limiters for power distribution systems.",
"title": ""
},
{
"docid": "503101a7b0f923f8fecb6dc9bb0bde37",
"text": "In-vehicle electronic equipment aims to increase safety, by detecting risk factors and taking/suggesting corrective actions. This paper presents a knowledge-based framework for assisting a driver via her PDA. Car data extracted under On Board Diagnostics (OBD-II) protocol, data acquired from PDA embedded micro-devices and information retrieved from the Web are properly combined: a simple data fusion algorithm has been devised to collect and semantically annotate relevant safety events. Finally, a logic-based matchmaking allows to infer potential risk factors, enabling the system to issue accurate and timely warnings. The proposed approach has been implemented in a prototypical application for the Apple iPhone platform, in order to provide experimental evaluation in real-world test drives for corroborating the approach. Keywords-Semantic Web; On Board Diagnostics; Ubiquitous Computing; Data Fusion; Intelligent Transportation Systems",
"title": ""
},
{
"docid": "94f040bf8f9bc6f30109b822b977c3b5",
"text": "Introduction: The tooth mobility due to periodontal bone loss can cause masticatory discomfort, mainly in protrusive movements in the region of the mandibular anterior teeth. Thus, the splinting is a viable alternative to keep them in function satisfactorily. Objective: This study aimed to demonstrate, through a clinical case with medium-term following-up, the clinical application of splinting with glass fiber-reinforced composite resin. Case report: Female patient, 73 years old, complained about masticatory discomfort related to the right mandibular lateral incisor. Clinical and radiographic evaluation showed grade 2 dental mobility, bone loss and increased periodontal ligament space. The proposed treatment was splinting with glass fiber-reinforced composite resin from the right mandibular canine to left mandibular canine. Results: Four-year follow-up showed favorable clinical and radiographic results with respect to periodontal health and maintenance of functional aspects. Conclusion: The splinting with glass fiber-reinforced composite resin is a viable technique and stable over time for the treatment of tooth mobility.",
"title": ""
}
] | scidocsrr |
1433b929b171815ba51b87a2f3459e9b | Automatic video description generation via LSTM with joint two-stream encoding | [
{
"docid": "4f58d355a60eb61b1c2ee71a457cf5fe",
"text": "Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).",
"title": ""
},
{
"docid": "9734f4395c306763e6cc5bf13b0ca961",
"text": "Generating descriptions for videos has many applications including assisting blind people and human-robot interaction. The recent advances in image captioning as well as the release of large-scale movie description datasets such as MPII-MD [28] allow to study this task in more depth. Many of the proposed methods for image captioning rely on pre-trained object classifier CNNs and Long-Short Term Memory recurrent networks (LSTMs) for generating descriptions. While image description focuses on objects, we argue that it is important to distinguish verbs, objects, and places in the challenging setting of movie description. In this work we show how to learn robust visual classifiers from the weak annotations of the sentence descriptions. Based on these visual classifiers we learn how to generate a description using an LSTM. We explore different design choices to build and train the LSTM and achieve the best performance to date on the challenging MPII-MD dataset. We compare and analyze our approach and prior work along various dimensions to better understand the key challenges of the movie description task.",
"title": ""
},
{
"docid": "7ebff2391401cef25b27d510675e9acd",
"text": "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann’s hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real c ©2003 Kobus Barnard, Pinar Duygulu, David Forsyth, Nando de Freitas, David Blei and Michael Jordan. BARNARD, DUYGULU, FORSYTH, DE FREITAS, BLEI AND JORDAN scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.",
"title": ""
},
{
"docid": "cd45dd9d63c85bb0b23ccb4a8814a159",
"text": "Parameter set learned using all WMT12 data (Callison-Burch et al., 2012): • 100,000 binary rankings covering 8 language directions. •Restrict scoring for all languages to exact and paraphrase matching. Parameters encode human preferences that generalize across languages: •Prefer recall over precision. •Prefer word choice over word order. •Prefer correct translations of content words over function words. •Prefer exact matches over paraphrase matches, while still giving significant credit to paraphrases. Visualization",
"title": ""
}
] | [
{
"docid": "af6b26efef62f3017a0eccc5d2ae3c33",
"text": "Universal, intelligent, and multifunctional devices controlling power distribution and measurement will become the enabling technology of the Smart Grid ICT. In this paper, we report on a novel automation architecture which supports distributed multiagent intelligence, interoperability, and configurability and enables efficient simulation of distributed automation systems. The solution is based on the combination of IEC 61850 object-based modeling and interoperable communication with IEC 61499 function block executable specification. Using the developed simulation environment, we demonstrate the possibility of multiagent control to achieve self-healing grid through collaborative fault location and power restoration.",
"title": ""
},
{
"docid": "4761b8398018e4a15a1d67a127dd657d",
"text": "The increasing popularity of social networks, such as Facebook and Orkut, has raised several privacy concerns. Traditional ways of safeguarding privacy of personal information by hiding sensitive attributes are no longer adequate. Research shows that probabilistic classification techniques can effectively infer such private information. The disclosed sensitive information of friends, group affiliations and even participation in activities, such as tagging and commenting, are considered background knowledge in this process. In this paper, we present a privacy protection tool, called Privometer, that measures the amount of sensitive information leakage in a user profile and suggests self-sanitization actions to regulate the amount of leakage. In contrast to previous research, where inference techniques use publicly available profile information, we consider an augmented model where a potentially malicious application installed in the user's friend profiles can access substantially more information. In our model, merely hiding the sensitive information is not sufficient to protect the user privacy. We present an implementation of Privometer in Facebook.",
"title": ""
},
{
"docid": "f8ecc204d84c239b9f3d544fd8d74a5c",
"text": "Storyline detection from news articles aims at summarizing events described under a certain news topic and revealing how those events evolve over time. It is a difficult task because it requires first the detection of events from news articles published in different time periods and then the construction of storylines by linking events into coherent news stories. Moreover, each storyline has different hierarchical structures which are dependent across epochs. Existing approaches often ignore the dependency of hierarchical structures in storyline generation. In this paper, we propose an unsupervised Bayesian model, called dynamic storyline detection model, to extract structured representations and evolution patterns of storylines. The proposed model is evaluated on a large scale news corpus. Experimental results show that our proposed model outperforms several baseline approaches.",
"title": ""
},
{
"docid": "d8b19c953cc66b6157b87da402dea98a",
"text": "In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN) for image synthesis that leverages information from few labels (as little as 0.22%, max. 10% of the dataset) to learn semantically meaningful and controllable data representations where latent variables correspond to label categories. The architecture builds on Information Maximizing Generative Adversarial Networks (InfoGAN) and is shown to learn both continuous and categorical codes and achieves higher quality of synthetic samples compared to fully unsupervised settings. Furthermore, we show that using small amounts of labeled data speeds-up training convergence. The architecture maintains the ability to disentangle latent variables for which no labels are available. Finally, we contribute an information-theoretic reasoning on how introducing semi-supervision increases mutual information between synthetic and real data.",
"title": ""
},
{
"docid": "285da3b342a3b3bd14fb14bca73914cd",
"text": "This paper presents expressions for the waveforms and design equations to satisfy the ZVS/ZDS conditions in the class-E power amplifier, taking into account the MOSFET gate-to-drain linear parasitic capacitance and the drain-to-source nonlinear parasitic capacitance. Expressions are given for power output capability and power conversion efficiency. Design examples are presented along with the PSpice-simulation and experimental waveforms at 2.3 W output power and 4 MHz operating frequency. It is shown from the expressions that the slope of the voltage across the MOSFET gate-to-drain parasitic capacitance during the switch-off state affects the switch-voltage waveform. Therefore, it is necessary to consider the MOSFET gate-to-drain capacitance for achieving the class-E ZVS/ZDS conditions. As a result, the power output capability and the power conversion efficiency are also affected by the MOSFET gate-to-drain capacitance. The waveforms obtained from PSpice simulations and circuit experiments showed the quantitative agreements with the theoretical predictions, which verify the expressions given in this paper.",
"title": ""
},
{
"docid": "175551435f1a4c73110b79e01306412f",
"text": "The development of MEMS actuators is rapidly evolving and continuously new progress in terms of efficiency, power and force output is reported. Pneumatic and hydraulic are an interesting class of microactuators that are easily overlooked. Despite the 20 years of research, and hundreds of publications on this topic, these actuators are only popular in microfluidic systems. In other MEMS applications, pneumatic and hydraulic actuators are rare in comparison with electrostatic, thermal or piezo-electric actuators. However, several studies have shown that hydraulic and pneumatic actuators deliver among the highest force and power densities at microscale. It is believed that this asset is particularly important in modern industrial and medical microsystems, and therefore, pneumatic and hydraulic actuators could start playing an increasingly important role. This paper shows an in-depth overview of the developments in this field ranging from the classic inflatable membrane actuators to more complex piston–cylinder and drag-based microdevices. (Some figures in this article are in colour only in the electronic version)",
"title": ""
},
{
"docid": "1675d99203da64eab8f9722b77edaab5",
"text": "Estimation of the semantic relatedness between biomedical concepts has utility for many informatics applications. Automated methods fall into two broad categories: methods based on distributional statistics drawn from text corpora, and methods based on the structure of existing knowledge resources. In the former case, taxonomic structure is disregarded. In the latter, semantically relevant empirical information is not considered. In this paper, we present a method that retrofits the context vector representation of MeSH terms by using additional linkage information from UMLS/MeSH hierarchy such that linked concepts have similar vector representations. We evaluated the method relative to previously published physician and coder’s ratings on sets of MeSH terms. Our experimental results demonstrate that the retrofitted word vector measures obtain a higher correlation with physician judgments. The results also demonstrate a clear improvement on the correlation with experts’ ratings from the retrofitted vector representation in comparison to the vector representation without retrofitting.",
"title": ""
},
{
"docid": "47e84cacb4db05a30bedfc0731dd2717",
"text": "Although short-range wireless communication explicitly targets local and regional applications, range continues to be a highly important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. The recent transceiver generation in particular comes with extensive and flexible support for software-defined radio (SDR). The SX127× family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies to support Long Range (LoRa™) and the corresponding Layer 2 protocol (LoRaWAN™). It particularly describes the possibility to combine the Internet Protocol, i.e. IPv6, into LoRaWAN™, so that it can be directly integrated into a full-fledged Internet of Things (IoT). The proposed solution, which we name 6LoRaWAN, has been implemented and tested; results of the experiments are also shown in this paper.",
"title": ""
},
{
"docid": "c78a4446be38b8fff2a949cba30a8b65",
"text": "This paper will derive the Black-Scholes pricing model of a European option by calculating the expected value of the option. We will assume that the stock price is log-normally distributed and that the universe is riskneutral. Then, using Ito’s Lemma, we will justify the use of the risk-neutral rate in these initial calculations. Finally, we will prove put-call parity in order to price European put options, and extend the concepts of the Black-Scholes formula to value an option with pricing barriers.",
"title": ""
},
{
"docid": "c5443c3bdfed74fd643e7b6c53a70ccc",
"text": "Background\nAbsorbable suture suspension (Silhouette InstaLift, Sinclair Pharma, Irvine, CA) is a novel, minimally invasive system that utilizes a specially manufactured synthetic suture to help address the issues of facial aging, while minimizing the risks associated with historic thread lifting modalities.\n\n\nObjectives\nThe purpose of the study was to assess the safety, efficacy, and patient satisfaction of the absorbable suture suspension system in regards to facial rejuvenation and midface volume enhancement.\n\n\nMethods\nThe first 100 treated patients who underwent absorbable suture suspension, by the senior author, were critically evaluated. Subjects completed anonymous surveys evaluating their experience with the new modality.\n\n\nResults\nSurvey results indicate that absorbable suture suspension is a tolerable (96%) and manageable (89%) treatment that improves age related changes (83%), which was found to be in concordance with our critical review.\n\n\nConclusions\nAbsorbable suture suspension generates high patient satisfaction by nonsurgically lifting mid and lower face and neck skin and has the potential to influence numerous facets of aesthetic medicine. The study provides a greater understanding concerning patient selection, suture trajectory, and possible adjuvant therapies.\n\n\nLevel of Evidence 4",
"title": ""
},
{
"docid": "246866da7509b2a8a2bda734a664de9c",
"text": "In this paper we present an approach of procedural game content generation that focuses on a gameplay loops formal language (GLFL). In fact, during an iterative game design process, game designers suggest modifications that often require high development costs. The proposed language and its operational semantic allow reducing the gap between game designers' requirement and game developers' needs, enhancing therefore video games productivity. Using gameplay loops concept for game content generation offers a low cost solution to adjust game challenges, objectives and rewards in video games. A pilot experiment have been conducted to study the impact of this approach on game development.",
"title": ""
},
{
"docid": "b776b58f6f78e77c81605133c6e4edce",
"text": "The phase response of noisy speech has largely been ignored, but recent research shows the importance of phase for perceptual speech quality. A few phase enhancement approaches have been developed. These systems, however, require a separate algorithm for enhancing the magnitude response. In this paper, we present a novel framework for performing monaural speech separation in the complex domain. We show that much structure is exhibited in the real and imaginary components of the short-time Fourier transform, making the complex domain appropriate for supervised estimation. Consequently, we define the complex ideal ratio mask (cIRM) that jointly enhances the magnitude and phase of noisy speech. We then employ a single deep neural network to estimate both the real and imaginary components of the cIRM. The evaluation results show that complex ratio masking yields high quality speech enhancement, and outperforms related methods that operate in the magnitude domain or separately enhance magnitude and phase.",
"title": ""
},
{
"docid": "4783e35e54d0c7f555015427cbdc011d",
"text": "The language of deaf and dumb which uses body parts to convey the message is known as sign language. Here, we are doing a study to convert speech into sign language used for conversation. In this area we have many developed method to recognize alphabets and numerals of ISL (Indian sign language). There are various approaches for recognition of ISL and we have done a comparative studies between them [1].",
"title": ""
},
{
"docid": "2ed36e909f52e139b5fd907436e80443",
"text": "It is difficult to draw sweeping general conclusions about the blastogenesis of CT, principally because so few thoroughly studied cases are reported. It is to be hoped that methods such as painstaking gross or electronic dissection will increase the number of well-documented cases. Nevertheless, the following conclusions can be proposed: 1. Most CT can be classified into a few main anatomic types (or paradigms), and there are also rare transitional types that show gradation between the main types. 2. Most CT have two full notochordal axes (Fig. 5); the ventral organs induced along these axes may be severely disorientated, malformed, or aplastic in the process of being arranged within one body. Reported anatomic types of CT represent those notochordal arrangements that are compatible with reasonably complete embryogenesis. New ventro-lateral axes are formed in many types of CT because of space constriction in the ventral zones. The new structures represent areas of \"mutual recognition and organization\" rather than \"fusion\" (Fig. 17). 3. Orientations of the pairs of axes in the embryonic disc can be deduced from the resulting anatomy. Except for dicephalus, the axes are not side by side. Notochords are usually \"end-on\" or ventro-ventral in orientation (Fig. 5). 4. A single gastrulation event or only partial duplicated gastrulation event seems to occur in dicephalics, despite a full double notochord. 5. The anatomy of diprosopus requires further clarification, particularly in cases with complete crania rather than anencephaly-equivalent. Diprosopus CT offer the best opportunity to study the effects of true forking of the notochord, if this actually occurs. 6. In cephalothoracopagus, thoracopagus, and ischiopagus, remarkably complete new body forms are constructed at right angles to the notochordal axes. The extent of expression of viscera in these types depends on the degree of noncongruity of their ventro-ventral axes (Figs. 4, 11, 15b). 7. Some organs and tissues fail to develop (interaction aplasia) because of conflicting migrational pathways or abnormal concentrations of morphogens in and around the neoaxes. 8. Where the cardiovascular system is discordantly expressed in dicephalus and thoracopagus twins, the right heart is more severely malformed, depending on the degree of interaction of the two embryonic septa transversa. 9. The septum transversum provides mesenchymal components to the heawrt and liver; the epithelial components (derived fro the foregut[s]) may vary in number from the number of mesenchymal septa transversa contributing to the liver of the CT embryo.(ABSTRACT TRUNCATED AT 400 WORDS)",
"title": ""
},
{
"docid": "33e45b66cca92f15270500c32a1c0b94",
"text": "We study a dataset of billions of program binary files that appeared on 100 million computers over the course of 12 months, discovering that 94% of these files were present on a single machine. Though malware polymorphism is one cause for the large number of singleton files, additional factors also contribute to polymorphism, given that the ratio of benign to malicious singleton files is 80:1. The huge number of benign singletons makes it challenging to reliably identify the minority of malicious singletons. We present a large-scale study of the properties, characteristics, and distribution of benign and malicious singleton files. We leverage the insights from this study to build a classifier based purely on static features to identify 92% of the remaining malicious singletons at a 1.4% percent false positive rate, despite heavy use of obfuscation and packing techniques by most malicious singleton files that we make no attempt to de-obfuscate. Finally, we demonstrate robustness of our classifier to important classes of automated evasion attacks.",
"title": ""
},
{
"docid": "9b17dd1fc2c7082fa8daecd850fab91c",
"text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller which was design as an embedded control. It has a data base of the angles of orientation horizontal axle, therefore it has no sensor inlet signal and it function as an open loop control system. Combined of above mention characteristics in one the tracker system is a new technique of the active type. It is also a rotational robot of 1 degree of freedom.",
"title": ""
},
{
"docid": "a02fb872137fe7bc125af746ba814849",
"text": "23% of the total global burden of disease is attributable to disorders in people aged 60 years and older. Although the proportion of the burden arising from older people (≥60 years) is highest in high-income regions, disability-adjusted life years (DALYs) per head are 40% higher in low-income and middle-income regions, accounted for by the increased burden per head of population arising from cardiovascular diseases, and sensory, respiratory, and infectious disorders. The leading contributors to disease burden in older people are cardiovascular diseases (30·3% of the total burden in people aged 60 years and older), malignant neoplasms (15·1%), chronic respiratory diseases (9·5%), musculoskeletal diseases (7·5%), and neurological and mental disorders (6·6%). A substantial and increased proportion of morbidity and mortality due to chronic disease occurs in older people. Primary prevention in adults aged younger than 60 years will improve health in successive cohorts of older people, but much of the potential to reduce disease burden will come from more effective primary, secondary, and tertiary prevention targeting older people. Obstacles include misplaced global health priorities, ageism, the poor preparedness of health systems to deliver age-appropriate care for chronic diseases, and the complexity of integrating care for complex multimorbidities. Although population ageing is driving the worldwide epidemic of chronic diseases, substantial untapped potential exists to modify the relation between chronological age and health. This objective is especially important for the most age-dependent disorders (ie, dementia, stroke, chronic obstructive pulmonary disease, and vision impairment), for which the burden of disease arises more from disability than from mortality, and for which long-term care costs outweigh health expenditure. The societal cost of these disorders is enormous.",
"title": ""
},
{
"docid": "afae66e9ff49274bbb546cd68490e5e4",
"text": "Question-Answering Bulletin Boards (QABB), such as Yahoo! Answers and Windows Live QnA, are gaining popularity recently. Communications on QABB connect users, and the overall connections can be regarded as a social network. If the evolution of social networks can be predicted, it is quite useful for encouraging communications among users. This paper describes an improved method for predicting links based on weighted proximity measures of social networks. The method is based on an assumption that proximities between nodes can be estimated better by using both graph proximity measures and the weights of existing links in a social network. In order to show the effectiveness of our method, the data of Yahoo! Chiebukuro (Japanese Yahoo! Answers) are used for our experiments. The results show that our method outperforms previous approaches, especially when target social networks are sufficiently dense.",
"title": ""
},
{
"docid": "6d13952afa196a6a77f227e1cc9f43bd",
"text": "Spreadsheets contain valuable data on many topics, but they are difficult to integrate with other sources. Converting spreadsheet data to the relational model would allow relational integration tools to be used, but using manual methods to do this requires large amounts of work for each integration candidate. Automatic data extraction would be useful but it is very challenging: spreadsheet designs generally requires human knowledge to understand the metadata being described. Even if it is possible to obtain this metadata information automatically, a single mistake can yield an output relation with a huge number of incorrect tuples. We propose a two-phase semiautomatic system that extracts accurate relational metadata while minimizing user effort. Based on conditional random fields (CRFs), our system enables downstream spreadsheet integration applications. First, the automatic extractor uses hints from spreadsheets’ graphical style and recovered metadata to extract the spreadsheet data as accurately as possible. Second, the interactive repair component identifies similar regions in distinct spreadsheets scattered across large spreadsheet corpora, allowing a user’s single manual repair to be amortized over many possible extraction errors. Through our method of integrating the repair workflow into the extraction system, a human can obtain the accurate extraction with just 31% of the manual operations required by a standard classification based technique. We demonstrate and evaluate our system using two corpora: more than 1,000 spreadsheets published by the US government and more than 400,000 spreadsheets downloaded from the Web.",
"title": ""
},
{
"docid": "1d3b2a5906d7db650db042db9ececed1",
"text": "Music consists of precisely patterned sequences of both movement and sound that engage the mind in a multitude of experiences. We move in response to music and we move in order to make music. Because of the intimate coupling between perception and action, music provides a panoramic window through which we can examine the neural organization of complex behaviors that are at the core of human nature. Although the cognitive neuroscience of music is still in its infancy, a considerable behavioral and neuroimaging literature has amassed that pertains to neural mechanisms that underlie musical experience. Here we review neuroimaging studies of explicit sequence learning and temporal production—findings that ultimately lay the groundwork for understanding how more complex musical sequences are represented and produced by the brain. These studies are also brought into an existing framework concerning the interaction of attention and time-keeping mechanisms in perceiving complex patterns of information that are distributed in time, such as those that occur in music.",
"title": ""
}
] | scidocsrr |
516a57352a3d2bbf6172c2e4425d424d | Recent Advance in Content-based Image Retrieval: A Literature Survey | [
{
"docid": "d063f8a20e2b6522fe637794e27d7275",
"text": "Bag-of-Words (BoW) model based on SIFT has been widely used in large scale image retrieval applications. Feature quantization plays a crucial role in BoW model, which generates visual words from the high dimensional SIFT features, so as to adapt to the inverted file structure for indexing. Traditional feature quantization approaches suffer several problems: 1) high computational cost---visual words generation (codebook construction) is time consuming especially with large amount of features; 2) limited reliability---different collections of images may produce totally different codebooks and quantization error is hard to be controlled; 3) update inefficiency--once the codebook is constructed, it is not easy to be updated. In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken out as code word. Our quantizer is independent of collections of images. In addition, the result of scalar quantization naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words.\n The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relatively 42% improvement in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and soft assignment method.",
"title": ""
},
{
"docid": "83ad3f9cce21b2f4c4f8993a3d418a44",
"text": "Effective and efficient generation of keypoints from an image is a well-studied problem in the literature and forms the basis of numerous Computer Vision applications. Established leaders in the field are the SIFT and SURF algorithms which exhibit great performance under a variety of image transformations, with SURF in particular considered as the most computationally efficient amongst the high-performance methods to date. In this paper we propose BRISK1, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK's adaptive, high quality performance as in state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in cases). The key to speed lies in the application of a novel scale-space FAST-based detector in combination with the assembly of a bit-string descriptor from intensity comparisons retrieved by dedicated sampling of each keypoint neighborhood.",
"title": ""
}
] | [
{
"docid": "c16f21fd2b50f7227ea852882004ef5b",
"text": "We study a stock dealer’s strategy for submitting bid and ask quotes in a limit order book. The agent faces an inventory risk due to the diffusive nature of the stock’s mid-price and a transactions risk due to a Poisson arrival of market buy and sell orders. After setting up the agent’s problem in a maximal expected utility framework, we derive the solution in a two step procedure. First, the dealer computes a personal indifference valuation for the stock, given his current inventory. Second, he calibrates his bid and ask quotes to the market’s limit order book. We compare this ”inventory-based” strategy to a ”naive” best bid/best ask strategy by simulating stock price paths and displaying the P&L profiles of both strategies. We find that our strategy has a P&L profile that has both a higher return and lower variance than the benchmark strategy.",
"title": ""
},
{
"docid": "7f68d112267f94d91cd4c45ecb7f874a",
"text": "In this paper we study the problem of learning Rectified Linear Units (ReLUs) which are functions of the form x ↦ max(0, ⟨w,x⟩) with w ∈ R denoting the weight vector. We study this problem in the high-dimensional regime where the number of observations are fewer than the dimension of the weight vector. We assume that the weight vector belongs to some closed set (convex or nonconvex) which captures known side-information about its structure. We focus on the realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to a planted weight vector. We show that projected gradient descent, when initialized at 0, converges at a linear rate to the planted model with a number of samples that is optimal up to numerical constants. Our results on the dynamics of convergence of these very shallow neural nets may provide some insights towards understanding the dynamics of deeper architectures.",
"title": ""
},
{
"docid": "5207f7a986dd1fecbe4afd0789d0628a",
"text": "Characterization of driving maneuvers or driving styles through motion sensors has become a field of great interest. Before now, this characterization used to be carried out with signals coming from extra equipment installed inside the vehicle, such as On-Board Diagnostic (OBD) devices or sensors in pedals. Nowadays, with the evolution and scope of smartphones, these have become the devices for recording mobile signals in many driving characterization applications. Normally multiple available sensors are used, such as accelerometers, gyroscopes, magnetometers or the Global Positioning System (GPS). However, using sensors such as GPS increase significantly battery consumption and, additionally, many current phones do not include gyroscopes. Therefore, we propose the characterization of driving style through only the use of smartphone accelerometers. We propose a deep neural network (DNN) architecture that combines convolutional and recurrent networks to estimate the vehicle movement direction (VMD), which is the forward movement directional vector captured in a phone's coordinates. Once VMD is obtained, multiple applications such as characterizing driving styles or detecting dangerous events can be developed. In the development of the proposed DNN architecture, two different methods are compared. The first one is based on the detection and classification of significant acceleration driving forces, while the second one relies on longitudinal and transversal signals derived from the raw accelerometers. The final success rate of VMD estimation for the best method is of 90.07%.",
"title": ""
},
{
"docid": "ff8dec3914e16ae7da8801fe67421760",
"text": "A hypothesized need to form and maintain strong, stable interpersonal relationships is evaluated in light of the empirical literature. The need is for frequent, nonaversive interactions within an ongoing relational bond. Consistent with the belongingness hypothesis, people form social attachments readily under most conditions and resist the dissolution of existing bonds. Belongingness appears to have multiple and strong effects on emotional patterns and on cognitive processes. Lack of attachments is linked to a variety of ill effects on health, adjustment, and well-being. Other evidence, such as that concerning satiation, substitution, and behavioral consequences, is likewise consistent with the hypothesized motivation. Several seeming counterexamples turned out not to disconfirm the hypothesis. Existing evidence supports the hypothesis that the need to belong is a powerful, fundamental, and extremely pervasive motivation.",
"title": ""
},
{
"docid": "d2bea5e928167f295e05412962d44b99",
"text": "The development of e-commerce has increased the popularity of online shopping worldwide. In Malaysia, it was reported that online shopping market size was RM1.8 billion in 2013 and it is estimated to reach RM5 billion by 2015. However, online shopping was rated 11 th out of 15 purposes of using internet in 2012. Consumers’ perceived risks of online shopping becomes a hot topic to research as it will directly influence users’ attitude towards online purchasing, and their attitude will have significant impact to the online purchasing behaviour. The conceptualization of consumers’ perceived risk, attitude and online shopping behaviour of this study provides empirical evidence in the study of consumer online behaviour. Four types of risks product risk, financial, convenience and non-delivery risks were examined in term of their effect on consumers’ online attitude. A web-based survey was employed, and a total of 300 online shoppers of a Malaysia largest online marketplace participated in this study. The findings indicated that product risk, financial and non-delivery risks are hazardous and negatively affect the attitude of online shoppers. Convenience risk was found to have positive effect on consumers’ attitude, denoting that online buyers of this site trusted the online seller and they encountered less troublesome with the site. It also implies that consumers did not really concern on non-convenience aspect of online shopping, such as handling of returned products and examine the quality of products featured in the online seller website. The online buyers’ attitude was significantly and positively affects their online purchasing behaviour. The findings provide useful model for measuring and managing consumers’ perceived risk in internet-based transaction to increase their involvement in online shopping and to reduce their cognitive dissonance in the e-commerce setting.",
"title": ""
},
{
"docid": "8f13fbf6de0fb0685b4a39ee5f3bb415",
"text": "This review presents one of the eight theories of the quality of life (QOL) used for making the SEQOL (self-evaluation of quality of life) questionnaire or the quality of life as realizing life potential. This theory is strongly inspired by Maslow and the review furthermore serves as an example on how to fulfill the demand for an overall theory of life (or philosophy of life), which we believe is necessary for global and generic quality-of-life research. Whereas traditional medical science has often been inspired by mechanical models in its attempts to understand human beings, this theory takes an explicitly biological starting point. The purpose is to take a close view of life as a unique entity, which mechanical models are unable to do. This means that things considered to be beyond the individual's purely biological nature, notably the quality of life, meaning in life, and aspirations in life, are included under this wider, biological treatise. Our interpretation of the nature of all living matter is intended as an alternative to medical mechanism, which dates back to the beginning of the 20th century. New ideas such as the notions of the human being as nestled in an evolutionary and ecological context, the spontaneous tendency of self-organizing systems for realization and concord, and the central role of consciousness in interpreting, planning, and expressing human reality are unavoidable today in attempts to scientifically understand all living matter, including human life.",
"title": ""
},
{
"docid": "753983a2361a2439fe031543a209ad79",
"text": "Social media is playing an increasingly important role as the sources of health related information. The goal of this study is to investigate the extent social media appear in search engine results in the context of health-related information search. We simulate an information seeker’s use of a search engine for health consultation using a set of pre-defined keywords in combination with 5 types of complaints. The results showed that social media constitute a significant part of the search results, indicating that search engines likely direct information seekers to social media sites. This study confirms the growing importance of social media in health communication. It also provides evidence regarding opportunities and challenges faced by health professionals and general public.",
"title": ""
},
{
"docid": "7a005d66591330d6fdea5ffa8cb9020a",
"text": "First impressions influence the behavior of people towards a newly encountered person or a human-like agent. Apart from the physical characteristics of the encountered face, the emotional expressions displayed on it, as well as ambient information affect these impressions. In this work, we propose an approach to predict the first impressions people will have for a given video depicting a face within a context. We employ pre-trained Deep Convolutional Neural Networks to extract facial expressions, as well as ambient information. After video modeling, visual features that represent facial expression and scene are combined and fed to Kernel Extreme Learning Machine regressor. The proposed system is evaluated on the ChaLearn Challenge Dataset on First Impression Recognition, where the classification target is the ”Big Five” personality trait labels for each video. Our system achieved an accuracy of 90.94% on the sequestered test set, 0.36% points below the top system in the competition.",
"title": ""
},
{
"docid": "e3b707ad340b190393d3384a1a364e63",
"text": "ed Log Lines Categorize Bins Figure 3. High-level overview of our approach for abstracting execution logs to execution events. Table III. Log lines used as a running example to explain our approach. 1. Start check out 2. Paid for, item=bag, quality=1, amount=100 3. Paid for, item=book, quality=3, amount=150 4. Check out, total amount is 250 5. Check out done Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr AN AUTOMATED APPROACH FOR ABSTRACTING EXECUTION LOGS 257 Table IV. Running example logs after the anonymize step. 1. Start check out 2. Paid for, item=$v, quality=$v, amount=$v 3. Paid for, item=$v, quality=$v, amount=$v 4. Check out, total amount=$v 5. Check out done Table V. Running example logs after the tokenize step. Bin names (no. of words, no. of parameters) Log lines (3,0) 1. Start check out 5. Check out done (5,1) 4. Check out, total amount=$v (8,3) 2. Paid for, item=$v, quality=$v, amount=$v 3. Paid for, item=$v, quality=$v, amount=$v 4.2.2. The tokenize step The tokenize step separates the anonymized log lines into different groups (i.e., bins) according to the number of words and estimated parameters in each log line. The use of multiple bins limits the search space of the following step (i.e., the categorize step). The use of bins permits us to process large log files in a timely fashion using a limited memory footprint since the analysis is done per bin instead of having to load up all the lines in the log file. We estimate the number of parameters in a log line by counting the number of generic terms (i.e., $v). Log lines with the same number of tokens and parameters are placed in the same bin. Table V shows the sample log lines after the anonymize and tokenize steps. The left column indicates the name of a bin. Each bin is named with a tuple: number of words and number of parameters that are contained in the log line associated with that bin. The right column in Table VI shows the log lines. Each row shows the bin and its corresponding log lines. The second and the third log lines contain 8 words and are likely to contain 3 parameters. Thus, the second and third log lines are grouped together in the (8,3) bin. Similarly, the first and last log lines are grouped together in the (3,0) bin since they both contain 3 words and are likely to contain no parameters. 4.2.3. The categorize step The categorize step compares log lines in each bin and abstracts them to the corresponding execution events. The inferred execution events are stored in an execution events database for future references. The algorithm used in the categorize step is shown below. Our algorithm goes through the log lines Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr 258 Z. M. JIANG ET AL. Table VI. Running example logs after the categorize step. Execution events (word parameter id) Log lines 3 0 1 1. Start check out 3 0 2 5. Check out done 5 1 1 4. Check out, total amount=$v 8 3 1 2. Paid for, item=$v, quality=$v, amount=$v 8 3 1 3. Paid for, item=$v, quality=$v, amount=$v bin by bin. After this step, each log line should be abstracted to an execution event. Table VI shows the results of our working example after the categorize step. 
for each bin bi for each log line lk in bin bi for each execution event e(bi , j) corresponding to bi in the events DB perform word by word comparison between e(bi , j) and lk if (there is no difference) then lk is of type e(bi , j) break end if end for // advance to next e(bi , j) if ( lk does not have a matching execution event) then lk is a new execution event store an abstracted lk into the execution events DB end if end for // advance to the next log line end for // advance to the next bin We now explain our algorithm using the running example. Our algorithm starts with the (3,0) bin. Initially, there are no execution events that correspond to this bin yet. Therefore, the execution event corresponding to the first log line becomes the first execution event namely 3 0 1. The 1 at the end of 3 0 1 indicates that this is the first execution event to correspond to the bin, which has 3 words and no parameters (i.e., bin 3 0). Then the algorithm moves to the next log line in the (3,0) bin, which contains the fifth log line. The algorithm compares the fifth log line with all the existing execution events in the (3,0) bin. Currently, there is only one execution event: 3 0 1. As the fifth log line is not similar to the 3 0 1 execution event, we create a new execution event 3 0 2 for the fifth log line. With all the log lines in the (3,0) bin processed, we can move on to the (5,1) bin. As there are no execution events that correspond to the (5,1) bin initially, the fourth log line gets assigned to a new execution event 5 1 1. Finally, we move on to the (8,3) bin. First, the second log line gets assigned with a new execution event 8 3 1 since there are no execution events corresponding to this bin yet. As the third log line is the same as the second log line (after the anonymize step), the third log line is categorized as the same execution event as the second log Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr AN AUTOMATED APPROACH FOR ABSTRACTING EXECUTION LOGS 259 line. Table VI shows the sample log lines after the categorize step. The left column is the abstracted execution event. The right column shows the line number together with the corresponding log lines. 4.2.4. The reconcile step Since the anonymize step uses heuristics to identify dynamic information in a log line, there is a chance that we might miss to anonymize some dynamic information. The missed dynamic information will result in the abstraction of several log lines to several execution events that are very similar. Table VII shows an example of dynamic information that was missed by the anonymize step. The table shows five different execution events. However, the user names after ‘for user’ are dynamic information and should have been replaced by the generic token ‘$v’. All the log lines shown in Table VII should have been abstracted to the same execution event after the categorize step. The reconcile step addresses this situation. All execution events are re-examined to identify which ones are to be merged. Execution events are merged if: 1. They belong to the same bin. 2. They differ from each other by one token at the same positions. 3. There exists a few of such execution events. We used a threshold of five events in our case studies. Other values are possibly based on the content of the analyzed log files. The threshold prevents the merging of similar yet different execution events, such as ‘Start processing’ and ‘Stop processing’, which should not be merged. 
Looking at the execution events in Table VII, we note that they all belong to the ‘5 0’ bin and differ from each other only in the last token. Since there are five of such events, we merged them into one event. Table VIII shows the execution events from Table VII after the reconcile step. Note that if the ‘5 0’ bin contains another execution event: ‘Stop processing for user John’; it will not be merged with the above execution events since it differs by two tokens instead of only the last token. Table VII. Sample logs that the categorize step would fail to abstract. Event IDs Execution events 5 0 1 Start processing for user Jen 5 0 2 Start processing for user Tom 5 0 3 Start processing for user Henry 5 0 4 Start processing for user Jack 5 0 5 Start processing for user Peter Table VIII. Sample logs after the reconcile step. Event IDs Execution events 5 0 1 Start processing for user $v Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr 260 Z. M. JIANG ET AL.",
"title": ""
},
{
"docid": "94c6ab34e39dd642b94cc2f538451af8",
"text": "Like every other social practice, journalism cannot now fully be understood apart from globalization. As part of a larger platform of communication media, journalism contributes to this experience of the world-as-a-single-place and thus represents a key component in these social transformations, both as cause and outcome. These issues at the intersection of journalism and globalization define an important and growing field of research, particularly concerning the public sphere and spaces for political discourse. In this essay, I review this intersection of journalism and globalization by considering the communication field’s approach to ‘media globalization’ within a broader interdisciplinary perspective that mixes the sociology of globalization with aspects of geography and social anthropology. By placing the emphasis on social practices, elites, and specific geographical spaces, I introduce a less media-centric approach to media globalization and how journalism fits into the process. Beyond ‘global village journalism,’ this perspective captures the changes globalization has brought to journalism. Like every other social practice, journalism cannot now fully be understood apart from globalization. This process refers to the intensification of social interconnections, which allows apprehending the world as a single place, creating a greater awareness of our own place and its relative location within the range of world experience. As part of a larger platform of communication media, journalism contributes to this experience and thus represents a key component in these social transformations, both as cause and outcome. These issues at the intersection of journalism and globalization define an important and growing field of research, particularly concerning the public sphere and spaces for political discourse. The study of globalization has become a fashionable growth industry, attracting an interdisciplinary assortment of scholars. Journalism, meanwhile, itself has become an important subject in its own right within media studies, with a growing number of projects taking an international perspective (reviewed in Reese 2009). Combining the two areas yields a complex subject that requires some careful sorting out to get beyond the jargon and the easy country–by-country case studies. From the globalization studies side, the media role often seems like an afterthought, a residual category of social change, or a self-evident symbol of the global era–CNN, for example. Indeed, globalization research has been slower to consider the changing role of journalism, compared to the attention devoted to financial and entertainment flows. That may be expected, given that economic and cultural globalization is further along than that of politics, and journalism has always been closely tied to democratic structures, many of which are inherently rooted in local communities. The media-centrism of communication research, on the other hand, may give the media—and the journalism associated with them—too much credit in the globalization process, treating certain media as the primary driver of global connections and the proper object of study. Global connections support new forms of journalism, which create politically significant new spaces within social systems, lead to social change, and privilege certain forms Sociology Compass 4/6 (2010): 344–353, 10.1111/j.1751-9020.2010.00282.x a 2010 The Author Journal Compilation a 2010 Blackwell Publishing Ltd of power. 
Therefore, we want to know how journalism has contributed to these new spaces, bringing together new combinations of transnational élites, media professionals, and citizens. To what extent are these interactions shaped by a globally consistent shared logic, and what are the consequences for social change and democratic values? Here, however, the discussion often gets reduced to whether a cultural homogenization is taking place, supporting a ‘McWorld’ thesis of a unitary media and journalistic form. But we do not have to subscribe to a one-world media monolith prediction to expect certain transnational logics to emerge to take their place along side existing ones. Journalism at its best contributes to social transparency, which is at the heart of the globalization optimists’ hopes for democracy (e.g. Giddens 2000). The insertion of these new logics into national communities, especially those closed or tightly controlled societies, can bring an important impulse for social change (seen in a number of case studies from China, as in Reese and Dai 2009). In this essay, I will review a few of the issues at the intersection of journalism and globalization and consider a more nuanced view of media within a broader network of actors, particularly in the case of journalism as it helps create emerging spaces for public affairs discourse. Understanding the complex interplay of the global and local requires an interdisciplinary perspective, mixing the sociology of globalization with aspects of geography and social anthropology. This helps avoid equating certain emerging global news forms with a new and distinct public sphere. The globalization of journalism occurs through a multitude of levels, relationships, social actors, and places, as they combine to create new public spaces. Communication research may bring journalism properly to the fore, but it must be considered within the insights into places and relationships provided by these other disciplines. Before addressing these questions, it is helpful to consider how journalism has figured into some larger debates. Media Globalization: Issues of Scale and Homogeneity One major fault line lies within the broader context of ‘media,’ where journalism has been seen as providing flows of information and transnational connections. That makes it a key factor in the phenomenon of ‘media globalization.’ McLuhan gave us the enduring image of the ‘global village,’ a quasi-utopian idea that has seeped into such theorizing about the contribution of media. The metaphor brings expectations of an extensive, unitary community, with a corresponding set of universal, global values, undistorted by parochial interests and propaganda. The interaction of world media systems, however, has not as of yet yielded the kind of transnational media and programs that would support such ‘village’-worthy content (Ferguson 1992; Sparks 2007). In fact, many of the communication barriers show no signs of coming down, with many specialized enclaves becoming stronger. In this respect, changes in media reflect the larger crux of globalization that it simultaneously facilitates certain ‘monoculture’ global standards along with the proliferation of a host of micro-communities that were not possible before. In a somewhat analogous example, the global wine trade has led to convergent trends in internationally desirable tastes but also allowed a number of specialized local wineries to survive and flourish through the ability to reach global markets. 
The very concept of ‘media globalization’ suggests that we are not quite sure if media lead to globalization or are themselves the result of it. In any case, giving the media a privileged place in shaping a globalized future has led to high expectations for international journalism, satellite television, and other media to provide a workable global public sphere, making them an easy target if they come up short. In his book, Media globalization myth, Kai Hafez (2007) provides that kind of attack. Certainly, much of the discussion has suffered from overly optimistic and under-conceptualized research, with global media technology being a ‘necessary but not sufficient condition for global communication.’ (p. 2) Few truly transnational media forms have emerged that have a more supranational than national allegiance (among newspapers, the International Herald Tribune, Wall St. Journal Europe, Financial Times), and among transnational media even CNN does not present a single version to the world, split as it is into various linguistic viewer zones. Defining cross-border communication as the ‘core phenomenon’ of globalization leads to comparing intra- to inter-national communication as the key indicator of globalization. For example, Hafez rejects the internet as a global system of communication, because global connectivity does not exceed local and regional connections. With that as a standard, we may indeed conclude that media globalization has failed to produce true transnational media platforms or dialogs across boundaries. Rather a combination of linguistic and digital divides, along with enduring regional preferences, actually reinforces some boundaries. (The wishful thinking for a global media may be tracked to highly mobile Western scholars, who in Hafez’s ‘hotel thesis’ overestimate the role of such transnational media, because they are available to them in their narrow and privileged travel circles.) Certainly, the foreign news most people receive, even about big international events, is domesticated through the national journalistic lens. Indeed, international reporting, as a key component of the would-be global public sphere, flunks Hafez’s ‘global test,’ incurring the same criticisms others have leveled for years at national journalism: elite-focused, conflictual, and sensational, with a narrow, parochial emphasis. If ‘global’ means giving ‘dialogic’ voices a chance to speak to each other without reproducing national ethnocentrism, then the world’s media still fail to measure up. Conceptualizing the ‘Global’ For many, ‘global’ means big. That goes too for the global village perspective, which emphasizes the scaling dimension and equates the global with ‘bigness,’ part of a nested hierarchy of levels of analysis based on size: beyond local, regional, and nationa",
"title": ""
},
{
"docid": "d2fe95e4955b78aeef8c8a565fbc9fae",
"text": "With the advance of the World-Wide Web (WWW) technology, people can easily share content on the Web, including geospatial data and web services. Thus, the “big geospatial data management” issues start attracting attention. Among the big geospatial data issues, this research focuses on discovering distributed geospatial resources. As resources are scattered on the WWW, users cannot find resources of their interests efficiently. While the WWW has Web search engines addressing web resource discovery issues, we envision that the geospatial Web (i.e., GeoWeb) also requires GeoWeb search engines. To realize a GeoWeb search engine, one of the first steps is to proactively discover GeoWeb resources on the WWW. Hence, in this study, we propose the GeoWeb Crawler, an extensible Web crawling framework that can find various types of GeoWeb resources, such as Open Geospatial Consortium (OGC) web services, Keyhole Markup Language (KML) and Environmental Systems Research Institute, Inc (ESRI) Shapefiles. In addition, we apply the distributed computing concept to promote the performance of the GeoWeb Crawler. The result shows that for 10 targeted resources types, the GeoWeb Crawler discovered 7351 geospatial services and 194,003 datasets. As a result, the proposed GeoWeb Crawler framework is proven to be extensible and scalable to provide a comprehensive index of GeoWeb.",
"title": ""
},
{
"docid": "e5d474fc8c0d2c97cc798eda4f9c52dd",
"text": "Gesture typing is an efficient input method for phones and tablets using continuous traces created by a pointed object (e.g., finger or stylus). Translating such continuous gestures into textual input is a challenging task as gesture inputs exhibit many features found in speech and handwriting such as high variability, co-articulation and elision. In this work, we address these challenges with a hybrid approach, combining a variant of recurrent networks, namely Long Short Term Memories [1] with conventional Finite State Transducer decoding [2]. Results using our approach show considerable improvement relative to a baseline shape-matching-based system, amounting to 4% and 22% absolute improvement respectively for small and large lexicon decoding on real datasets and 2% on a synthetic large scale dataset.",
"title": ""
},
{
"docid": "88ffb30f1506bedaf7c1a3f43aca439e",
"text": "The multiprotein mTORC1 protein kinase complex is the central component of a pathway that promotes growth in response to insulin, energy levels, and amino acids and is deregulated in common cancers. We find that the Rag proteins--a family of four related small guanosine triphosphatases (GTPases)--interact with mTORC1 in an amino acid-sensitive manner and are necessary for the activation of the mTORC1 pathway by amino acids. A Rag mutant that is constitutively bound to guanosine triphosphate interacted strongly with mTORC1, and its expression within cells made the mTORC1 pathway resistant to amino acid deprivation. Conversely, expression of a guanosine diphosphate-bound Rag mutant prevented stimulation of mTORC1 by amino acids. The Rag proteins do not directly stimulate the kinase activity of mTORC1, but, like amino acids, promote the intracellular localization of mTOR to a compartment that also contains its activator Rheb.",
"title": ""
},
{
"docid": "9bca70974fcccc23c2b3463909c1d641",
"text": "Advances in online and computer supported education afford exciting opportunities to revolutionize the classroom, while also presenting a number of new challenges not faced in traditional educational settings. Foremost among these challenges is the problem of accurately and efficiently evaluating learner work as the class size grows, which is directly related to the larger goal of providing quality, timely, and actionable formative feedback. Recently there has been a surge in interest in using peer grading methods coupled with machine learning to accurately and fairly evaluate learner work while alleviating the instructor bottleneck and grading overload. Prior work in peer grading almost exclusively focuses on numerically scored grades -- either real-valued or ordinal. In this work, we consider the implications of peer ranking in which learners rank a small subset of peer work from strongest to weakest, and propose new types of computational analyses that can be applied to this ranking data. We adopt a Bayesian approach to the ranked peer grading problem and develop a novel model and method for utilizing ranked peer-grading data. We additionally develop a novel procedure for adaptively identifying which work should be ranked by particular peers in order to dynamically resolve ambiguity in the data and rapidly resolve a clearer picture of learner performance. We showcase our results on both synthetic and several real-world educational datasets.",
"title": ""
},
{
"docid": "72d59a0605a82fc714020ac67ac1e52b",
"text": "We present an accurate stereo matching method using <italic>local expansion moves</italic> based on graph cuts. This new move-making scheme is used to efficiently infer per-pixel 3D plane labels on a pairwise Markov random field (MRF) that effectively combines recently proposed slanted patch matching and curvature regularization terms. The local expansion moves are presented as many <inline-formula><tex-math notation=\"LaTeX\">$\\alpha$</tex-math><alternatives> <inline-graphic xlink:href=\"taniai-ieq1-2766072.gif\"/></alternatives></inline-formula>-expansions defined for small grid regions. The local expansion moves extend traditional expansion moves by two ways: localization and spatial propagation. By localization, we use different candidate <inline-formula><tex-math notation=\"LaTeX\">$\\alpha$</tex-math> <alternatives><inline-graphic xlink:href=\"taniai-ieq2-2766072.gif\"/></alternatives></inline-formula>-labels according to the locations of local <inline-formula><tex-math notation=\"LaTeX\">$\\alpha$</tex-math><alternatives> <inline-graphic xlink:href=\"taniai-ieq3-2766072.gif\"/></alternatives></inline-formula>-expansions. By spatial propagation, we design our local <inline-formula><tex-math notation=\"LaTeX\">$\\alpha$</tex-math><alternatives> <inline-graphic xlink:href=\"taniai-ieq4-2766072.gif\"/></alternatives></inline-formula>-expansions to propagate currently assigned labels for nearby regions. With this localization and spatial propagation, our method can efficiently infer MRF models with a continuous label space using randomized search. Our method has several advantages over previous approaches that are based on fusion moves or belief propagation; it produces <italic>submodular moves </italic> deriving a <italic>subproblem optimality</italic>; it helps find good, smooth, piecewise linear disparity maps; it is suitable for parallelization; it can use cost-volume filtering techniques for accelerating the matching cost computations. Even using a simple pairwise MRF, our method is shown to have best performance in the Middlebury stereo benchmark V2 and V3.",
"title": ""
},
{
"docid": "e1fe3c9b60f316c8658a18796245c243",
"text": "The ransomware nightmare is taking over the internet impacting common users, small businesses and large ones. The interest and investment which are pushed into this market each month, tells us a few things about the evolution of both technical and social engineering and what to expect in the short-coming future from them. In this paper we analyze how ransomware programs developed in the last few years and how they were released in certain market segments throughout the deep web via RaaS, exploits or SPAM, while learning from their own mistakes to bring profit to the next level. We will also try to highlight some mistakes that were made, which allowed recovering the encrypted data, along with the ransomware authors preference for specific encryption types, how they got to distribute, the silent agreement between ransomwares, coin-miners and bot-nets and some edge cases of encryption, which may prove to be exploitable in the short-coming future.",
"title": ""
},
{
"docid": "f4a31f5dbd98ae0cc9faf3f0255dbca6",
"text": "Automotive SoCs are constantly being tested for correct functional operation, even long after they have left fabrication. The testing is done at the start of operation (car ignition) and repeatedly during operation (during the drive) to check for faults. Faults can result from, but are not restricted to, a failure in a part of a semiconductor circuit such as a failed transistor, interconnect failure due to electromigration, or faults caused by soft errors (e.g., an alpha particle switching a bit in a RAM or other circuit element). While the tests can run long after the chip was taped-out, the safety definition and test plan effort is starting as early as the specification definitions. In this paper we give an introduction to functional safety concentrating on the ISO26262 standard and we touch on a couple of approaches to functional safety for an Intellectual Property (IP) part such as a microprocessor, including software self-test libraries and logic BIST. We discuss the additional effort needed for developing a design for the automotive market. Lastly, we focus on our experience of using fault grading as a method for developing a self-test library that periodically tests the circuit operation. We discuss the effect that implementation decisions have on this effort and why it is important to start with this effort early in the design process.",
"title": ""
},
{
"docid": "a3205b696c9f93f1fbe1c8a198d41c57",
"text": "The axial magnetic flux leakage(MFL) inspection tools cannot reliably detect or size axially aligned cracks, such as SCC, longitudinal corrosion, long seam defects, and axially oriented mechanical damage. To focus on this problem, the circumferential MFL inspection tool is introduced. The finite element (FE) model is established by adopting ANSYS software to simulate magnetostatics. The results show that the amount of flux that is diverted out of the pipe depends on the geometry of the defect, the primary variables that affect the flux leakage are the ones that define the volume of the defect. The defect location can significantly affect flux leakage, the magnetic field magnitude arising due to the presence of the defect is immersed in the high field close to the permanent magnets. These results demonstrate the feasibility of detecting narrow axial defects and the practicality of developing a circumferential MFL tool.",
"title": ""
},
{
"docid": "2e4c4e734532fb9e70742c3a6333d592",
"text": "In this paper we address the problem of automated classification of isolates, i.e., the problem of determining the family of genomes to which a given genome belongs. Additionally, we address the problem of automated unsupervised hierarchical clustering of isolates according only to their statistical substring properties. For both of these problems we present novel algorithms based on nucleotide n-grams, with no required preprocessing steps such as sequence alignment. Results obtained experimentally are very positive and suggest that the proposed techniques can be successfully used in a variety of related problems. The reported experiments demonstrate better performance than some of the state-of-the-art methods. We report on a new distance measure between n-gram profiles, which shows superior performance compared to many other measures, including commonly used Euclidean distance.",
"title": ""
},
{
"docid": "ccfa5c06643cb3913b0813103a85e0b0",
"text": "We consider the problem of zero-shot recognition: learning a visual classifier for a category with zero training examples, just using the word embedding of the category and its relationship to other categories, which visual data are provided. The key to dealing with the unfamiliar or novel category is to transfer knowledge obtained from familiar classes to describe the unfamiliar class. In this paper, we build upon the recently introduced Graph Convolutional Network (GCN) and propose an approach that uses both semantic embeddings and the categorical relationships to predict the classifiers. Given a learned knowledge graph (KG), our approach takes as input semantic embeddings for each node (representing visual category). After a series of graph convolutions, we predict the visual classifier for each category. During training, the visual classifiers for a few categories are given to learn the GCN parameters. At test time, these filters are used to predict the visual classifiers of unseen categories. We show that our approach is robust to noise in the KG. More importantly, our approach provides significant improvement in performance compared to the current state-of-the-art results (from 2 ~ 3% on some metrics to whopping 20% on a few).",
"title": ""
}
] | scidocsrr |
db7ee58f58d7f901dac2d5e03c4c4e75 | Arbitrary-Oriented Vehicle Detection in Aerial Imagery with Single Convolutional Neural Networks | [
{
"docid": "a0dbf8e57a7e11f88bc3ed14a1eabad7",
"text": "Detecting vehicles in aerial imagery plays an important role in a wide range of applications. The current vehicle detection methods are mostly based on sliding-window search and handcrafted or shallow-learning-based features, having limited description capability and heavy computational costs. Recently, due to the powerful feature representations, region convolutional neural networks (CNN) based detection methods have achieved state-of-the-art performance in computer vision, especially Faster R-CNN. However, directly using it for vehicle detection in aerial images has many limitations: (1) region proposal network (RPN) in Faster R-CNN has poor performance for accurately locating small-sized vehicles, due to the relatively coarse feature maps; and (2) the classifier after RPN cannot distinguish vehicles and complex backgrounds well. In this study, an improved detection method based on Faster R-CNN is proposed in order to accomplish the two challenges mentioned above. Firstly, to improve the recall, we employ a hyper region proposal network (HRPN) to extract vehicle-like targets with a combination of hierarchical feature maps. Then, we replace the classifier after RPN by a cascade of boosted classifiers to verify the candidate regions, aiming at reducing false detection by negative example mining. We evaluate our method on the Munich vehicle dataset and the collected vehicle dataset, with improvements in accuracy and robustness compared to existing methods.",
"title": ""
}
] | [
{
"docid": "9113e4ba998ec12dd2536073baf40610",
"text": "Fast adaptation of deep neural networks (DNN) is an important research topic in deep learning. In this paper, we have proposed a general adaptation scheme for DNN based on discriminant condition codes, which are directly fed to various layers of a pre-trained DNN through a new set of connection weights. Moreover, we present several training methods to learn connection weights from training data as well as the corresponding adaptation methods to learn new condition code from adaptation data for each new test condition. In this work, the fast adaptation scheme is applied to supervised speaker adaptation in speech recognition based on either frame-level cross-entropy or sequence-level maximum mutual information training criterion. We have proposed three different ways to apply this adaptation scheme based on the so-called speaker codes: i) Nonlinear feature normalization in feature space; ii) Direct model adaptation of DNN based on speaker codes; iii) Joint speaker adaptive training with speaker codes. We have evaluated the proposed adaptation methods in two standard speech recognition tasks, namely TIMIT phone recognition and large vocabulary speech recognition in the Switchboard task. Experimental results have shown that all three methods are quite effective to adapt large DNN models using only a small amount of adaptation data. For example, the Switchboard results have shown that the proposed speaker-code-based adaptation methods may achieve up to 8-10% relative error reduction using only a few dozens of adaptation utterances per speaker. Finally, we have achieved very good performance in Switchboard (12.1% in WER) after speaker adaptation using sequence training criterion, which is very close to the best performance reported in this task (\"Deep convolutional neural networks for LVCSR,\" T. N. Sainath et al., Proc. IEEE Acoust., Speech, Signal Process., 2013).",
"title": ""
},
{
"docid": "920b3c1264ad303bbb1a263ecf7c1162",
"text": "Nowadays, operational quality and robustness of cellular networks are among the hottest topics wireless communications research. As a response to a growing need in reduction of expenses for mobile operators, 3rd Generation Partnership Project (3GPP) initiated work on Minimization of Drive Tests (MDT). There are several major areas of standardization related to MDT, such as coverage, capacity, mobility optimization and verification of end user quality [1]. This paper presents results of the research devoted to Quality of Service (QoS) verification for MDT. The main idea is to jointly observe the user experienced QoS in terms of throughput, and corresponding radio conditions. Also the necessity to supplement the existing MDT metrics with the new reporting types is elaborated.",
"title": ""
},
{
"docid": "d7b60ce82b6deb61efdf2d6aef5f5341",
"text": "The Evolution of Cognitive Bias Despite widespread claims to the contrary, the human mind is not worse than rational… but may often be better than rational. On the surface, cognitive biases appear to be somewhat puzzling when viewed through an evolutionary lens. Because they depart from standards of logic and accuracy, they appear to be design flaws instead of examples of good engineering. Cognitive traits can be evaluated according to any number of performance criteria-logical sufficiency, accuracy, speed of processing, and so on. The value of a criterion depends on the question the scientist is asking. To the evolutionary psychologist, however, the evaluative task is not whether the cognitive feature is accurate or logical, but rather how well it solves a particular problem, and how solving this problem contributed to fitness ancestrally. Viewed in this way, if a cognitive bias positively impacted fitness it is not a design flaw – it is a design feature. This chapter discusses the many biases that are probably not the result of mere constraints on the design of the mind or other mysterious irrationalities, but rather are adaptations that can be studied and better understood from an evolutionary perspective. By cognitive bias, we mean cases in which human cognition reliably produces representations that are systematically distorted compared to some aspect of objective reality. We note that the term bias is used in the literature in a number of different ways (see, We do not seek to make commitments about these definitions here; rather, we use bias throughout this chapter in the relatively noncommittal sense defined above. An evolutionary psychological perspective predicts that the mind is equipped with function-specific mechanisms adapted for special purposes—mechanisms with special design for Cognitive Bias-3 solving problems such as mating, which are separate, at least in part, from those involved in solving problems of food choice, predator avoidance, and social exchange (e. demonstrating domain specificity in solving a particular problem is a part of building a case that the trait has been shaped by selection to perform that function. The evolved function of the eye, for instance, is to facilitate sight because it does this well (it exhibits proficiency), the features of the eye have the common and unique effect of facilitating sight (it exhibits specificity), and there are no plausible alternative hypotheses that account for the eye's features. Some design features that appear to be flaws when viewed in …",
"title": ""
},
{
"docid": "c77b2b45f189b6246c9f2e2ed527772f",
"text": "PaaS vendors face challenges in efficiently providing services with the growth of their offerings. In this paper, we explore how PaaS vendors are using containers as a means of hosting Apps. The paper starts with a discussion of PaaS Use case and the current adoption of Container based PaaS architectures with the existing vendors. We explore various container implementations - Linux Containers, Docker, Warden Container, lmctfy and OpenVZ. We look at how each of this implementation handle Process, FileSystem and Namespace isolation. We look at some of the unique features of each container and how some of them reuse base Linux Container implementation or differ from it. We also explore how IaaSlayer itself has started providing support for container lifecycle management along with Virtual Machines. In the end, we look at factors affecting container implementation choices and some of the features missing from the existing implementations for the next generation PaaS.",
"title": ""
},
{
"docid": "3b8e716e658176cebfbdb313c8cb22ac",
"text": "To realize the vision of Internet-of-Things (IoT), numerous IoT devices have been developed for improving daily lives, in which smart home devices are among the most popular ones. Smart locks rely on smartphones to ease the burden of physical key management and keep tracking the door opening/close status, the security of which have aroused great interests from the security community. As security is of utmost importance for the IoT environment, we try to investigate the security of IoT by examining smart lock security. Specifically, we focus on analyzing the security of August smart lock. The threat models are illustrated for attacking August smart lock. We then demonstrate several practical attacks based on the threat models toward August smart lock including handshake key leakage, owner account leakage, personal information leakage, and denial-of-service (DoS) attacks. We also propose the corresponding defense methods to counteract these attacks.",
"title": ""
},
{
"docid": "7b44c4ec18d01f46fdd513780ba97963",
"text": "This paper presents a robust approach for road marking detection and recognition from images captured by an embedded camera mounted on a car. Our method is designed to cope with illumination changes, shadows, and harsh meteorological conditions. Furthermore, the algorithm can effectively group complex multi-symbol shapes into an individual road marking. For this purpose, the proposed technique relies on MSER features to obtain candidate regions which are further merged using density-based clustering. Finally, these regions of interest are recognized using machine learning approaches. Worth noting, the algorithm is versatile since it does not utilize any prior information about lane position or road space. The proposed method compares favorably to other existing works through a large number of experiments on an extensive road marking dataset.",
"title": ""
},
{
"docid": "fc7d777932e990ddba30b13c77cfc88c",
"text": "With increasing volumes in data and more sophisticated Machine Learning algorithms, the demand for fast and energy efficient computation systems is also growing. The combination of classical CPU systems with more specialized hardware such as FPGAs offer one way to meet this demand. FPGAs are fast and energy efficient reconfigurable hardware devices allowing new design explorations for algorithms and their implementations. This report briefly discusses FPGAs as computational hardware and their application in the domain of Machine Learning, specifically in combination with Gaussian Processes.",
"title": ""
},
{
"docid": "853220dc960afe1b4b2137b934b1e235",
"text": "Multi-level marketing is a marketing approach that motivates its participants to promote a certain product among their friends. The popularity of this approach increases due to the accessibility of modern social networks, however, it existed in one form or the other long before the Internet age began (the infamous Pyramid scheme that dates back at least a century is in fact a special case of multi-level marketing). This paper lays foundations for the study of reward mechanisms in multi-level marketing within social networks. We provide a set of desired properties for such mechanisms and show that they are uniquely satisfied by geometric reward mechanisms. The resilience of mechanisms to false-name manipulations is also considered; while geometric reward mechanisms fail against such manipulations, we exhibit other mechanisms which are false-name-proof.",
"title": ""
},
{
"docid": "bf2065f6c04f566110667a22a9d1b663",
"text": "Casticin, a polymethoxyflavone occurring in natural plants, has been shown to have anticancer activities. In the present study, we aims to investigate the anti-skin cancer activity of casticin on melanoma cells in vitro and the antitumor effect of casticin on human melanoma xenografts in nu/nu mice in vivo. A flow cytometric assay was performed to detect expression of viable cells, cell cycles, reactive oxygen species production, levels of [Formula: see text] and caspase activity. A Western blotting assay and confocal laser microscope examination were performed to detect expression of protein levels. In the in vitro studies, we found that casticin induced morphological cell changes and DNA condensation and damage, decreased the total viable cells, and induced G2/M phase arrest. Casticin promoted reactive oxygen species (ROS) production, decreased the level of [Formula: see text], and promoted caspase-3 activities in A375.S2 cells. The induced G2/M phase arrest indicated by the Western blotting assay showed that casticin promoted the expression of p53, p21 and CHK-1 proteins and inhibited the protein levels of Cdc25c, CDK-1, Cyclin A and B. The casticin-induced apoptosis indicated that casticin promoted pro-apoptotic proteins but inhibited anti-apoptotic proteins. These findings also were confirmed by the fact that casticin promoted the release of AIF and Endo G from mitochondria to cytosol. An electrophoretic mobility shift assay (EMSA) assay showed that casticin inhibited the NF-[Formula: see text]B binding DNA and that these effects were time-dependent. In the in vivo studies, results from immuno-deficient nu/nu mice bearing the A375.S2 tumor xenograft indicated that casticin significantly suppressed tumor growth based on tumor size and weight decreases. Early G2/M arrest and mitochondria-dependent signaling contributed to the apoptotic A375.S2 cell demise induced by casticin. In in vivo experiments, A375.S2 also efficaciously suppressed tumor volume in a xenotransplantation model. Therefore, casticin might be a potential therapeutic agent for the treatment of skin cancer in the future.",
"title": ""
},
{
"docid": "a0aa33c4afa58bd4dff7eb209bfb7924",
"text": "OBJECTIVE\nTo assess whether frequent marijuana use is associated with residual neuropsychological effects.\n\n\nDESIGN\nSingle-blind comparison of regular users vs infrequent users of marijuana.\n\n\nPARTICIPANTS\nTwo samples of college undergraduates: 65 heavy users, who had smoked marijuana a median of 29 days in the last 30 days (range, 22 to 30 days) and who also displayed cannabinoids in their urine, and 64 light users, who had smoked a median of 1 day in the last 30 days (range, 0 to 9 days) and who displayed no urinary cannabinoids.\n\n\nINTERVENTION\nSubjects arrived at 2 PM on day 1 of their study visit, then remained at our center overnight under supervision. Neuropsychological tests were administered to all subjects starting at 9 AM on day 2. Thus, all subjects were abstinent from marijuana and other drugs for a minimum of 19 hours before testing.\n\n\nMAIN OUTCOME MEASURES\nSubjects received a battery of standard neuropsychological tests to assess general intellectual functioning, abstraction ability, sustained attention, verbal fluency, and ability to learn and recall new verbal and visuospatial information.\n\n\nRESULTS\nHeavy users displayed significantly greater impairment than light users on attention/executive functions, as evidenced particularly by greater perseverations on card sorting and reduced learning of word lists. These differences remained after controlling for potential confounding variables, such as estimated levels of premorbid cognitive functioning, and for use of alcohol and other substances in the two groups.\n\n\nCONCLUSIONS\nHeavy marijuana use is associated with residual neuropsychological effects even after a day of supervised abstinence from the drug. However, the question remains open as to whether this impairment is due to a residue of drug in the brain, a withdrawal effect from the drug, or a frank neurotoxic effect of the drug. from marijuana",
"title": ""
},
{
"docid": "fe19e30124ab7472521f93fe9408dd54",
"text": "uted to the discovery and characterization of new materials. The discovery of semiconductors laid the foundation for modern electronics, while the formulation of new molecules allows us to treat diseases previously thought incurable. Looking into the future, some of the largest problems facing humanity now are likely to be solved by the discovery of new materials. In this article, we explore the techniques materials scientists are using and show how our novel artificial intelligence system, Phase-Mapper, allows materials scientists to quickly solve material systems to infer their underlying crystal structures and has led to the discovery of new solar light absorbers. Articles",
"title": ""
},
{
"docid": "5a889a9091282e50eeae2fa4fedc750d",
"text": "This study explores the role of speech register and prosody for the task of word segmentation. Since these two factors are thought to play an important role in early language acquisition, we aim to quantify their contribution for this task. We study a Japanese corpus containing both infantand adult-directed speech and we apply four different word segmentation models, with and without knowledge of prosodic boundaries. The results showed that the difference between registers is smaller than previously reported and that prosodic boundary information helps more adultthan infant-directed speech.",
"title": ""
},
{
"docid": "43beba8ec2a324546bce095e9c1d9f0c",
"text": "Scenario-based specifications such as Message Sequence Charts (MSCs) are useful as part of a requirements specification. A scenario is a partial story, describing how system components, the environment, and users work concurrently and interact in order to provide system level functionality. Scenarios need to be combined to provide a more complete description of system behavior. Consequently, scenario synthesis is central to the effective use of scenario descriptions. How should a set of scenarios be interpreted? How do they relate to one another? What is the underlying semantics? What assumptions are made when synthesizing behavior models from multiple scenarios? In this paper, we present an approach to scenario synthesis based on a clear sound semantics, which can support and integrate many of the existing approaches to scenario synthesis. The contributions of the paper are threefold. We first define an MSC language with sound abstract semantics in terms of labeled transition systems and parallel composition. The language integrates existing approaches based on scenario composition by using high-level MSCs (hMSCs) and those based on state identification by introducing explicit component state labeling. This combination allows stakeholders to break up scenario specifications into manageable parts and reuse scenarios using hMCSs; it also allows them to introduce additional domainspecific information and general assumptions explicitly into the scenario specification using state labels. Second, we provide a sound synthesis algorithm which translates scenarios into a behavioral specification in the form of Finite Sequential Processes. This specification can be analyzed with the Labeled Transition System Analyzer using model checking and animation. Finally, we demonstrate how many of the assumptions embedded in existing synthesis approaches can be made explicit and modeled in our approach. Thus, we provide the basis for a common approach to scenario-based specification, synthesis, and analysis.",
"title": ""
},
{
"docid": "5a5fbde8e0e264410fe23322a9070a39",
"text": "By asking users of career-oriented social networking sites I investigated their job search behavior. For further IS-theorizing I integrated the number of a user's contacts as an own construct into Venkatesh's et al. UTAUT2 model, which substantially rose its predictive quality from 19.0 percent to 80.5 percent concerning the variance of job search success. Besides other interesting results I found a substantial negative relationship between the number of contacts and job search success, which supports the experience of practitioners but contradicts scholarly findings. The results are useful for scholars and practitioners.",
"title": ""
},
{
"docid": "57384df0c477dca29d4a572af32a1871",
"text": "In this paper, a simple algorithm for detecting the range and shape of tumor in brain MR Images is described. Generally, CT scan or MRI that is directed into intracranial cavity produces a complete image of brain. This image is visually examined by the physician for detection and diagnosis of brain tumor. To avoid that, this project uses computer aided method for segmentation (detection) of brain tumor based on the combination of two algorithms. This method allows the segmentation of tumor tissue with accuracy and reproducibility comparable to manual segmentation. In addition, it also reduces the time for analysis. At the end of the process the tumor is extracted from the MR image and its exact position and the shape also determined. The stage of the tumor is displayed based on the amount of area calculated from the cluster.",
"title": ""
},
{
"docid": "b78d5e7047d340ebef8f4e80d28ab4d9",
"text": "Light scattering and color change are two major sources of distortion for underwater photography. Light scattering is caused by light incident on objects reflected and deflected multiple times by particles present in the water before reaching the camera. This in turn lowers the visibility and contrast of the image captured. Color change corresponds to the varying degrees of attenuation encountered by light traveling in the water with different wavelengths, rendering ambient underwater environments dominated by a bluish tone. No existing underwater processing techniques can handle light scattering and color change distortions suffered by underwater images, and the possible presence of artificial lighting simultaneously. This paper proposes a novel systematic approach to enhance underwater images by a dehazing algorithm, to compensate the attenuation discrepancy along the propagation path, and to take the influence of the possible presence of an artifical light source into consideration. Once the depth map, i.e., distances between the objects and the camera, is estimated, the foreground and background within a scene are segmented. The light intensities of foreground and background are compared to determine whether an artificial light source is employed during the image capturing process. After compensating the effect of artifical light, the haze phenomenon and discrepancy in wavelength attenuation along the underwater propagation path to camera are corrected. Next, the water depth in the image scene is estimated according to the residual energy ratios of different color channels existing in the background light. Based on the amount of attenuation corresponding to each light wavelength, color change compensation is conducted to restore color balance. The performance of the proposed algorithm for wavelength compensation and image dehazing (WCID) is evaluated both objectively and subjectively by utilizing ground-truth color patches and video downloaded from the Youtube website. Both results demonstrate that images with significantly enhanced visibility and superior color fidelity are obtained by the WCID proposed.",
"title": ""
},
{
"docid": "6d329c1fa679ac201387c81f59392316",
"text": "Mosquitoes represent the major arthropod vectors of human disease worldwide transmitting malaria, lymphatic filariasis, and arboviruses such as dengue virus and Zika virus. Unfortunately, no treatment (in the form of vaccines or drugs) is available for most of these diseases andvectorcontrolisstillthemainformofprevention. Thelimitationsoftraditionalinsecticide-based strategies, particularly the development of insecticide resistance, have resulted in significant efforts to develop alternative eco-friendly methods. Biocontrol strategies aim to be sustainable and target a range of different mosquito species to reduce the current reliance on insecticide-based mosquito control. In thisreview, weoutline non-insecticide basedstrategiesthat havebeenimplemented orare currently being tested. We also highlight the use of mosquito behavioural knowledge that can be exploited for control strategies.",
"title": ""
},
{
"docid": "3a7427c67b7758516af15da12b663c40",
"text": "The initial focus of recombinant protein production by filamentous fungi related to exploiting the extraordinary extracellular enzyme synthesis and secretion machinery of industrial strains, including Aspergillus, Trichoderma, Penicillium and Rhizopus species, was to produce single recombinant protein products. An early recognized disadvantage of filamentous fungi as hosts of recombinant proteins was their common ability to produce homologous proteases which could degrade the heterologous protein product and strategies to prevent proteolysis have met with some limited success. It was also recognized that the protein glycosylation patterns in filamentous fungi and in mammals were quite different, such that filamentous fungi are likely not to be the most suitable microbial hosts for production of recombinant human glycoproteins for therapeutic use. By combining the experience gained from production of single recombinant proteins with new scientific information being generated through genomics and proteomics research, biotechnologists are now poised to extend the biomanufacturing capabilities of recombinant filamentous fungi by enabling them to express genes encoding multiple proteins, including, for example, new biosynthetic pathways for production of new primary or secondary metabolites. It is recognized that filamentous fungi, most species of which have not yet been isolated, represent an enormously diverse source of novel biosynthetic pathways, and that the natural fungal host harboring a valuable biosynthesis pathway may often not be the most suitable organism for biomanufacture purposes. Hence it is expected that substantial effort will be directed to transforming other fungal hosts, non-fungal microbial hosts and indeed non microbial hosts to express some of these novel biosynthetic pathways. But future applications of recombinant expression of proteins will not be confined to biomanufacturing. Opportunities to exploit recombinant technology to unravel the causes of the deleterious impacts of fungi, for example as human, mammalian and plant pathogens, and then to bring forward solutions, is expected to represent a very important future focus of fungal recombinant protein technology.",
"title": ""
},
{
"docid": "55aea20148423bdb7296addac847d636",
"text": "This paper describes an underwater sensor network with dual communication and support for sensing and mobility. The nodes in the system are connected acoustically for broadcast communication using an acoustic modem we developed. For higher point to point communication speed the nodes are networked optically using custom built optical modems. We describe the hardware details of the underwater sensor node and the communication and networking protocols. Finally, we present and discuss the results from experiments with this system.",
"title": ""
},
{
"docid": "4ce681973defd1564e2774a38598d983",
"text": "OBJECTIVE\nThe Montreal Cognitive Assessment (MoCA; Nasreddine et al., 2005) is a cognitive screening tool that aims to differentiate healthy cognitive aging from Mild Cognitive Impairment (MCI). Several validation studies have been conducted on the MoCA, in a variety of clinical populations. Some studies have indicated that the originally suggested cutoff score of 26/30 leads to an inflated rate of false positives, particularly for those of older age and/or lower education. We conducted a systematic review and meta-analysis of the literature to determine the diagnostic accuracy of the MoCA for differentiating healthy cognitive aging from possible MCI.\n\n\nMETHODS\nOf the 304 studies identified, nine met inclusion criteria for the meta-analysis. These studies were assessed across a range of cutoff scores to determine the respective sensitivities, specificities, positive and negative predictive accuracies, likelihood ratios for positive and negative results, classification accuracies, and Youden indices.\n\n\nRESULTS\nMeta-analysis revealed a cutoff score of 23/30 yielded the best diagnostic accuracy across a range of parameters.\n\n\nCONCLUSIONS\nA MoCA cutoff score of 23, rather than the initially recommended score of 26, lowers the false positive rate and shows overall better diagnostic accuracy. We recommend the use of this cutoff score going forward. Copyright © 2017 John Wiley & Sons, Ltd.",
"title": ""
}
] | scidocsrr |
ac23df5a73ad17c004a7e12bef38b27e | Mobile bin picking with an anthropomorphic service robot | [
{
"docid": "a769b8f56d699b3f6eca54aeeb314f84",
"text": "Assistive mobile robots that autonomously manipulate objects within everyday settings have the potential to improve the lives of the elderly, injured, and disabled. Within this paper, we present the most recent version of the assistive mobile manipulator EL-E with a focus on the subsystem that enables the robot to retrieve objects from and deliver objects to flat surfaces. Once provided with a 3D location via brief illumination with a laser pointer, the robot autonomously approaches the location and then either grasps the nearest object or places an object. We describe our implementation in detail, while highlighting design principles and themes, including the use of specialized behaviors, task-relevant features, and low-dimensional representations. We also present evaluations of EL-E’s performance relative to common forms of variation. We tested EL-E’s ability to approach and grasp objects from the 25 object categories that were ranked most important for robotic retrieval by motor-impaired patients from the Emory ALS Center. Although reliability varied, EL-E succeeded at least once with objects from 21 out of 25 of these categories. EL-E also approached and grasped a cordless telephone on 12 different surfaces including floors, tables, and counter tops with 100% success. The same test using a vitamin pill (ca. 15mm ×5mm ×5mm) resulted in 58% success.",
"title": ""
},
{
"docid": "3bba459c9f4cae50db1aa4e8104891f1",
"text": "Unstructured human environments present a substantial challenge to effective robotic operation. Mobile manipulation in typical human environments requires dealing with novel unknown objects, cluttered workspaces, and noisy sensor data. We present an approach to mobile pick and place in such environments using a combination of 2D and 3D visual processing, tactile and proprioceptive sensor data, fast motion planning, reactive control and monitoring, and reactive grasping. We demonstrate our approach by using a two-arm mobile manipulation system to pick and place objects. Reactive components allow our system to account for uncertainty arising from noisy sensors, inaccurate perception (e.g. object detection or registration) or dynamic changes in the environment. We also present a set of tools that allow our system to be easily configured within a short time for a new robotic system.",
"title": ""
}
] | [
{
"docid": "976b6cd312c0e3386aad6ff830e589e4",
"text": "PROBLEM AND METHOD\nThis paper takes a critical look at the present state of bicycle infrastructure treatment safety research, highlighting data needs. Safety literature relating to 22 bicycle treatments is examined, including findings, study methodologies, and data sources used in the studies. Some preliminary conclusions related to research efficacy are drawn from the available data and findings in the research.\n\n\nRESULTS AND DISCUSSION\nWhile the current body of bicycle safety literature points toward some defensible conclusions regarding the safety and effectiveness of certain bicycle treatments, such as bike lanes and removal of on-street parking, the vast majority treatments are still in need of rigorous research. Fundamental questions arise regarding appropriate exposure measures, crash measures, and crash data sources.\n\n\nPRACTICAL APPLICATIONS\nThis research will aid transportation departments with regard to decisions about bicycle infrastructure and guide future research efforts toward understanding safety impacts of bicycle infrastructure.",
"title": ""
},
{
"docid": "bbc2645372369d0ad68551b20e57e24b",
"text": "The objective of this paper is to present an approach to electromagnetic field simulation based on the systematic use of the global (i.e. integral) quantities. In this approach, the equations of electromagnetism are obtained directly in a finite form starting from experimental laws without resorting to the differential formulation. This finite formulation is the natural extension of the network theory to electromagnetic field and it is suitable for computational electromagnetics.",
"title": ""
},
{
"docid": "6b1e67c1768f9ec7a6ab95a9369b92d1",
"text": "Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.",
"title": ""
},
{
"docid": "123f5d93d0b7c483a50d73ba04762550",
"text": "Chemistry and biology are intimately connected sciences yet the chemistry-biology interface remains problematic and central issues regarding the very essence of living systems remain unresolved. In this essay we build on a kinetic theory of replicating systems that encompasses the idea that there are two distinct kinds of stability in nature-thermodynamic stability, associated with \"regular\" chemical systems, and dynamic kinetic stability, associated with replicating systems. That fundamental distinction is utilized to bridge between chemistry and biology by demonstrating that within the parallel world of replicating systems there is a second law analogue to the second law of thermodynamics, and that Darwinian theory may, through scientific reductionism, be related to that second law analogue. Possible implications of these ideas to the origin of life problem and the relationship between chemical emergence and biological evolution are discussed.",
"title": ""
},
{
"docid": "aa7029c5e29a72a8507cbcb461ef92b0",
"text": "Regenerative endodontics has been defined as \"biologically based procedure designed to replace damaged structures, including dentin and root structures, as well as cells of the pulp-dentin complex.\" This is an exciting and rapidly evolving field of human endodontics for the treatment of immature permanent teeth with infected root canal systems. These procedures have shown to be able not only to resolve pain and apical periodontitis but continued root development, thus increasing the thickness and strength of the previously thin and fracture-prone roots. In the last decade, over 80 case reports, numerous animal studies, and series of regenerative endodontic cases have been published. However, even with multiple successful case reports, there are still some remaining questions regarding terminology, patient selection, and procedural details. Regenerative endodontics provides the hope of converting a nonvital tooth into vital one once again.",
"title": ""
},
{
"docid": "0c6403b9486b5f44a735192edd807deb",
"text": "Prior to the start of cross-sex hormone therapy (CSH), androgenic progestins are often used to induce amenorrhea in female to male (FtM) pubertal adolescents with gender dysphoria (GD). The aim of this single-center study is to report changes in anthropometry, side effects, safety parameters, and hormone levels in a relatively large cohort of FtM adolescents with a diagnosis of GD at Tanner stage B4 or further, who were treated with lynestrenol (Orgametril®) monotherapy and in combination with testosterone esters (Sustanon®). A retrospective analysis of clinical and biochemical data obtained during at least 6 months of hormonal treatment in FtM adolescents followed at our adolescent gender clinic since 2010 (n = 45) was conducted. McNemar’s test to analyze reported side effects over time was performed. A paired Student’s t test or a Wilcoxon signed-ranks test was performed, as appropriate, on anthropometric and biochemical data. For biochemical analyses, all statistical tests were done in comparison with baseline parameters. Patients who were using oral contraceptives (OC) at intake were excluded if a Mann-Whitney U test indicated influence of OC. Metrorrhagia and acne were most pronounced during the first months of monotherapy and combination therapy respectively and decreased thereafter. Headaches, hot flushes, and fatigue were the most reported side effects. Over the course of treatment, an increase in musculature, hemoglobin, hematocrit, creatinine, and liver enzymes was seen, progressively sliding into male reference ranges. Lipid metabolism shifted to an unfavorable high-density lipoprotein (HDL)/low-density lipoprotein (LDL) ratio; glucose metabolism was not affected. Sex hormone-binding globulin (SHBG), total testosterone, and estradiol levels decreased, and free testosterone slightly increased during monotherapy; total and free testosterone increased significantly during combination therapy. Gonadotropins were only fully suppressed during combination therapy. Anti-Müllerian hormone (AMH) remained stable throughout the treatment. Changes occurred in the first 6 months of treatment and remained mostly stable thereafter. Treatment of FtM gender dysphoric adolescents with lynestrenol monotherapy and in combination with testosterone esters is effective, safe, and inexpensive; however, suppression of gonadotropins is incomplete. Regular blood controls allow screening for unphysiological changes in safety parameters or hormonal levels and for medication abuse.",
"title": ""
},
{
"docid": "1ecff18c4c4134bc8f501b8c4d9aa2d1",
"text": "Swarms of robots will revolutionize many industrial applications, from targeted material delivery to precision farming. However, several of the heterogeneous characteristics that make them ideal for certain future applications — robot autonomy, decentralized control, collective emergent behavior, etc. — hinder the evolution of the technology from academic institutions to real-world problems. Blockchain, an emerging technology originated in the Bitcoin field, demonstrates that by combining peer-topeer networks with cryptographic algorithms a group of agents can reach an agreement on a particular state of affairs and record that agreement without the need for a controlling authority. The combination of blockchain with other distributed systems, such as robotic swarm systems, can provide the necessary capabilities to make robotic swarm operations more secure, autonomous, flexible and even profitable. This work explains how blockchain technology can provide innovative solutions to four emergent issues in the swarm robotics research field. New security, decision making, behavior differentiation and business models for swarm robotic systems are described by providing case scenarios and examples. Finally, limitations and possible future problems that arise from the combination of these two technologies are described. I. THE BLOCKCHAIN: A DISRUPTIVE",
"title": ""
},
{
"docid": "4bb7720583a1a33b2dff5d7a994b44af",
"text": "Automatic License Plate Recognition (ALPR) systems capture a vehicle‟s license plate and recognize the license number and other required information from the captured image. ALPR systems have numbers of significant applications: law enforcement, public safety agencies, toll gate systems, etc. The goal of these systems is to recognize the characters and state on the license plate with high accuracy. ALPR has been implemented using various techniques. Traditional recognition methods use handcrafted features for obtaining features from the image. Unlike conventional methods, deep learning techniques automatically select features and are one of the game changing technologies in the field of computer vision, automatic recognition tasks and natural language processing. Some of the most successful deep learning methods involve Convolutional Neural Networks. This technique applies deep learning techniques to the ALPR problem of recognizing the state and license number from the USA license plate. Existing ALPR systems include three stages of",
"title": ""
},
{
"docid": "e49e65b40bf1cccdcbf223a109bec267",
"text": "Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model’s prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on blackbox models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches.",
"title": ""
},
{
"docid": "70a73dad03925580cdc3a7ef069f6f3a",
"text": "Recently, there has been a great attention to develop feature selection methods on the microarray high dimensional datasets. In this paper, an innovative method based on Maximum Relevancy and Minimum Redundancy (MRMR) approach by using Hesitant Fuzzy Sets (HFSs) is proposed to deal with feature subset selection; the method is called MRMR-HFS. MRMR-HFS is a novel filterbased feature selection algorithm that selects features by ensemble of ranking algorithms (as the measure of feature-class relevancy that must be maximized) and similarity measures (as the measure of feature-feature redundancy that must be minimized). The combination of ranking algorithms and similarity measures are done by using the fundamental concepts of information energies of HFSs. The proposed method has been inspired from Correlation based Feature Selection (CFS) within the sequential forward search in order to present a robust feature selection tool to solve high dimensional problems. To evaluate the effectiveness of the MRMR-HFS, several experimental results are carried out on nine well-known microarray high dimensional datasets. The obtained results are compared with those of other similar state-of-the-art algorithms including Correlation-based Feature Selection (CFS), Fast Correlation-based Filter (FCBF), Intract (INT), and Maximum Relevancy Minimum Redundancy (MRMR). The outcomes of comparison carried out via some non-parametric statistical tests confirm that the MRMR-HFS is effective for feature subset selection in high dimensional datasets in terms of accuracy, sensitivity, specificity, G-mean, and number of selected features.",
"title": ""
},
{
"docid": "8ed122ede076474bdad5c8fa2c8fd290",
"text": "Faced with changing markets and tougher competition, more and more companies realize that to compete effectively they must transform how they function. But while senior managers understand the necessity of change, they often misunderstand what it takes to bring it about. They assume that corporate renewal is the product of company-wide change programs and that in order to transform employee behavior, they must alter a company's formal structure and systems. Both these assumptions are wrong, say these authors. Using examples drawn from their four-year study of organizational change at six large corporations, they argue that change programs are, in fact, the greatest obstacle to successful revitalization and that formal structures and systems are the last thing a company should change, not the first. The most successful change efforts begin at the periphery of a corporation, in a single plant or division. Such efforts are led by general managers, not the CEO or corporate staff people. And these general managers concentrate not on changing formal structures and systems but on creating ad hoc organizational arrangements to solve concrete business problems. This focuses energy for change on the work itself, not on abstractions such as \"participation\" or \"culture.\" Once general managers understand the importance of this grass-roots approach to change, they don't have to wait for senior management to start a process of corporate renewal. The authors describe a six-step change process they call the \"critical path.\"",
"title": ""
},
{
"docid": "4d832a8716aebf7c36ae6894ce1bac33",
"text": "Autonomous vehicles require a reliable perception of their environment to operate in real-world conditions. Awareness of moving objects is one of the key components for the perception of the environment. This paper proposes a method for detection and tracking of moving objects (DATMO) in dynamic environments surrounding a moving road vehicle equipped with a Velodyne laser scanner and GPS/IMU localization system. First, at every time step, a local 2.5D grid is built using the last sets of sensor measurements. Along time, the generated grids combined with localization data are integrated into an environment model called local 2.5D map. In every frame, a 2.5D grid is compared with an updated 2.5D map to compute a 2.5D motion grid. A mechanism based on spatial properties is presented to suppress false detections that are due to small localization errors. Next, the 2.5D motion grid is post-processed to provide an object level representation of the scene. The detected moving objects are tracked over time by applying data association and Kalman filtering. The experiments conducted on different sequences from KITTI dataset showed promising results, demonstrating the applicability of the proposed method.",
"title": ""
},
{
"docid": "5ae415a28817c2bb774989b55e2f68b3",
"text": "Many applications of unmanned aerial vehicles (UAVs) require the capability to navigate to some goal and to perform precise and safe landing. In this paper, we present a visual navigation system as an alternative pose estimation method for environments and situations in which GPS is unavailable. The developed visual odometer is an incremental procedure that estimates the vehicle's ego-motion by extracting and tracking visual features, using an onboard camera. For more robustness and accuracy, the visual estimates are fused with measurements from an Inertial Measurement Unit (IMU) and a Pressure Sensor Altimeter (PSA) in order to provide accurate estimates of the vehicle's height, velocity and position relative to a given location. These estimates are then exploited by a nonlinear hierarchical controller for achieving various navigation tasks such as take-off, landing, hovering, target tracking, etc. In addition to the odometer description, the paper presents validation results from autonomous flights using a small quadrotor UAV.",
"title": ""
},
{
"docid": "4d70f4c4bd83e2ee531071ef99cac317",
"text": "Image features such as step edges, lines and Mach bands all give rise to points where the Fourier components of the image are maximally in phase. The use of phase congruency for marking features has signiicant advantages over gradient based methods. It is a dimension-less quantity that is invariant to changes in image brightness or contrast, hence it provides an absolute measure of the signiicance of feature points. This allows the use of universal threshold values that can be applied over wide classes of images. This paper presents a new way of calculating phase congruency through the use of wavelets. The existing theory that has been developed for 1D signals is extended to allow the calculation of phase congruency in 2D images. It is shown that for good localization it is important to consider the spread of frequencies present at a point of phase congruency. An eeective method for identifying, and compensating for, the level of noise in an image is presented. Finally, it is argued that high-pass ltering should be used to obtain image information at diierent scales. With this approach the choice of scale only aaects the relative signiicance of features without degrading their localization. Abstract Image features such as step edges, lines and Mach bands all give rise to points where the Fourier components of the image are maximally in phase. The use of phase congruency for marking features has signiicant advantages over gradient based methods. It is a dimensionless quantity that is invariant to changes in image brightness or contrast, hence it provides an absolute measure of the signiicance of feature points. This allows the use of universal threshold values that can be applied over wide classes of images. This paper presents a new way of calculating phase congruency through the use of wavelets. The existing theory that has been developed for 1D signals is extended to allow the calculation of phase congruency in 2D images. It is shown that for good localization it is important to consider the spread of frequencies present at a point of phase congruency. An eeective method for identifying, and compensating for, the level of noise in an image is presented. Finally, it is argued that high-pass ltering should be used to obtain image information at diierent scales. With this approach the choice of scale only aaects the relative signiicance of features without degrading their localization.",
"title": ""
},
{
"docid": "7ca5eac9be1ba8c1738862f24dd707d2",
"text": "This essay develops the philosophical foundations for design research in the Technology of Information Systems (TIS). Traditional writings on philosophy of science cannot fully describe this mode of research, which dares to intervene and improve to realize alternative futures instead of explaining or interpreting the past to discover truth. Accordingly, in addition to philosophy of science, the essay draws on writings about the act of designing, philosophy of technology and the substantive (IS) discipline. I define design research in TIS as in(ter)vention in the representational world defined by the hierarchy of concerns following semiotics. The complementary nature of the representational (internal) and real (external) environments provides the basis to articulate the dual ontological and epistemological bases. Understanding design research in TIS in this manner suggests operational principles in the internal world as the form of knowledge created by design researchers, and artifacts that embody these are seen as situated instantiations of normative theories that affect the external phenomena of interest. Throughout the paper, multiple examples illustrate the arguments. Finally, I position the resulting ‘method’ for design research vis-à-vis existing research methods and argue for its legitimacy as a viable candidate for research in the IS discipline.",
"title": ""
},
{
"docid": "0b1b4c8d501c3b1ab350efe4f2249978",
"text": "Motivated by formation control of multiple non-holonomic mobile robots, this paper presents a trajectory tracking control scheme design for nonholonomic mobile robots that are equipped with low-level linear and angular velocities control systems. The design includes a nonlinear kinematic trajectory tracking control law and a tracking control gains selection method that provide a means to implement the nonlinear tracking control law systematically based on the dynamic control performance of the robot's low-level control systems. In addition, the proposed scheme, by design, enables the mobile robot to execute reference trajectories that are represented by time-parameterized waypoints. This feature provides the scheme a generic interface with higher-level trajectory planners. The trajectory tracking control scheme is validated using an iRobot Packbot's parameteric model estimated from experimental data.",
"title": ""
},
{
"docid": "b9b027c5b511a5528d35cd05d3d57ff4",
"text": "A plasmid is defined as a double stranded, circular DNA molecule capable of autonomous replication. By definition, plasmids do not carry genes essential for the growth of host cells under non-stressed conditions but they have systems which guarantee their autonomous replication also controlling the copy number and ensuring stable inheritance during cell division. Most of the plasmids confer positively selectable phenotypes by the presence of antimicrobial resistance genes. Plasmids evolve as an integral part of the bacterial genome, providing resistance genes that can be easily exchanged among bacteria of different origin and source by conjugation. A multidisciplinary approach is currently applied to study the acquisition and spread of antimicrobial resistance in clinically relevant bacterial pathogens and the established surveillance can be implemented by replicon typing of plasmids. Particular plasmid families are more frequently detected among Enterobacteriaceae and play a major role in the diffusion of specific resistance genes. For instance, IncFII, IncA/C, IncL/M, IncN and IncI1 plasmids carrying extended-spectrum beta-lactamase genes and acquired AmpC genes are currently considered to be \"epidemic resistance plasmids\", being worldwide detected in Enterobacteriaceae of different origin and sources. The recognition of successful plasmids is an essential first step to design intervention strategies preventing their spread.",
"title": ""
},
{
"docid": "4083af4f4c546056123e8b4f0489e5cf",
"text": "In this paper, a multi-agent optimization algorithm (MAOA) is proposed for solving the resourceconstrained project scheduling problem (RCPSP). In the MAOA, multiple agents work in a grouped environment where each agent represents a feasible solution. The evolution of agents is achieved by using four main elements in the MAOA, including social behavior, autonomous behavior, self-learning, and environment adjustment. The social behavior includes the global one and the local one for performing exploration. Through the global social behavior, the leader agent in every group is guided by the global best leader. Through the local social behavior, each agent is guided by its own leader agent. Through the autonomous behavior, each agent exploits its own neighborhood. Through the self-learning, the best agent performs an intensified search to further exploit the promising region. Meanwhile, some agents perform migration among groups to adjust the environment dynamically for information sharing. The implementation of the MAOA for solving the RCPSP is presented in detail, and the effect of key parameters of the MAOA is investigated based on the Taguchi method of design of experiment. Numerical testing results are provided by using three sets of benchmarking instances. The comparisons to the existing algorithms demonstrate the effectiveness of the proposed MAOA for solving the RCPSP. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6a82dfa1d79016388c38ccba77c56ae5",
"text": "Scripts define knowledge about how everyday scenarios (such as going to a restaurant) are expected to unfold. One of the challenges to learning scripts is the hierarchical nature of the knowledge. For example, a suspect arrested might plead innocent or guilty, and a very different track of events is then expected to happen. To capture this type of information, we propose an autoencoder model with a latent space defined by a hierarchy of categorical variables. We utilize a recently proposed vector quantization based approach, which allows continuous embeddings to be associated with each latent variable value. This permits the decoder to softly decide what portions of the latent hierarchy to condition on by attending over the value embeddings for a given setting. Our model effectively encodes and generates scripts, outperforming a recent language modeling-based method on several standard tasks, and allowing the autoencoder model to achieve substantially lower perplexity scores compared to the previous language modelingbased method.",
"title": ""
},
{
"docid": "d5242082f647198311ca12f209c6665a",
"text": "Despite the tremendous empirical success of neural models in natural language processing, many of them lack the strong intuitions that accompany classical machine learning approaches. Recently, connections have been shown between convolutional neural networks (CNNs) and weighted finite state automata (WFSAs), leading to new interpretations and insights. In this work, we show that some recurrent neural networks also share this connection to WFSAs. We characterize this connection formally, defining rational recurrences to be recurrent hidden state update functions that can be written as the Forward calculation of a finite set of WFSAs. We show that several recent neural models use rational recurrences. Our analysis provides a fresh view of these models and facilitates devising new neural architectures that draw inspiration from WFSAs. We present one such model, which performs better than two recent baselines on language modeling and text classification. Our results demonstrate that transferring intuitions from classical models like WFSAs can be an effective approach to designing and understanding neural models.",
"title": ""
}
] | scidocsrr |
202a652cfa3e199a78fd20234f5c1dd8 | A Sentence Simplification System for Improving Relation Extraction | [
{
"docid": "5aeffba75c1e6d5f0e7bde54662da8e8",
"text": "A large number of Open Relation Extraction approaches have been proposed recently, covering a wide range of NLP machinery, from “shallow” (e.g., part-of-speech tagging) to “deep” (e.g., semantic role labeling–SRL). A natural question then is what is the tradeoff between NLP depth (and associated computational cost) versus effectiveness. This paper presents a fair and objective experimental comparison of 8 state-of-the-art approaches over 5 different datasets, and sheds some light on the issue. The paper also describes a novel method, EXEMPLAR, which adapts ideas from SRL to less costly NLP machinery, resulting in substantial gains both in efficiency and effectiveness, over binary and n-ary relation extraction tasks.",
"title": ""
},
{
"docid": "4261755b137a5cde3d9f33c82bc53cd7",
"text": "We study the problem of automatically extracting information networks formed by recognizable entities as well as relations among them from social media sites. Our approach consists of using state-of-the-art natural language processing tools to identify entities and extract sentences that relate such entities, followed by using text-clustering algorithms to identify the relations within the information network. We propose a new term-weighting scheme that significantly improves on the state-of-the-art in the task of relation extraction, both when used in conjunction with the standard tf ċ idf scheme and also when used as a pruning filter. We describe an effective method for identifying benchmarks for open information extraction that relies on a curated online database that is comparable to the hand-crafted evaluation datasets in the literature. From this benchmark, we derive a much larger dataset which mimics realistic conditions for the task of open information extraction. We report on extensive experiments on both datasets, which not only shed light on the accuracy levels achieved by state-of-the-art open information extraction tools, but also on how to tune such tools for better results.",
"title": ""
},
{
"docid": "5f2818d3a560aa34cc6b3dbfd6b8f2cc",
"text": "Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, stateof-the-art Open IE systems such as REVERB and WOE share two important weaknesses – (1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents OLLIE, a substantially improved Open IE system that addresses both these limitations. First, OLLIE achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. OLLIE obtains 2.7 times the area under precision-yield curve (AUC) compared to REVERB and 1.9 times the AUC of WOE.",
"title": ""
},
{
"docid": "40405c31dfd3439252eb1810a373ec0e",
"text": "Traditional relation extraction seeks to identify pre-specified semantic relations within natural language text, while open Information Extraction (Open IE) takes a more general approach, and looks for a variety of relations without restriction to a fixed relation set. With this generalization comes the question, what is a relation? For example, should the more general task be restricted to relations mediated by verbs, nouns, or both? To help answer this question, we propose two levels of subtasks for Open IE. One task is to determine if a sentence potentially contains a relation between two entities? The other task looks to confirm explicit relation words for two entities. We propose multiple SVM models with dependency tree kernels for both tasks. For explicit relation extraction, our system can extract both noun and verb relations. Our results on three datasets show that our system is superior when compared to state-of-the-art systems like REVERB and OLLIE for both tasks. For example, in some experiments our system achieves 33% improvement on nominal relation extraction over OLLIE. In addition we propose an unsupervised rule-based approach which can serve as a strong baseline for Open IE systems.",
"title": ""
},
{
"docid": "ed189b8fa606cc2d86706d199dd71a89",
"text": "This paper presents PATTY: a large resource for textual patterns that denote binary relations between entities. The patterns are semantically typed and organized into a subsumption taxonomy. The PATTY system is based on efficient algorithms for frequent itemset mining and can process Web-scale corpora. It harnesses the rich type system and entity population of large knowledge bases. The PATTY taxonomy comprises 350,569 pattern synsets. Random-sampling-based evaluation shows a pattern accuracy of 84.7%. PATTY has 8,162 subsumptions, with a random-sampling-based precision of 75%. The PATTY resource is freely available for interactive access and download.",
"title": ""
}
] | [
{
"docid": "ffdd14d8d74a996971284a8e5e950996",
"text": "Ten years on from a review in the twentieth issue of this journal, this contribution assess the direction research in the field of glucose sensing for diabetes is headed and various technologies to be seen in the future. The emphasis of this review was placed on the home blood glucose testing market. After an introduction to diabetes and glucose sensing, this review analyses state of the art and pipeline devices; in particular their user friendliness and technological advancement. This review complements conventional reviews based on scholarly published papers in journals.",
"title": ""
},
{
"docid": "39cacae62d16bc187f88884fe72ace59",
"text": "Microplastics are present throughout the marine environment and ingestion of these plastic particles (<1 mm) has been demonstrated in a laboratory setting for a wide array of marine organisms. Here, we investigate the presence of microplastics in two species of commercially grown bivalves: Mytilus edulis and Crassostrea gigas. Microplastics were recovered from the soft tissues of both species. At time of human consumption, M. edulis contains on average 0.36 ± 0.07 particles g(-1) (wet weight), while a plastic load of 0.47 ± 0.16 particles g(-1) ww was detected in C. gigas. As a result, the annual dietary exposure for European shellfish consumers can amount to 11,000 microplastics per year. The presence of marine microplastics in seafood could pose a threat to food safety, however, due to the complexity of estimating microplastic toxicity, estimations of the potential risks for human health posed by microplastics in food stuffs is not (yet) possible.",
"title": ""
},
{
"docid": "66b104459bdfc063cf7559c363c5802f",
"text": "We present a new local strategy to solve incremental learning tasks. Applied to Support Vector Machines based on local kernel, it allows to avoid re-learning of all the parameters by selecting a working subset where the incremental learning is performed. Automatic selection procedure is based on the estimation of generalization error by using theoretical bounds that involve the margin notion. Experimental simulation on three typical datasets of machine learning give promising results.",
"title": ""
},
{
"docid": "93d498adaee9070ffd608c5c1fe8e8c9",
"text": "INTRODUCTION\nFluorescence anisotropy (FA) is one of the major established methods accepted by industry and regulatory agencies for understanding the mechanisms of drug action and selecting drug candidates utilizing a high-throughput format.\n\n\nAREAS COVERED\nThis review covers the basics of FA and complementary methods, such as fluorescence lifetime anisotropy and their roles in the drug discovery process. The authors highlight the factors affecting FA readouts, fluorophore selection and instrumentation. Furthermore, the authors describe the recent development of a successful, commercially valuable FA assay for long QT syndrome drug toxicity to illustrate the role that FA can play in the early stages of drug discovery.\n\n\nEXPERT OPINION\nDespite the success in drug discovery, the FA-based technique experiences competitive pressure from other homogeneous assays. That being said, FA is an established yet rapidly developing technique, recognized by academic institutions, the pharmaceutical industry and regulatory agencies across the globe. The technical problems encountered in working with small molecules in homogeneous assays are largely solved, and new challenges come from more complex biological molecules and nanoparticles. With that, FA will remain one of the major work-horse techniques leading to precision (personalized) medicine.",
"title": ""
},
{
"docid": "6eab3ef8777363641b734ff4eacc90fe",
"text": "Big data, because it can mine new knowledge for economic growth and technical innovation, has recently received considerable attention, and many research efforts have been directed to big data processing due to its high volume, velocity, and variety (referred to as \"3V\") challenges. However, in addition to the 3V challenges, the flourishing of big data also hinges on fully understanding and managing newly arising security and privacy challenges. If data are not authentic, new mined knowledge will be unconvincing; while if privacy is not well addressed, people may be reluctant to share their data. Because security has been investigated as a new dimension, \"veracity,\" in big data, in this article, we aim to exploit new challenges of big data in terms of privacy, and devote our attention toward efficient and privacy-preserving computing in the big data era. Specifically, we first formalize the general architecture of big data analytics, identify the corresponding privacy requirements, and introduce an efficient and privacy-preserving cosine similarity computing protocol as an example in response to data mining's efficiency and privacy requirements in the big data era.",
"title": ""
},
{
"docid": "e6d309d24e7773d7fc78c3ebeb926ba0",
"text": "INTRODUCTION\nLiver disease is the third most common cause of premature mortality in the UK. Liver failure accelerates frailty, resulting in skeletal muscle atrophy, functional decline and an associated risk of liver transplant waiting list mortality. However, there is limited research investigating the impact of exercise on patient outcomes pre and post liver transplantation. The waitlist period for patients listed for liver transplantation provides a unique opportunity to provide and assess interventions such as prehabilitation.\n\n\nMETHODS AND ANALYSIS\nThis study is a phase I observational study evaluating the feasibility of conducting a randomised control trial (RCT) investigating the use of a home-based exercise programme (HBEP) in the management of patients awaiting liver transplantation. Twenty eligible patients will be randomly selected from the Queen Elizabeth University Hospital Birmingham liver transplant waiting list. Participants will be provided with an individually tailored 12-week HBEP, including step targets and resistance exercises. Activity trackers and patient diaries will be provided to support data collection. For the initial 6 weeks, telephone support will be given to discuss compliance with the study intervention, achievement of weekly targets, and to address any queries or concerns regarding the intervention. During weeks 6-12, participants will continue the intervention without telephone support to evaluate longer term adherence to the study intervention. On completing the intervention, all participants will be invited to engage in a focus group to discuss their experiences and the feasibility of an RCT.\n\n\nETHICS AND DISSEMINATION\nThe protocol is approved by the National Research Ethics Service Committee North West - Greater Manchester East and Health Research Authority (REC reference: 17/NW/0120). Recruitment into the study started in April 2017 and ended in July 2017. Follow-up of participants is ongoing and due to finish by the end of 2017. The findings of this study will be disseminated through peer-reviewed publications and international presentations. In addition, the protocol will be placed on the British Liver Trust website for public access.\n\n\nTRIAL REGISTRATION NUMBER\nNCT02949505; Pre-results.",
"title": ""
},
{
"docid": "0195e112c19f512b7de6a7f00e9f1099",
"text": "Medication-related osteonecrosis of the jaw (MRONJ) is a severe adverse drug reaction, consisting of progressive bone destruction in the maxillofacial region of patients. ONJ can be caused by two pharmacological agents: Antiresorptive (including bisphosphonates (BPs) and receptor activator of nuclear factor kappa-B ligand inhibitors) and antiangiogenic. MRONJ pathophysiology is not completely elucidated. There are several suggested hypothesis that could explain its unique localization to the jaws: Inflammation or infection, microtrauma, altered bone remodeling or over suppression of bone resorption, angiogenesis inhibition, soft tissue BPs toxicity, peculiar biofilm of the oral cavity, terminal vascularization of the mandible, suppression of immunity, or Vitamin D deficiency. Dental screening and adequate treatment are fundamental to reduce the risk of osteonecrosis in patients under antiresorptive or antiangiogenic therapy, or before initiating the administration. The treatment of MRONJ is generally difficult and the optimal therapy strategy is still to be established. For this reason, prevention is even more important. It is suggested that a multidisciplinary team approach including a dentist, an oncologist, and a maxillofacial surgeon to evaluate and decide the best therapy for the patient. The choice between a conservative treatment and surgery is not easy, and it should be made on a case by case basis. However, the initial approach should be as conservative as possible. The most important goals of treatment for patients with established MRONJ are primarily the control of infection, bone necrosis progression, and pain. The aim of this paper is to represent the current knowledge about MRONJ, its preventive measures and management strategies.",
"title": ""
},
{
"docid": "4d5bf5e40ca09c6acd5d86e1147ab1d6",
"text": "In the next few decades, the proportion of Americans age 65 or older is expected to increase from 12% (36 million) to 20% (80 million) of the total US population [1]. As life expectancy increases, an even greater need arises for cost-effective interventions to improve function and quality of life among older adults [2-4]. All older adults face numerous health problems that can reduce or limit both the quality and quantity of life they will experience. Some of the main problems faced by older adults include reduced physical function and well-being, challenges with mental and emotional functioning and well-being, and more limited social functioning. Not surprisingly, these factors comprise the primary components of comprehensive health-related quality of life [5,6].",
"title": ""
},
{
"docid": "d17622889db09b8484d94392cadf1d78",
"text": "Software development has always inherently required multitasking: developers switch between coding, reviewing, testing, designing, and meeting with colleagues. The advent of software ecosystems like GitHub has enabled something new: the ability to easily switch between projects. Developers also have social incentives to contribute to many projects; prolific contributors gain social recognition and (eventually) economic rewards. Multitasking, however, comes at a cognitive cost: frequent context-switches can lead to distraction, sub-standard work, and even greater stress. In this paper, we gather ecosystem-level data on a group of programmers working on a large collection of projects. We develop models and methods for measuring the rate and breadth of a developers' context-switching behavior, and we study how context-switching affects their productivity. We also survey developers to understand the reasons for and perceptions of multitasking. We find that the most common reason for multitasking is interrelationships and dependencies between projects. Notably, we find that the rate of switching and breadth (number of projects) of a developer's work matter. Developers who work on many projects have higher productivity if they focus on few projects per day. Developers that switch projects too much during the course of a day have lower productivity as they work on more projects overall. Despite these findings, developers perceptions of the benefits of multitasking are varied.",
"title": ""
},
{
"docid": "546f0d09b23ed639ca78882746331cff",
"text": "This paper deals with the use of Petri nets in modelling railway network and designing appropriate control logic for it to avoid collision. Here, the whole railway network is presented as a combination of the elementary models – tracks, stations and points (switch) within the station including sensors and semaphores. We use generalized mutual exclusion constraints and constraints containing the firing vector to ensure safeness of the railway network. In this research work, we have actually introduced constraints at the points within the station. These constraints ensure that when a track is occupied, we control the switch so that another train will not enter into the same track and thus avoid collision.",
"title": ""
},
{
"docid": "45d72f6c70c034122c86301be9531e97",
"text": "Multiple Classifier Systems (MCS) have been widely studied as an alternative for increasing accuracy in pattern recognition. One of the most promising MCS approaches is Dynamic Selection (DS), in which the base classifiers are selected on the fly, according to each new sample to be classified. This paper provides a review of the DS techniques proposed in the literature from a theoretical and empirical point of view. We propose an updated taxonomy based on the main characteristics found in a dynamic selection system: (1) The methodology used to define a local region for the estimation of the local competence of the base classifiers; (2) The source of information used to estimate the level of competence of the base classifiers, such as local accuracy, oracle, ranking and probabilistic models, and (3) The selection approach, which determines whether a single or an ensemble of classifiers is selected. We categorize the main dynamic selection techniques in the DS literature based on the proposed taxonomy. We also conduct an extensive experimental analysis, considering a total of 18 state-of-the-art dynamic selection techniques, as well as static ensemble combination and single classification models. To date, this is the first analysis comparing all the key DS techniques under the same experimental protocol. Furthermore, we also present several perspectives and open research questions that can be used as a guide for future works in this domain. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f1369ac01e63236d9b6e20bcac25b8b1",
"text": "Traumatic dislocation of the testis is a rare event in which the full extent of the dislocation is present immediately following the initial trauma. We present a case in which the testicular dislocation progressed over a period of four days following the initial scrotal trauma.",
"title": ""
},
{
"docid": "0815549f210c57b28a7e2fc87c20f616",
"text": "Portable automatic seizure detection system is very convenient for epilepsy patients to carry. In order to make the system on-chip trainable with high efficiency and attain high detection accuracy, this paper presents a very large scale integration (VLSI) design based on the nonlinear support vector machine (SVM). The proposed design mainly consists of a feature extraction (FE) module and an SVM module. The FE module performs the three-level Daubechies discrete wavelet transform to fit the physiological bands of the electroencephalogram (EEG) signal and extracts the time–frequency domain features reflecting the nonstationary signal properties. The SVM module integrates the modified sequential minimal optimization algorithm with the table-driven-based Gaussian kernel to enable efficient on-chip learning. The presented design is verified on an Altera Cyclone II field-programmable gate array and tested using the two publicly available EEG datasets. Experiment results show that the designed VLSI system improves the detection accuracy and training efficiency.",
"title": ""
},
{
"docid": "30842064b771dd6b47e514574257928f",
"text": "To be successful in financial market trading it is necessary to correctly predict future market trends. Most professional traders use technical analysis to forecast future market prices. In this paper, we present a new hybrid intelligent method to forecast financial time series, especially for the Foreign Exchange Market (FX). To emulate the way real traders make predictions, this method uses both historical market data and chart patterns to forecast market trends. First, wavelet full decomposition of time series analysis was used as an Adaptive Network-based Fuzzy Inference System (ANFIS) input data for forecasting future market prices. Also, Quantum-behaved Particle Swarm Optimization (QPSO) for tuning the ANFIS membership functions has been used. The second part of this paper proposes a novel hybrid Dynamic Time Warping (DTW)-Wavelet Transform (WT) method for automatic pattern extraction. The results indicate that the presented hybrid method is a very useful and effective one for financial price forecasting and financial pattern extraction. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "04b7ad51d2464052ebd3d32baeb5b57b",
"text": "Rob Antrobus Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk [email protected] Sylvain Frey Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk [email protected] Benjamin Green Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk [email protected]",
"title": ""
},
{
"docid": "6dc9ebf5dea1c78e1688a560f241f804",
"text": "This paper reports finding from a study carried out in a remote rural area of Bangladesh during December 2000. Nineteen key informants were interviewed for collecting data on domestic violence against women. Each key informant provided information about 10 closest neighbouring ever-married women covering a total of 190 women. The questionnaire included information about frequency of physical violence, verbal abuse, and other relevant information, including background characteristics of the women and their husbands. 50.5% of the women were reported to be battered by their husbands and 2.1% by other family members. Beating by the husband was negatively related with age of husband: the odds of beating among women with husbands aged less than 30 years were six times of those with husbands aged 50 years or more. Members of micro-credit societies also had higher odds of being beaten than non-members. The paper discusses the possibility of community-centred interventions by raising awareness about the violation of human rights issues and other legal and psychological consequences to prevent domestic violence against women.",
"title": ""
},
{
"docid": "4396d53b9cfeb4997b4e7c7293d67586",
"text": "Title Type cities and complexity understanding cities with cellular automata agent-based models and fractals PDF the complexity of cooperation agent-based models of competition and collaboration PDF party competition an agent-based model princeton studies in complexity PDF sharing cities a case for truly smart and sustainable cities urban and industrial environments PDF global metropolitan globalizing cities in a capitalist world questioning cities PDF state of the worlds cities 201011 cities for all bridging the urban divide PDF new testament cities in western asia minor light from archaeology on cities of paul and the seven churches of revelation PDF",
"title": ""
},
{
"docid": "f83f099437475aebb81fe92be355f331",
"text": "The main receptors for amyloid-beta peptide (Abeta) transport across the blood-brain barrier (BBB) from brain to blood and blood to brain are low-density lipoprotein receptor related protein-1 (LRP1) and receptor for advanced glycation end products (RAGE), respectively. In normal human plasma a soluble form of LRP1 (sLRP1) is a major endogenous brain Abeta 'sinker' that sequesters some 70 to 90 % of plasma Abeta peptides. In Alzheimer's disease (AD), the levels of sLRP1 and its capacity to bind Abeta are reduced which increases free Abeta fraction in plasma. This in turn may increase brain Abeta burden through decreased Abeta efflux and/or increased Abeta influx across the BBB. In Abeta immunotherapy, anti-Abeta antibody sequestration of plasma Abeta enhances the peripheral Abeta 'sink action'. However, in contrast to endogenous sLRP1 which does not penetrate the BBB, some anti-Abeta antibodies may slowly enter the brain which reduces the effectiveness of their sink action and may contribute to neuroinflammation and intracerebral hemorrhage. Anti-Abeta antibody/Abeta immune complexes are rapidly cleared from brain to blood via FcRn (neonatal Fc receptor) across the BBB. In a mouse model of AD, restoring plasma sLRP1 with recombinant LRP-IV cluster reduces brain Abeta burden and improves functional changes in cerebral blood flow (CBF) and behavioral responses, without causing neuroinflammation and/or hemorrhage. The C-terminal sequence of Abeta is required for its direct interaction with sLRP and LRP-IV cluster which is completely blocked by the receptor-associated protein (RAP) that does not directly bind Abeta. Therapies to increase LRP1 expression or reduce RAGE activity at the BBB and/or restore the peripheral Abeta 'sink' action, hold potential to reduce brain Abeta and inflammation, and improve CBF and functional recovery in AD models, and by extension in AD patients.",
"title": ""
},
{
"docid": "5898f4adaf86393972bcbf4c4ab91540",
"text": "This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from ADAS (Advanced Driver Assistant Systems) in simulated conditions. The paper is focused on real-time drowsiness detection technology rather than on long-term sleep/awake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators able to be used in simulators and future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators, proposed in the literature, is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of indicators, derived from trials over a third generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and the best combinations of them are included, as well as the future works derived from this study.",
"title": ""
}
] | scidocsrr |
945a19b9004f75cd7a1f0e7743ed3221 | Learning Sense-specific Word Embeddings By Exploiting Bilingual Resources | [
{
"docid": "a1a1ba8a6b7515f676ba737434c6d86a",
"text": "Semantic hierarchy construction aims to build structures of concepts linked by hypernym–hyponym (“is-a”) relations. A major challenge for this task is the automatic discovery of such relations. This paper proposes a novel and effective method for the construction of semantic hierarchies based on word embeddings, which can be used to measure the semantic relationship between words. We identify whether a candidate word pair has hypernym–hyponym relation by using the word-embedding-based semantic projections between words and their hypernyms. Our result, an F-score of 73.74%, outperforms the state-of-theart methods on a manually labeled test dataset. Moreover, combining our method with a previous manually-built hierarchy extension method can further improve Fscore to 80.29%.",
"title": ""
}
] | [
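To make the projection idea in the passage above concrete, here is a minimal, hedged sketch: a linear map is fit by least squares so that a hyponym embedding lands near its hypernym embedding, and candidate pairs are scored by the residual. The random embeddings, the dimensionality, and the single global projection (the paper may use several) are assumptions for illustration only.

```python
# Minimal sketch of the projection idea: learn a linear map that sends a
# hyponym embedding close to its hypernym embedding, then score candidate
# pairs by the residual distance. Embeddings here are random stand-ins.
import numpy as np

d = 50
rng = np.random.default_rng(0)
hypo = rng.standard_normal((200, d))    # hyponym embeddings (training pairs)
hyper = rng.standard_normal((200, d))   # corresponding hypernym embeddings

# Least-squares fit of a d x d projection matrix Phi: hypo @ Phi ~= hyper.
Phi, *_ = np.linalg.lstsq(hypo, hyper, rcond=None)

def is_a_score(x, y, Phi):
    """Smaller means (x, y) looks more like a hyponym-hypernym pair."""
    return np.linalg.norm(x @ Phi - y)

print(is_a_score(hypo[0], hyper[0], Phi))
```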
{
"docid": "c6e14529a55b0e6da44dd0966896421a",
"text": "Context-based pairing solutions increase the usability of IoT device pairing by eliminating any human involvement in the pairing process. This is possible by utilizing on-board sensors (with same sensing modalities) to capture a common physical context (e.g., ambient sound via each device's microphone). However, in a smart home scenario, it is impractical to assume that all devices will share a common sensing modality. For example, a motion detector is only equipped with an infrared sensor while Amazon Echo only has microphones. In this paper, we develop a new context-based pairing mechanism called Perceptio that uses time as the common factor across differing sensor types. By focusing on the event timing, rather than the specific event sensor data, Perceptio creates event fingerprints that can be matched across a variety of IoT devices. We propose Perceptio based on the idea that devices co-located within a physically secure boundary (e.g., single family house) can observe more events in common over time, as opposed to devices outside. Devices make use of the observed contextual information to provide entropy for Perceptio's pairing protocol. We design and implement Perceptio, and evaluate its effectiveness as an autonomous secure pairing solution. Our implementation demonstrates the ability to sufficiently distinguish between legitimate devices (placed within the boundary) and attacker devices (placed outside) by imposing a threshold on fingerprint similarity. Perceptio demonstrates an average fingerprint similarity of 94.9% between legitimate devices while even a hypothetical impossibly well-performing attacker yields only 68.9% between itself and a valid device.",
"title": ""
},
{
"docid": "68bb5cb195c910e0a52c81a42a9e141c",
"text": "With advances in brain-computer interface (BCI) research, a portable few- or single-channel BCI system has become necessary. Most recent BCI studies have demonstrated that the common spatial pattern (CSP) algorithm is a powerful tool in extracting features for multiple-class motor imagery. However, since the CSP algorithm requires multi-channel information, it is not suitable for a few- or single-channel system. In this study, we applied a short-time Fourier transform to decompose a single-channel electroencephalography signal into the time-frequency domain and construct multi-channel information. Using the reconstructed data, the CSP was combined with a support vector machine to obtain high classification accuracies from channels of both the sensorimotor and forehead areas. These results suggest that motor imagery can be detected with a single channel not only from the traditional sensorimotor area but also from the forehead area.",
"title": ""
},
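As a rough illustration of the idea in the single-channel BCI passage above (an STFT turns one EEG channel into per-band time series that behave like extra channels), the sketch below builds band-power "pseudo-channels" with SciPy and feeds them to an SVM. The sampling rate, the bands, the window length and the omission of the CSP step are assumptions; a CSP stage could be added, for example via mne.decoding.CSP.

```python
# Sketch of building "multi-channel" information from one EEG channel: an STFT
# yields per-frequency-band time series whose log-power can feed CSP or,
# as here, an SVM directly.
import numpy as np
from scipy.signal import stft
from sklearn.svm import SVC

fs = 250.0
rng = np.random.default_rng(0)
trials = rng.standard_normal((30, 1000))   # 30 single-channel trials
labels = np.repeat([0, 1], 15)             # two motor-imagery classes

def pseudo_channels(x, fs):
    f, t, Z = stft(x, fs=fs, nperseg=128)
    power = np.abs(Z) ** 2                        # (n_freqs, n_times)
    bands = [(4, 8), (8, 13), (13, 30)]           # theta/alpha/beta (assumed)
    return np.array([power[(f >= lo) & (f < hi)].mean() for lo, hi in bands])

X = np.vstack([np.log(pseudo_channels(x, fs) + 1e-12) for x in trials])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```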
{
"docid": "ce94ff17f677b6c2c6c81295fa53b8df",
"text": "The Information Artifact Ontology (IAO) was created to serve as a domain‐neutral resource for the representation of types of information content entities (ICEs) such as documents, data‐bases, and digital im‐ ages. We identify a series of problems with the current version of the IAO and suggest solutions designed to advance our understanding of the relations between ICEs and associated cognitive representations in the minds of human subjects. This requires embedding IAO in a larger framework of ontologies, including most importantly the Mental Func‐ tioning Ontology (MFO). It also requires a careful treatment of the aboutness relations between ICEs and associated cognitive representa‐ tions and their targets in reality.",
"title": ""
},
{
"docid": "7664e1bb09bf8547bbc7333a41404f2f",
"text": "A Nyquist ADC with time-based pipelined architecture is proposed. The proposed hybrid pipeline stage, incorporating time-domain amplification based on a charge pump, enables power efficient analog to digital conversion. The proposed ADC also adopts a minimalist switched amplifier with 24dB open-loop dc gain in the first stage MDAC that is based on a new V-T operation, instead of a conventional high gain amplifier. The measured results of the prototype ADC implemented in a 0.13μm CMOS demonstrate peak SNDR of 69.3dB at 6.38mW power, with a near rail-to-rail 1MHz input of 2.4VP-P at 70MHz sampling frequency and 1.3V supply. This results in 38.2fJ/conversion-step FOM.",
"title": ""
},
{
"docid": "e485aca373cf4543e1a8eeadfa0e6772",
"text": "Identifying peer-review helpfulness is an important task for improving the quality of feedback that students receive from their peers. As a first step towards enhancing existing peerreview systems with new functionality based on helpfulness detection, we examine whether standard product review analysis techniques also apply to our new context of peer reviews. In addition, we investigate the utility of incorporating additional specialized features tailored to peer review. Our preliminary results show that the structural features, review unigrams and meta-data combined are useful in modeling the helpfulness of both peer reviews and product reviews, while peer-review specific auxiliary features can further improve helpfulness prediction.",
"title": ""
},
{
"docid": "9bc681a751d8fe9e2c93204ea06786b8",
"text": "In this paper, a complimentary split ring resonator (CSRR) enhanced wideband log-periodic antenna with coupled microstrip line feeding is presented. Here in this work, coupled line feeding to the patches is proposed to avoid individual microstrip feed matching complexities. Three CSRR elements were etched in the ground plane. Individual patches were designed according to the conventional log-periodic design rules. FR4 dielectric substrate is used to design a five-element log-periodic patch with CSRR printed on the ground plane. The result shows a wide operating band ranging from 4.5 GHz to 9 GHz. Surface current distribution of the antenna shows a strong resonance of CSRR's placed in the ground plane. The design approach of the antenna is reported and performance of the proposed antenna has been evaluated through three dimensional electromagnetic simulation validating performance enhancement of the antenna due to presence of CSRRs. Antennas designed in this work may be used in satellite and indoor wireless communication.",
"title": ""
},
{
"docid": "36f2be7a14eeb10ad975aa00cfd30f36",
"text": "Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a K-way tensor of length n and Tucker rank r from Gaussian measurements requires Ω(rnK−1) observations. In contrast, a certain (intractable) nonconvex formulation needs only O(r +nrK) observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with O(rbK/2cndK/2e) observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bound for the sum-of-nuclear-norms model follows from a new result on recovering signals with multiple sparse structures (e.g. sparse, low rank), which perhaps surprisingly demonstrates the significant suboptimality of the commonly used recovery approach via minimizing the sum of individual sparsity inducing norms (e.g. l1, nuclear norm). Our new formulation for low-rank tensor recovery however opens the possibility in reducing the sample complexity by exploiting several structures jointly.",
"title": ""
},
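The two convex surrogates contrasted in the passage above can be computed directly; the minimal NumPy sketch below evaluates the sum of nuclear norms of the mode unfoldings and the nuclear norm of a square reshaping for a small random tensor. The tensor size and the even split of modes are illustrative assumptions; it shows only how the norms are formed, not the full recovery program.

```python
# Sketch of the two convex surrogates for a small K = 4 tensor: the sum of
# nuclear norms of the mode unfoldings, and the nuclear norm of a "square"
# reshaping that groups half the modes on each side.
import numpy as np

def nuclear_norm(M):
    return np.linalg.svd(M, compute_uv=False).sum()

def mode_unfold(T, k):
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

rng = np.random.default_rng(0)
n, K = 6, 4
T = rng.standard_normal((n,) * K)

snn = sum(nuclear_norm(mode_unfold(T, k)) for k in range(K))   # sum of nuclear norms
square = nuclear_norm(T.reshape(n ** (K // 2), -1))            # square reshaping
print(snn, square)
```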
{
"docid": "eef87d8905b621d2d0bb2b66108a56c1",
"text": "We study deep learning approaches to inferring numerical coordinates for points of interest in an input image. Existing convolutional neural network-based solutions to this problem either take a heatmap matching approach or regress to coordinates with a fully connected output layer. Neither of these approaches is ideal, since the former is not entirely differentiable, and the latter lacks inherent spatial generalization. We propose our differentiable spatial to numerical transform (DSNT) to fill this gap. The DSNT layer adds no trainable parameters, is fully differentiable, and exhibits good spatial generalization. Unlike heatmap matching, DSNT works well with low heatmap resolutions, so it can be dropped in as an output layer for a wide range of existing fully convolutional architectures. Consequently, DSNT offers a better trade-off between inference speed and prediction accuracy compared to existing techniques. When used to replace the popular heatmap matching approach used in almost all state-of-the-art methods for pose estimation, DSNT gives better prediction accuracy for all model architectures tested.",
"title": ""
},
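A minimal NumPy sketch of the transform described above: softmax-normalize a heatmap into a probability mass and take the expected coordinate, which is the differentiable step that replaces an argmax. The heatmap size and the [-1, 1] coordinate convention are assumptions; in a real network this would be written with framework tensors so gradients propagate through it.

```python
# Numpy sketch of the core DSNT computation: softmax over the heatmap, then
# the expected coordinate under that distribution.
import numpy as np

def dsnt(heatmap):
    h, w = heatmap.shape
    p = np.exp(heatmap - heatmap.max())
    p /= p.sum()                                   # softmax over all pixels
    xs = np.linspace(-1.0, 1.0, w)                 # normalized coordinates
    ys = np.linspace(-1.0, 1.0, h)
    x = (p.sum(axis=0) * xs).sum()                 # E[x]
    y = (p.sum(axis=1) * ys).sum()                 # E[y]
    return x, y

hm = np.zeros((32, 32))
hm[20, 5] = 10.0                                   # a peak near the left edge
print(dsnt(hm))
```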
{
"docid": "f0ef9541461cd9d9e42ea355ea31ac41",
"text": "We introduce and create a framework for deriving probabilistic models of Information Retrieval. The models are nonparametric models of IR obtained in the language model approach. We derive term-weighting models by measuring the divergence of the actual term distribution from that obtained under a random process. Among the random processes we study the binomial distribution and Bose--Einstein statistics. We define two types of term frequency normalization for tuning term weights in the document--query matching process. The first normalization assumes that documents have the same length and measures the information gain with the observed term once it has been accepted as a good descriptor of the observed document. The second normalization is related to the document length and to other statistics. These two normalization methods are applied to the basic models in succession to obtain weighting formulae. Results show that our framework produces different nonparametric models forming baseline alternatives to the standard tf-idf model.",
"title": ""
},
{
"docid": "ed929cce16774307d93719f50415e138",
"text": "BACKGROUND\nMore than one in five patients who undergo treatment for breast cancer will develop breast cancer-related lymphedema (BCRL). BCRL can occur as a result of breast cancer surgery and/or radiation therapy. BCRL can negatively impact comfort, function, and quality of life (QoL). Manual lymphatic drainage (MLD), a type of hands-on therapy, is frequently used for BCRL and often as part of complex decongestive therapy (CDT). CDT is a fourfold conservative treatment which includes MLD, compression therapy (consisting of compression bandages, compression sleeves, or other types of compression garments), skin care, and lymph-reducing exercises (LREs). Phase 1 of CDT is to reduce swelling; Phase 2 is to maintain the reduced swelling.\n\n\nOBJECTIVES\nTo assess the efficacy and safety of MLD in treating BCRL.\n\n\nSEARCH METHODS\nWe searched Medline, EMBASE, CENTRAL, WHO ICTRP (World Health Organization's International Clinical Trial Registry Platform), and Cochrane Breast Cancer Group's Specialised Register from root to 24 May 2013. No language restrictions were applied.\n\n\nSELECTION CRITERIA\nWe included randomized controlled trials (RCTs) or quasi-RCTs of women with BCRL. The intervention was MLD. The primary outcomes were (1) volumetric changes, (2) adverse events. Secondary outcomes were (1) function, (2) subjective sensations, (3) QoL, (4) cost of care.\n\n\nDATA COLLECTION AND ANALYSIS\nWe collected data on three volumetric outcomes. (1) LE (lymphedema) volume was defined as the amount of excess fluid left in the arm after treatment, calculated as volume in mL of affected arm post-treatment minus unaffected arm post-treatment. (2) Volume reduction was defined as the amount of fluid reduction in mL from before to after treatment calculated as the pretreatment LE volume of the affected arm minus the post-treatment LE volume of the affected arm. (3) Per cent reduction was defined as the proportion of fluid reduced relative to the baseline excess volume, calculated as volume reduction divided by baseline LE volume multiplied by 100. We entered trial data into Review Manger 5.2 (RevMan), pooled data using a fixed-effect model, and analyzed continuous data as mean differences (MDs) with 95% confidence intervals (CIs). We also explored subgroups to determine whether mild BCRL compared to moderate or severe BCRL, and BCRL less than a year compared to more than a year was associated with a better response to MLD.\n\n\nMAIN RESULTS\nSix trials were included. Based on similar designs, trials clustered in three categories.(1) MLD + standard physiotherapy versus standard physiotherapy (one trial) showed significant improvements in both groups from baseline but no significant between-groups differences for per cent reduction.(2) MLD + compression bandaging versus compression bandaging (two trials) showed significant per cent reductions of 30% to 38.6% for compression bandaging alone, and an additional 7.11% reduction for MLD (MD 7.11%, 95% CI 1.75% to 12.47%; two RCTs; 83 participants). Volume reduction was borderline significant (P = 0.06). LE volume was not significant. Subgroup analyses was significant showing that participants with mild-to-moderate BCRL were better responders to MLD than were moderate-to-severe participants.(3) MLD + compression therapy versus nonMLD treatment + compression therapy (three trials) were too varied to pool. One of the trials compared compression sleeve plus MLD to compression sleeve plus pneumatic pump. 
Volume reduction was statistically significant favoring MLD (MD 47.00 mL, 95% CI 15.25 mL to 78.75 mL; 1 RCT; 24 participants), per cent reduction was borderline significant (P=0.07), and LE volume was not significant. A second trial compared compression sleeve plus MLD to compression sleeve plus self-administered simple lymphatic drainage (SLD), and was significant for MLD for LE volume (MD -230.00 mL, 95% CI -450.84 mL to -9.16 mL; 1 RCT; 31 participants) but not for volume reduction or per cent reduction. A third trial of MLD + compression bandaging versus SLD + compression bandaging was not significant (P = 0.10) for per cent reduction, the only outcome measured (MD 11.80%, 95% CI -2.47% to 26.07%, 28 participants).MLD was well tolerated and safe in all trials.Two trials measured function as range of motion with conflicting results. One trial reported significant within-groups gains for both groups, but no between-groups differences. The other trial reported there were no significant within-groups gains and did not report between-groups results. One trial measured strength and reported no significant changes in either group.Two trials measured QoL, but results were not usable because one trial did not report any results, and the other trial did not report between-groups results.Four trials measured sensations such as pain and heaviness. Overall, the sensations were significantly reduced in both groups over baseline, but with no between-groups differences. No trials reported cost of care.Trials were small ranging from 24 to 45 participants. Most trials appeared to randomize participants adequately. However, in four trials the person measuring the swelling knew what treatment the participants were receiving, and this could have biased results.\n\n\nAUTHORS' CONCLUSIONS\nMLD is safe and may offer additional benefit to compression bandaging for swelling reduction. Compared to individuals with moderate-to-severe BCRL, those with mild-to-moderate BCRL may be the ones who benefit from adding MLD to an intensive course of treatment with compression bandaging. This finding, however, needs to be confirmed by randomized data.In trials where MLD and sleeve were compared with a nonMLD treatment and sleeve, volumetric outcomes were inconsistent within the same trial. Research is needed to identify the most clinically meaningful volumetric measurement, to incorporate newer technologies in LE assessment, and to assess other clinically relevant outcomes such as fibrotic tissue formation.Findings were contradictory for function (range of motion), and inconclusive for quality of life.For symptoms such as pain and heaviness, 60% to 80% of participants reported feeling better regardless of which treatment they received.One-year follow-up suggests that once swelling had been reduced, participants were likely to keep their swelling down if they continued to use a custom-made sleeve.",
"title": ""
},
{
"docid": "63a548ee4f8857823e4bcc7ccbc31d36",
"text": "The growing amounts of textual data require automatic methods for structuring relevant information so that it can be further processed by computers and systematically accessed by humans. The scenario dealt with in this dissertation is known as Knowledge Base Population (KBP), where relational information about entities is retrieved from a large text collection and stored in a database, structured according to a prespecified schema. Most of the research in this dissertation is placed in the context of the KBP benchmark of the Text Analysis Conference (TAC KBP), which provides a test-bed to examine all steps in a complex end-to-end relation extraction setting. In this dissertation a new state of the art for the TAC KBP benchmark was achieved by focussing on the following research problems: (1) The KBP task was broken down into a modular pipeline of sub-problems, and the most pressing issues were identified and quantified at all steps. (2) The quality of semi-automatically generated training data was increased by developing noise-reduction methods, decreasing the influence of false-positive training examples. (3) A focus was laid on fine-grained entity type modelling, entity expansion, entity matching and tagging, to maintain as much recall as possible on the relational argument level. (4) A new set of effective methods for generating training data, encoding features and training relational classifiers was developed and compared with previous state-of-the-art methods.",
"title": ""
},
{
"docid": "8406ce55a8de0995d07896761ac76051",
"text": "The genesis of the internet and web has created huge information on the web, including users’ digital or textual opinions and reviews. This leads to compiling many features in document-level. Consequently, we will have a high-dimensional feature space. In this paper, we propose an algorithm based on standard deviation method to solve the high-dimensional feature space. The algorithm constructs feature subsets based on dispersion of features. In other words, algorithm selects the features with higher value of standard deviation for construction of the subsets. To do this, the paper presents an experiment of performance estimation on sentiment analysis dataset using ensemble of classifiers when dimensionality reduction is performed on the input space using three different methods. Also different types of base classifiers and classifier combination rules were used.",
"title": ""
},
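The dispersion-based selection described above reduces to a few lines of array code; the sketch below keeps the k features with the largest standard deviation and trains a simple ensemble on the reduced space. The synthetic feature matrix, the value of k and the choice of a random forest as the ensemble are assumptions made only to keep the example self-contained.

```python
# Keep the k most dispersed (highest standard deviation) document-level
# features, then train an ensemble classifier on the reduced space.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 500))     # documents x features (e.g. term weights)
y = np.repeat([0, 1], 50)               # sentiment labels

k = 50
top = np.argsort(X.std(axis=0))[::-1][:k]   # indices of most dispersed features
X_sel = X[:, top]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_sel, y)
print(clf.score(X_sel, y))
```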
{
"docid": "4b9d994288fc555c89554cc2c7e41712",
"text": "The authors have been developing humanoid robots in order to develop new mechanisms and functions for a humanoid robot that has the ability to communicate naturally with a human by expressing human-like emotion. In 2004, we developed the emotion expression humanoid robot WE-4RII (Waseda Eye No.4 Refined II) by integrating the new humanoid robot hands RCH-I (RoboCasa Hand No.1) into the emotion expression humanoid robot WE-4R. We confirmed that WE-4RII can effectively express its emotion.",
"title": ""
},
{
"docid": "35c8c5f950123154f4445b6c6b2399c2",
"text": "Online social media have democratized the broadcasting of information, encouraging users to view the world through the lens of social networks. The exploitation of this lens, termed social sensing, presents challenges for researchers at the intersection of computer science and the social sciences.",
"title": ""
},
{
"docid": "51a67685249e0108c337d53b5b1c7c92",
"text": "CONTEXT\nEvidence suggests that early adverse experiences play a preeminent role in development of mood and anxiety disorders and that corticotropin-releasing factor (CRF) systems may mediate this association.\n\n\nOBJECTIVE\nTo determine whether early-life stress results in a persistent sensitization of the hypothalamic-pituitary-adrenal axis to mild stress in adulthood, thereby contributing to vulnerability to psychopathological conditions.\n\n\nDESIGN AND SETTING\nProspective controlled study conducted from May 1997 to July 1999 at the General Clinical Research Center of Emory University Hospital, Atlanta, Ga.\n\n\nPARTICIPANTS\nForty-nine healthy women aged 18 to 45 years with regular menses, with no history of mania or psychosis, with no active substance abuse or eating disorder within 6 months, and who were free of hormonal and psychotropic medications were recruited into 4 study groups (n = 12 with no history of childhood abuse or psychiatric disorder [controls]; n = 13 with diagnosis of current major depression who were sexually or physically abused as children; n = 14 without current major depression who were sexually or physically abused as children; and n = 10 with diagnosis of current major depression and no history of childhood abuse).\n\n\nMAIN OUTCOME MEASURES\nAdrenocorticotropic hormone (ACTH) and cortisol levels and heart rate responses to a standardized psychosocial laboratory stressor compared among the 4 study groups.\n\n\nRESULTS\nWomen with a history of childhood abuse exhibited increased pituitary-adrenal and autonomic responses to stress compared with controls. This effect was particularly robust in women with current symptoms of depression and anxiety. Women with a history of childhood abuse and a current major depression diagnosis exhibited a more than 6-fold greater ACTH response to stress than age-matched controls (net peak of 9.0 pmol/L [41.0 pg/mL]; 95% confidence interval [CI], 4.7-13.3 pmol/L [21.6-60. 4 pg/mL]; vs net peak of 1.4 pmol/L [6.19 pg/mL]; 95% CI, 0.2-2.5 pmol/L [1.0-11.4 pg/mL]; difference, 8.6 pmol/L [38.9 pg/mL]; 95% CI, 4.6-12.6 pmol/L [20.8-57.1 pg/mL]; P<.001).\n\n\nCONCLUSIONS\nOur findings suggest that hypothalamic-pituitary-adrenal axis and autonomic nervous system hyperreactivity, presumably due to CRF hypersecretion, is a persistent consequence of childhood abuse that may contribute to the diathesis for adulthood psychopathological conditions. Furthermore, these results imply a role for CRF receptor antagonists in the prevention and treatment of psychopathological conditions related to early-life stress. JAMA. 2000;284:592-597",
"title": ""
},
{
"docid": "3cdab5427efd08edc4f73266b7ed9176",
"text": "Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.",
"title": ""
},
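As a toy illustration of the coupling idea in the real NVP passage above, the NumPy sketch below implements a single affine coupling layer: half of the variables pass through unchanged and parameterize a scale and shift of the other half, so the transform is exactly invertible and its log-determinant is just the sum of the scales. The tiny random matrices standing in for the scale/shift networks and the 4-dimensional input are assumptions for illustration.

```python
# Toy affine coupling layer: stably invertible with a cheap log-determinant.
import numpy as np

rng = np.random.default_rng(0)
W_s, W_t = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))  # toy "networks"

def forward(x):
    x1, x2 = x[:2], x[2:]
    s, t = np.tanh(x1 @ W_s), x1 @ W_t
    y2 = x2 * np.exp(s) + t
    return np.concatenate([x1, y2]), s.sum()      # log|det J| = sum(s)

def inverse(y):
    y1, y2 = y[:2], y[2:]
    s, t = np.tanh(y1 @ W_s), y1 @ W_t
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

x = rng.standard_normal(4)
y, logdet = forward(x)
print(np.allclose(inverse(y), x), logdet)
```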
{
"docid": "d4b98b3872a94da9c8f7f93ff4f09cf5",
"text": "Hadapt is a start-up company currently commercializing the Yale University research project called HadoopDB. The company focuses on building a platform for Big Data analytics in the cloud by introducing a storage layer optimized for structured data and by providing a framework for executing SQL queries efficiently. This work considers processing data warehousing queries over very large datasets. Our goal is to maximize perfor mance while, at the same time, not giving up fault tolerance and scalability. We analyze the complexity of this problem in the split execution environment of HadoopDB. Here, incoming queries are examined; parts of the query are pushed down and executed inside the higher performing database layer; and the rest of the query is processed in a more generic MapReduce framework.\n In this paper, we discuss in detail performance-oriented query execution strategies for data warehouse queries in split execution environments, with particular focus on join and aggregation operations. The efficiency of our techniques is demonstrated by running experiments using the TPC-H benchmark with 3TB of data. In these experiments we compare our results with a standard commercial parallel database and an open-source MapReduce implementation featuring a SQL interface (Hive). We show that HadoopDB successfully competes with other systems.",
"title": ""
},
{
"docid": "ac2d144c5c06fcfb2d0530b115f613dc",
"text": "In medical imaging, Computer Aided Diagnosis (CAD) is a rapidly growing dynamic area of research. In recent years, significant attempts are made for the enhancement of computer aided diagnosis applications because errors in medical diagnostic systems can result in seriously misleading medical treatments. Machine learning is important in Computer Aided Diagnosis. After using an easy equation, objects such as organs may not be indicated accurately. So, pattern recognition fundamentally involves learning from examples. In the field of bio-medical, pattern recognition and machine learning promise the improved accuracy of perception and diagnosis of disease. They also promote the objectivity of decision-making process. For the analysis of high-dimensional and multimodal bio-medical data, machine learning offers a worthy approach for making classy and automatic algorithms. This survey paper provides the comparative analysis of different machine learning algorithms for diagnosis of different diseases such as heart disease, diabetes disease, liver disease, dengue disease and hepatitis disease. It brings attention towards the suite of machine learning algorithms and tools that are used for the analysis of diseases and decision-making process accordingly.",
"title": ""
},
{
"docid": "88f7c90be37cc4cb863fccbaf3a3a9e0",
"text": "A tensegrity is finite configuration of points in Ed suspended rigidly by inextendable cables and incompressable struts. Here it is explained how a stress-energy function, given by a symmetric stress matrix, can be used to create tensegrities that are globally rigid in the sense that the only configurations that satisfy the cable and strut constraints are congruent copies.",
"title": ""
},
{
"docid": "e85a019405a29e19670c99f9eabfff78",
"text": "Online shopping, different from traditional shopping behavior, is characterized with uncertainty, anonymity, and lack of control and potential opportunism. Therefore, trust is an important factor to facilitate online transactions. The purpose of this study is to explore the role of trust in consumer online purchase behavior. This study undertook a comprehensive survey of online customers having e-shopping experiences in Taiwan and we received 1258 valid questionnaires. The empirical results, using structural equation modeling, indicated that perceived ease of use and perceived usefulness affect have a significant impact on trust in e-commerce. Trust also has a significant influence on attitude towards online purchase. However, there is no significant impact from trust on the intention of online purchase.",
"title": ""
}
] | scidocsrr |
9b5f081b1b47f73031a51d2f0c654f10 | Group-based multi-trajectory modeling. | [
{
"docid": "e56bc26cd567aff51de3cb47f9682149",
"text": "Recent technological advances have expanded the breadth of available omic data, from whole-genome sequencing data, to extensive transcriptomic, methylomic and metabolomic data. A key goal of analyses of these data is the identification of effective models that predict phenotypic traits and outcomes, elucidating important biomarkers and generating important insights into the genetic underpinnings of the heritability of complex traits. There is still a need for powerful and advanced analysis strategies to fully harness the utility of these comprehensive high-throughput data, identifying true associations and reducing the number of false associations. In this Review, we explore the emerging approaches for data integration — including meta-dimensional and multi-staged analyses — which aim to deepen our understanding of the role of genetics and genomics in complex outcomes. With the use and further development of these approaches, an improved understanding of the relationship between genomic variation and human phenotypes may be revealed.",
"title": ""
}
] | [
{
"docid": "69d42340c09303b69eafb19de7170159",
"text": "Based on an example of translational motion, this report shows how to model and initialize the Kalman Filter. Basic rules about physical motion are introduced to point out, that the well-known laws of physical motion are a mere approximation. Hence, motion of non-constant velocity or acceleration is modelled by additional use of white noise. Special attention is drawn to the matrix initialization for use in the Kalman Filter, as, in general, papers and books do not give any hint on this; thus inducing the impression that initializing is not important and may be arbitrary. For unknown matrices many users of the Kalman Filter choose the unity matrix. Sometimes it works, sometimes it does not. In order to close this gap, initialization is shown on the example of human interactive motion. In contrast to measuring instruments with documented measurement errors in manuals, the errors generated by vision-based sensoring must be estimated carefully. Of course, the described methods may be adapted to other circumstances.",
"title": ""
},
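The passage above is about how to model and initialize a Kalman filter for translational motion; the short NumPy sketch below shows one conventional constant-velocity setup with a white-acceleration process-noise matrix and a deliberately large initial covariance. All numeric values (time step, noise levels, measurements) are illustrative assumptions rather than the report's figures.

```python
# Minimal constant-velocity Kalman filter for 1-D translational motion.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])                # we only measure position
q, r = 0.5, 1.0
Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])  # white-acceleration noise
R = np.array([[r]])

x = np.zeros((2, 1))                      # initial state guess
P = np.eye(2) * 1e3                       # large initial uncertainty

for z in [1.0, 1.2, 1.35, 1.6]:           # fake position measurements
    x, P = F @ x, F @ P @ F.T + Q                     # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)             # update
    P = (np.eye(2) - K @ H) @ P
print(x.ravel())
```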
{
"docid": "b150c18332645bf46e7f2e8ababbcfc4",
"text": "Wilkinson Power Dividers/Combiners The in-phase power combiners and dividers are important components of the RF and microwave transmitters when it is necessary to deliver a high level of the output power to antenna, especially in phased-array systems. In this case, it is also required to provide a high degree of isolation between output ports over some frequency range for identical in-phase signals with equal amplitudes. Figure 19(a) shows a planar structure of the basic parallel beam N-way divider/combiner, which provides a combination of powers from the N signal sources. Here, the input impedance of the N transmission lines (connected in parallel) with the characteristic impedance of Z0 each is equal to Z0/N. Consequently, an additional quarterwave transmission line with the characteristic impedance",
"title": ""
},
{
"docid": "72a1798a864b4514d954e1e9b6089ad8",
"text": "Clustering image pixels is an important image segmentation technique. While a large amount of clustering algorithms have been published and some of them generate impressive clustering results, their performance often depends heavily on user-specified parameters. This may be a problem in the practical tasks of data clustering and image segmentation. In order to remove the dependence of clustering results on user-specified parameters, we investigate the characteristics of existing clustering algorithms and present a parameter-free algorithm based on the DSets (dominant sets) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithms. First, we apply histogram equalization to the pairwise similarity matrix of input data and make DSets clustering results independent of user-specified parameters. Then, we extend the clusters from DSets with DBSCAN, where the input parameters are determined based on the clusters from DSets automatically. By merging the merits of DSets and DBSCAN, our algorithm is able to generate the clusters of arbitrary shapes without any parameter input. In both the data clustering and image segmentation experiments, our parameter-free algorithm performs better than or comparably with other algorithms with careful parameter tuning.",
"title": ""
},
{
"docid": "1306ec9eaa39a8c12acf08567ed733b2",
"text": "Energy restriction induces physiological effects that hinder further weight loss. Thus, deliberate periods of energy balance during weight loss interventions may attenuate these adaptive responses to energy restriction and thereby increase the efficiency of weight loss (i.e. the amount of weight or fat lost per unit of energy deficit). To address this possibility, we systematically searched MEDLINE, PreMEDLINE, PubMed and Cinahl and reviewed adaptive responses to energy restriction in 40 publications involving humans of any age or body mass index that had undergone a diet involving intermittent energy restriction, 12 with direct comparison to continuous energy restriction. Included publications needed to measure one or more of body weight, body mass index, or body composition before and at the end of energy restriction. 31 of the 40 publications involved 'intermittent fasting' of 1-7-day periods of severe energy restriction. While intermittent fasting appears to produce similar effects to continuous energy restriction to reduce body weight, fat mass, fat-free mass and improve glucose homeostasis, and may reduce appetite, it does not appear to attenuate other adaptive responses to energy restriction or improve weight loss efficiency, albeit most of the reviewed publications were not powered to assess these outcomes. Intermittent fasting thus represents a valid--albeit apparently not superior--option to continuous energy restriction for weight loss.",
"title": ""
},
{
"docid": "2974d042acbf8b7cfa5772aa6c27c5da",
"text": "Physical Unclonable Functions (PUFs) are cryptographic primitives that can be used to generate volatile secret keys for cryptographic operations and enable low-cost authentication of integrated circuits. Existing PUF designs mainly exploit variation effects on silicon and hence are not readily applicable for the authentication of printed circuit boards (PCBs). To tackle the above problem, in this paper, we propose a novel PUF device that is able to generate unique and stable IDs for individual PCB, namely BoardPUF. To be specific, we embed a number of capacitors in the internal layer of PCBs and utilize their variations for key generation. Then, by integrating a cryptographic primitive (e.g. hash function) into BoardPUF, we can effectively perform PCB authentication in a challenge-response manner. Our experimental results on fabricated boards demonstrate the efficacy of BoardPUF.",
"title": ""
},
{
"docid": "4a3951e865671f8c051f011e5e4459ae",
"text": "Intrusion Detection System (IDS) have become increasingly popular over the past years as an important network security technology to detect cyber attacks in a wide variety of network communication. IDS monitors' network or host system activities by collecting network information, and analyze this information for malicious activities. Cloud computing, with the concept of Software as a Service (SaaS) presents an exciting benefit when it enables providers to rent their services to users in perform complex tasks over the Internet. In addition, Cloud based services reduce a cost in investing new infrastructure, training new personnel, or licensing new software. In this paper, we introduce a novel framework based on Cloud computing called Cloud-based Intrusion Detection Service (CBIDS). This model enables the identification of malicious activities from different points of network and overcome the deficiency of classical intrusion detection. CBIDS can be implemented to detect variety of attacks in private and public Clouds.",
"title": ""
},
{
"docid": "9f9dcb320149d4a84bec8b1587b73aa2",
"text": "The sheer volume of multimedia contents generated by today's Internet services are stored in the cloud. The traditional indexing method associating the user-generated metadata with the content is vulnerable to the inaccuracy caused by the low quality of the metadata. While the content-based indexing does not depend on the error-prone metadata. However, the state-of-the-art research focuses on developing descriptive features and miss the system-oriented considerations when incorporating these features into the practical cloud computing systems. We propose an Update-Efficient and Parallel-Friendly content-based multimedia indexing system, called Partitioned Hash Forest (PHF). The PHF system incorporates the state-of-the-art content-based indexing models and multiple system-oriented optimizations. PHF contains an approximate content-based index and leverages the hierarchical memory system to support the high volume of updates. Additionally, the content-aware data partitioning and lock-free concurrency management module enable the parallel processing of the concurrent user requests. We evaluate PHF in terms of indexing accuracy and system efficiency by comparing it with the state-of-the-art content-based indexing algorithm and its variances. We achieve the significantly better accuracy with less resource consumption, around 37% faster in update processing and up to 2.5X throughput speedup in a multi-core platform comparing to other parallel-friendly designs.",
"title": ""
},
{
"docid": "5cc3d79d7bd762e8cfd9df658acae3fc",
"text": "With almost daily improvements in capabilities of artificial intelligence it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on AI Containment Problem we propose a number of guidelines which should help AI safety researchers to develop reliable sandboxing software for intelligent programs of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container.",
"title": ""
},
{
"docid": "bdd494b6a8a628025ab46b934f72495d",
"text": "This paper investigates the problem of robust H∞ output-feedback control for a class of nonlinear systems under unreliable communication links. The nonlinear plant is represented by a Takagi-Sugeno (T-S) uncertain fuzzy model, and the communication links between the plant and controller are assumed to be imperfect, i.e., data-packet dropouts occur intermittently, which is often the case in a network environment. Stochastic variables that satisfy the Bernoulli random-binary distribution are adopted to characterize the data-missing phenomenon, and the attention is focused on the design of a piecewise static-output-feedback (SOF) controller such that the closed-loop system is stochastically stable with a guaranteed H∞ performance. Based on a piecewise Lyapunov function combined with some novel convexifying techniques, the solutions to the problem are formulated in the form of linear matrix inequalities (LMIs). Finally, simulation examples are also provided to illustrate the effectiveness of the proposed approaches.",
"title": ""
},
{
"docid": "2ff08c8505e7d68304b63c6942feb837",
"text": "This paper presents a Retrospective Event Detection algorithm, called Eventy-Topic Detection (ETD), which automatically generates topics that describe events in a large, temporal text corpus. Our approach leverages the structure of the topic modeling framework, specifically the Latent Dirichlet Allocation (LDA), to generate topics which are then later labeled as Eventy-Topics or non-Eventy-Topics. The system first runs daily LDA topic models, then calculates the cosine similarity between the topics of the daily topic models, and then runs our novel Bump-Detection algorithm. Similar topics labeled as an Eventy-Topic are then grouped together. The algorithm is demonstrated on two Terabyte sized corpuses a Reuters News corpus and a Twitter corpus. Our method is evaluated on a human annotated test set. Our algorithm demonstrates its ability to accurately describe and label events in a temporal text corpus.",
"title": ""
},
{
"docid": "42cf4bd800000aed5e0599cba52ba317",
"text": "There is a significant amount of controversy related to the optimal amount of dietary carbohydrate. This review summarizes the health-related positives and negatives associated with carbohydrate restriction. On the positive side, there is substantive evidence that for many individuals, low-carbohydrate, high-protein diets can effectively promote weight loss. Low-carbohydrate diets (LCDs) also can lead to favorable changes in blood lipids (i.e., decreased triacylglycerols, increased high-density lipoprotein cholesterol) and decrease the severity of hypertension. These positives should be balanced by consideration of the likelihood that LCDs often lead to decreased intakes of phytochemicals (which could increase predisposition to cardiovascular disease and cancer) and nondigestible carbohydrates (which could increase risk for disorders of the lower gastrointestinal tract). Diets restricted in carbohydrates also are likely to lead to decreased glycogen stores, which could compromise an individual's ability to maintain high levels of physical activity. LCDs that are high in saturated fat appear to raise low-density lipoprotein cholesterol and may exacerbate endothelial dysfunction. However, for the significant percentage of the population with insulin resistance or those classified as having metabolic syndrome or prediabetes, there is much experimental support for consumption of a moderately restricted carbohydrate diet (i.e., one providing approximately 26%-44 % of calories from carbohydrate) that emphasizes high-quality carbohydrate sources. This type of dietary pattern would likely lead to favorable changes in the aforementioned cardiovascular disease risk factors, while minimizing the potential negatives associated with consumption of the more restrictive LCDs.",
"title": ""
},
{
"docid": "8a8dd829c9b7ce0c46ef1fd0736cc006",
"text": "In this paper, we introduce a generic inference hybrid framework for Convolutional Recurrent Neural Network (conv-RNN) of semantic modeling of text, seamless integrating the merits on extracting different aspects of linguistic information from both convolutional and recurrent neural network structures and thus strengthening the semantic understanding power of the new framework. Besides, based on conv-RNN, we also propose a novel sentence classification model and an attention based answer selection model with strengthening power for the sentence matching and classification respectively. We validate the proposed models on a very wide variety of data sets, including two challenging tasks of answer selection (AS) and five benchmark datasets for sentence classification (SC). To the best of our knowledge, it is by far the most complete comparison results in both AS and SC. We empirically show superior performances of conv-RNN in these different challenging tasks and benchmark datasets and also summarize insights on the performances of other state-of-the-arts methodologies.",
"title": ""
},
{
"docid": "40c2110eaefe79a096099aa5db7426fe",
"text": "One-hop broadcasting is the predominate form of network traffic in VANETs. Exchanging status information by broadcasting among the vehicles enhances vehicular active safety. Since there is no MAC layer broadcasting recovery for 802.11 based VANETs, efforts should be made towards more robust and effective transmission of such safety-related information. In this paper, a channel adaptive broadcasting method is proposed. It relies solely on channel condition information available at each vehicle by employing standard supported sequence number mechanisms. The proposed method is fully compatible with 802.11 and introduces no communication overhead. Simulation studies show that it outperforms standard broadcasting in term of reception rate and channel utilization.",
"title": ""
},
{
"docid": "2ff60b62850c325fa55904ccf4cb4070",
"text": "In DSM-IV-TR, trichotillomania (TTM) is classified as an impulse control disorder (not classified elsewhere), skin picking lacks its own diagnostic category (but might be diagnosed as an impulse control disorder not otherwise specified), and stereotypic movement disorder is classified as a disorder usually first diagnosed in infancy, childhood, or adolescence. ICD-10 classifies TTM as a habit and impulse disorder, and includes stereotyped movement disorders in a section on other behavioral and emotional disorders with onset usually occurring in childhood and adolescence. This article provides a focused review of nosological issues relevant to DSM-V, given recent empirical findings. This review presents a number of options and preliminary recommendations to be considered for DSM-V: (1) Although TTM fits optimally into a category of body-focused repetitive behavioral disorders, in a nosology comprised of relatively few major categories it fits best within a category of motoric obsessive-compulsive spectrum disorders, (2) available evidence does not support continuing to include (current) diagnostic criteria B and C for TTM in DSM-V, (3) the text for TTM should be updated to describe subtypes and forms of hair pulling, (4) there are persuasive reasons for referring to TTM as \"hair pulling disorder (trichotillomania),\" (5) diagnostic criteria for skin picking disorder should be included in DSM-V or in DSM-Vs Appendix of Criteria Sets Provided for Further Study, and (6) the diagnostic criteria for stereotypic movement disorder should be clarified and simplified, bringing them in line with those for hair pulling and skin picking disorder.",
"title": ""
},
{
"docid": "5e806d14356729d7c96dcf2d97ba9c30",
"text": "Recently, a variety of bioactive protein drugs have been available in large quantities as a result of advances in biotechnology. Such availability has prompted development of long-term protein delivery systems. Biodegradable microparticulate systems have been used widely for controlled release of protein drugs for days and months. The most widely used biodegradable polymer has been poly(d,l-lactic-co-glycolic acid) (PLGA). Protein-containing microparticles are usually prepared by the water/oil/water (W/O/W) double emulsion method, and variations of this method, such as solid/oil/water (S/O/W) and water/oil/oil (W/O/O), have also been used. Other methods of preparation include spray drying, ultrasonic atomization, and electrospray methods. The important factors in developing biodegradable microparticles for protein drug delivery are protein release profile (including burst release, duration of release, and extent of release), microparticle size, protein loading, encapsulation efficiency, and bioactivity of the released protein. Many studies used albumin as a model protein, and thus, the bioactivity of the release protein has not been examined. Other studies which utilized enzymes, insulin, erythropoietin, and growth factors have suggested that the right formulation to preserve bioactivity of the loaded protein drug during the processing and storage steps is important. The protein release profiles from various microparticle formulations can be classified into four distinct categories (Types A, B, C, and D). The categories are based on the magnitude of burst release, the extent of protein release, and the protein release kinetics followed by the burst release. The protein loading (i.e., the total amount of protein loaded divided by the total weight of microparticles) in various microparticles is 6.7+/-4.6%, and it ranges from 0.5% to 20.0%. Development of clinically successful long-term protein delivery systems based on biodegradable microparticles requires improvement in the drug loading efficiency, control of the initial burst release, and the ability to control the protein release kinetics.",
"title": ""
},
{
"docid": "a803773ad3d9fe09c2e24b26f96cadf8",
"text": "In this paper, we propose to use hardware performance counters (HPC) to detect malicious program modifications at load time (static) and at runtime (dynamic). HPC have been used for program characterization and testing, system testing and performance evaluation, and as side channels. We propose to use HPCs for static and dynamic integrity checking of programs.. The main advantage of HPC-based integrity checking is that it is almost free in terms of hardware cost; HPCs are built into almost all processors. The runtime performance overhead is minimal because we use the operating system for integrity checking, which is called anyway for process scheduling and other interrupts. Our preliminary results confirm that HPC very efficiently detect program modifications with very low cost.",
"title": ""
},
{
"docid": "1c6402b9dad05dd430eb522c6db9d70d",
"text": "In this paper, we describe SemEval-2013 Task 4: the definition, the data, the evaluation and the results. The task is to capture some of the meaning of English noun compounds via paraphrasing. Given a two-word noun compound, the participating system is asked to produce an explicitly ranked list of its free-form paraphrases. The list is automatically compared and evaluated against a similarly ranked list of paraphrases proposed by human annotators, recruited and managed through Amazon’s Mechanical Turk. The comparison of raw paraphrases is sensitive to syntactic and morphological variation. The “gold” ranking is based on the relative popularity of paraphrases among annotators. To make the ranking more reliable, highly similar paraphrases are grouped, so as to downplay superficial differences in syntax and morphology. Three systems participated in the task. They all beat a simple baseline on one of the two evaluation measures, but not on both measures. This shows that the task is difficult.",
"title": ""
},
{
"docid": "f71e7b267b78cd0d0227a4c9ff52fec9",
"text": "We present an iterative algorithm for calibrating vector network analyzers based on orthogonal distance regression. The algorithm features a robust, yet efficient, search algorithm, an error analysis that includes both random and systematic errors, a full covariance matrix relating calibration and measurement errors, 95% coverage factors, and an easy-to-use user interface that supports a wide variety of calibration standards. We also discuss evidence that the algorithm outperforms theMultiCal software package in the presence of measurement errors and accurately estimates the uncertainty of its results.",
"title": ""
}
] | scidocsrr |
1c34baee9829a1688c72bce6ddcf45a1 | gSpan: Graph-Based Substructure Pattern Mining | [
{
"docid": "3429145583d25ba1d603b5ade11f4312",
"text": "Sequential pattern mining is an important data mining problem with broad applications. It is challenging since one may need to examine a combinatorially explosive number of possible subsequence patterns. Most of the previously developed sequential pattern mining methods follow the methodology of which may substantially reduce the number of combinations to be examined. However, still encounters problems when a sequence database is large and/or when sequential patterns to be mined are numerous and/or long. In this paper, we propose a novel sequential pattern mining method, called PrefixSpan (i.e., Prefix-projected Sequential pattern mining), which explores prefixprojection in sequential pattern mining. PrefixSpan mines the complete set of patterns but greatly reduces the efforts of candidate subsequence generation. Moreover, prefix-projection substantially reduces the size of projected databases and leads to efficient processing. Our performance study shows that PrefixSpan outperforms both the -based GSP algorithm and another recently proposed method, FreeSpan, in mining large sequence",
"title": ""
}
] | [
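To give a feel for the prefix-projection idea in the PrefixSpan abstract above, here is a deliberately simplified Python sketch for sequences of single items: each frequent item extends the current prefix, and the search recurses only into the projected suffixes. It ignores itemset elements, gap constraints and the pseudo-projection optimizations of the real algorithm, so treat it as an illustration rather than the published method.

```python
# Tiny recursive illustration of prefix-projection: grow a frequent prefix one
# item at a time and recurse only into the projected suffix databases.
from collections import Counter

def prefixspan(db, min_sup, prefix=()):
    counts = Counter()
    for seq in db:
        counts.update(set(seq))
    results = []
    for item, sup in counts.items():
        if sup >= min_sup:
            new_prefix = prefix + (item,)
            results.append((new_prefix, sup))
            projected = [seq[seq.index(item) + 1:] for seq in db if item in seq]
            results += prefixspan(projected, min_sup, new_prefix)
    return results

db = [["a", "b", "c"], ["a", "c"], ["a", "b", "c", "d"], ["b", "c"]]
for pat, sup in prefixspan(db, min_sup=3):
    print(pat, sup)
```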
{
"docid": "849613d7d30fb1ad10244d7e209f9fa8",
"text": "The tilt coordination technique is used in driving simulation for reproducing a sustained linear horizontal acceleration by tilting the simulator cabin. If combined with the translation motion of the simulator, this technique increases the acceleration rendering capabilities of the whole system. To perform this technique correctly, the rotational motion must be slow to remain under the perception threshold and thus be unnoticed by the driver. However, the acceleration to render changes quickly. Between the slow rotational motion limited by the tilt threshold and the fast change of acceleration to render, the design of the coupling between motions of rotation and translation plays a critical role in the realism of a driving simulator. This study focuses on the acceptance by drivers of different configurations for tilt restitution in terms of maximum tilt angle, tilt rate, and tilt acceleration. Two experiments were conducted, focusing respectively on roll tilt for a 0.2 Hz slaloming task and on pitch tilt for an acceleration/deceleration task. The results show what thresholds have to be followed in terms of amplitude, rate, and acceleration. These results are far superior to the standard human perception thresholds found in the literature.",
"title": ""
},
{
"docid": "dac5090c367ef05c8863da9c7979a619",
"text": "Full vinyl polysiloxane casts of the vagina were obtained from 23 Afro-American, 39 Caucasian and 15 Hispanic women in lying, sitting and standing positions. A new shape, the pumpkin seed, was found in 40% of Afro-American women, but not in Caucasians or Hispanics. Analyses of cast and introital measurements revealed: (1) posterior cast length is significantly longer, anterior cast length is significantly shorter and cast width is significantly larger in Hispanics than in the other two groups and (2) the Caucasian introitus is significantly greater than that of the Afro-American subject.",
"title": ""
},
{
"docid": "d54615bc5460d824aee45a8ac2c8009d",
"text": "In recent years, Deep Learning has become the go-to solution for a broad range of applications, often outperforming state-of-the-art. However, it is important, for both theoreticians and practitioners, to gain a deeper understanding of the difficulties and limitations associated with common approaches and algorithms. We describe four types of simple problems, for which the gradientbased algorithms commonly used in deep learning either fail or suffer from significant difficulties. We illustrate the failures through practical experiments, and provide theoretical insights explaining their source, and how they might be remedied.",
"title": ""
},
{
"docid": "5d9d507a8bdd0d356d7ac220d9b0ef70",
"text": "This paper provides insights of possible plagiarism detection approach based on modern technologies – programming assignment versioning, auto-testing and abstract syntax tree comparison to estimate code similarities. Keywords—automation; assignment; testing; continuous integration INTRODUCTION In the emerging world of information technologies, a growing number of students is choosing this specialization for their education. Therefore, the number of homework and laboratory research assignments that should be tested is also growing. The majority of these tasks is based on the necessity to implement some algorithm as a small program. This article discusses the possible solutions to the problem of automated testing of programming laboratory research assignments. The course “Algorithmization and Programming of Solutions” is offered to all the first-year students of The Faculty of Computer Science and Information Technology (~500 students) in Riga Technical University and it provides the students the basics of the algorithmization of computing processes and the technology of program design using Java programming language (the given course and the University will be considered as an example of the implementation of the automated testing). During the course eight laboratory research assignments are planned, where the student has to develop an algorithm, create a program and submit it to the education portal of the University. The VBA test program was designed as one of the solutions, the requirements for each laboratory assignment were determined and the special tests have been created. At some point, however, the VBA offered options were no longer able to meet the requirements, therefore the activities on identifying the requirements for the automation of the whole cycle of programming work reception, testing and evaluation have begun. I. PLAGIARISM DETECTION APPROACHES To identify possible plagiarism detection techniques, it is imperative to define scoring or detecting threshold. Surely it is not an easy task, since only identical works can be considered as “true” plagiarism. In all other cases a person must make his decision whether two pieces of code are identical by their means or not. However, it is possible to outline some widespread approaches of assessment comparison. A. Manual Work Comparison In this case, all works must be compared one-by-one. Surely, this approach will lead to progressively increasing error rate due to human memory and cognitive function limitations. Large student group homework assessment verification can take long time, which is another contributing factor to errorrate increase. B. Diff-tool Application It is possible to compare two code fragments using semiautomated diff tool which provides information about Levenshtein distance between fragments. Although several visualization tools exist, it is quite easy to fool algorithm to believe that a code has multiple different elements in it, but all of them are actually another name for variables/functions/etc. without any additional contribution. C. Abstract Syntax Tree (AST) comparison Abstract syntax tree is a tree representation of the abstract syntactic structure of source code written in a programming language. Each node of the tree denotes a construct occurring in the source code. Example of AST is shown on Fig. 1.syntax tree is a tree representation of the abstract syntactic structure of source code written in a programming language. Each node of the tree denotes a construct occurring in the source code. 
Example of AST is shown on Fig. 1.",
"title": ""
},
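As a rough illustration of the AST-comparison idea described in the passage above, the sketch below compares only the node-type structure of two Python fragments, so renamed identifiers cannot mask copied logic. The similarity measure and the flagging threshold are illustrative assumptions, not the tool actually used in the course.

```python
# Structural similarity of two code fragments via their abstract syntax trees.
# Identifier names are ignored because only node types are compared.
import ast
import difflib

def ast_fingerprint(source):
    """Return the sequence of AST node type names for a piece of source code."""
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]

def similarity(src_a, src_b):
    """Structural similarity in [0, 1] between two code fragments."""
    return difflib.SequenceMatcher(
        None, ast_fingerprint(src_a), ast_fingerprint(src_b)
    ).ratio()

work_1 = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
work_2 = "def summa(values):\n    acc = 0\n    for v in values:\n        acc += v\n    return acc\n"

if similarity(work_1, work_2) > 0.9:  # illustrative review threshold
    print("flag this pair of submissions for manual review")
```

A fuller system would also canonicalize literals and reorderable constructs, but even this bare comparison defeats the simple renaming trick mentioned for diff-based tools.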
{
"docid": "bfb189f8052f41fe1491d8d71f9586f1",
"text": "In this paper, we introduce a novel reconfigurable architecture, named 3D field-programmable gate array (3D nFPGA), which utilizes 3D integration techniques and new nanoscale materials synergistically. The proposed architecture is based on CMOS nanohybrid techniques that incorporate nanomaterials such as carbon nanotube bundles and nanowire crossbars into CMOS fabrication process. This architecture also has built-in features for fault tolerance and heat alleviation. Using unique features of FPGAs and a novel 3D stacking method enabled by the application of nanomaterials, 3D nFPGA obtains a 4x footprint reduction comparing to the traditional CMOS-based 2D FPGAs. With a customized design automation flow, we evaluate the performance and power of 3D nFPGA driven by the 20 largest MCNC benchmarks. Results demonstrate that 3D nFPGA is able to provide a performance gain of 2.6 x with a small power overhead comparing to the traditional 2D FPGA architecture.",
"title": ""
},
{
"docid": "627587e2503a2555846efb5f0bca833b",
"text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.",
"title": ""
},
{
"docid": "a4c76e58074a42133a59a31d9022450d",
"text": "This article reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy. It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function. We could have just scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is an inevitable consequence of active exchange with the environment. Furthermore, one can see easily how constructs like memory, attention, value, reinforcement and salience might disclose their simple relationships within this framework.",
"title": ""
},
{
"docid": "4f91d43bf0185ddb5969d5bb13cb3b7e",
"text": "Man-made objects usually exhibit descriptive curved features (i.e., curve networks). The curve network of an object conveys its high-level geometric and topological structure. We present a framework for extracting feature curve networks from unstructured point cloud data. Our framework first generates a set of initial curved segments fitting highly curved regions. We then optimize these curved segments to respect both data fitting and structural regularities. Finally, the optimized curved segments are extended and connected into curve networks using a clustering method. To facilitate effectiveness in case of severe missing data and to resolve ambiguities, we develop a user interface for completing the curve networks. Experiments on various imperfect point cloud data validate the effectiveness of our curve network extraction framework. We demonstrate the usefulness of the extracted curve networks for surface reconstruction from incomplete point clouds.",
"title": ""
},
{
"docid": "062fb8603fe65ddde2be90bac0519f97",
"text": "Meta-heuristic methods represent very powerful tools for dealing with hard combinatorial optimization problems. However, real life instances usually cannot be treated efficiently in \"reasonable\" computing times. Moreover, a major issue in metaheuristic design and calibration is to make them robust, i.e., to provide high performance solutions for a variety of problem settings. Parallel meta-heuristics aim to address both issues. The objective of this chapter is to present a state-of-the-art survey of the main parallel meta-heuristic ideas and strategies, and to discuss general design principles applicable to all meta-heuristic classes. To achieve this goal, we explain various paradigms related to parallel meta-heuristic development, where communications, synchronization and control aspects are the most relevant. We also discuss implementation issues, namely the influence of the target architecture on parallel execution of meta-heuristics, pointing out the characteristics of shared and distributed memory multiprocessor systems. All these topics are illustrated by examples from recent literature. These examples are related to the parallelization of various meta-heuristic methods, but we focus here on Variable Neighborhood Search and Bee Colony Optimization.",
"title": ""
},
{
"docid": "1204d1695e39bb7897b6771c445d809e",
"text": "The known disorders of cholesterol biosynthesis have expanded rapidly since the discovery that Smith-Lemli-Opitz syndrome is caused by a deficiency of 7-dehydrocholesterol. Each of the six now recognized sterol disorders-mevalonic aciduria, Smith-Lemli-Opitz syndrome, desmosterolosis, Conradi-Hünermann syndrome, CHILD syndrome, and Greenberg dysplasia-has added to our knowledge of the relationship between cholesterol metabolism and embryogenesis. One of the most important lessons learned from the study of these disorders is that abnormal cholesterol metabolism impairs the function of the hedgehog class of embryonic signaling proteins, which help execute the vertebrate body plan during the earliest weeks of gestation. The study of the enzymes and genes in these several syndromes has also expanded and better delineated an important class of enzymes and proteins with diverse structural functions and metabolic actions that include sterol biosynthesis, nuclear transcriptional signaling, regulation of meiosis, and even behavioral modulation.",
"title": ""
},
{
"docid": "80ee585d49685a24a2011a1ddc27bb55",
"text": "A developmental model of antisocial behavior is outlined. Recent findings are reviewed that concern the etiology and course of antisocial behavior from early childhood through adolescence. Evidence is presented in support of the hypothesis that the route to chronic delinquency is marked by a reliable developmental sequence of experiences. As a first step, ineffective parenting practices are viewed as determinants for childhood conduct disorders. The general model also takes into account the contextual variables that influence the family interaction process. As a second step, the conduct-disordered behaviors lead to academic failure and peer rejection. These dual failures lead, in turn, to increased risk for depressed mood and involvement in a deviant peer group. This third step usually occurs during later childhood and early adolescence. It is assumed that children following this developmental sequence are at high risk for engaging in chronic delinquent behavior. Finally, implications for prevention and intervention are discussed.",
"title": ""
},
{
"docid": "3a4f8e1a1401bc77b9d847b69d461746",
"text": "This paper presents a family of techniques that we call congealing for modeling image classes from data. The idea is to start with a set of images and make them appear as similar as possible by removing variability along the known axes of variation. This technique can be used to eliminate \"nuisance\" variables such as affine deformations from handwritten digits or unwanted bias fields from magnetic resonance images. In addition to separating and modeling the latent images - i.e., the images without the nuisance variables - we can model the nuisance variables themselves, leading to factorized generative image models. When nuisance variable distributions are shared between classes, one can share the knowledge learned in one task with another task, leading to efficient learning. We demonstrate this process by building a handwritten digit classifier from just a single example of each class. In addition to applications in handwritten character recognition, we describe in detail the application of bias removal from magnetic resonance images. Unlike previous methods, we use a separate, nonparametric model for the intensity values at each pixel. This allows us to leverage the data from the MR images of different patients to remove bias from each other. Only very weak assumptions are made about the distributions of intensity values in the images. In addition to the digit and MR applications, we discuss a number of other uses of congealing and describe experiments about the robustness and consistency of the method.",
"title": ""
},
{
"docid": "93ae39ed7b4d6b411a2deb9967e2dc7d",
"text": "This paper presents fundamental results about how zero-curvature (paper) surfaces behave near creases and apices of cones. These entities are natural generalizations of the edges and vertices of piecewise-planar surfaces. Consequently, paper surfaces may furnish a richer and yet still tractable class of surfaces for computer-aided design and computer graphics applications than do polyhedral surfaces.",
"title": ""
},
{
"docid": "4aeefa15b326ed583c9f922d7b035ff6",
"text": "In this paper, we present a Self-Supervised Neural Aggregation Network (SS-NAN) for human parsing. SS-NAN adaptively learns to aggregate the multi-scale features at each pixel \"address\". In order to further improve the feature discriminative capacity, a self-supervised joint loss is adopted as an auxiliary learning strategy, which imposes human joint structures into parsing results without resorting to extra supervision. The proposed SS-NAN is end-to-end trainable. SS-NAN can be integrated into any advanced neural networks to help aggregate features regarding the importance at different positions and scales and incorporate rich high-level knowledge regarding human joint structures from a global perspective, which in turn improve the parsing results. Comprehensive evaluations on the recent Look into Person (LIP) and the PASCAL-Person-Part benchmark datasets demonstrate the significant superiority of our method over other state-of-the-arts.",
"title": ""
},
{
"docid": "37637ca24397aba35e1e4926f1a94c91",
"text": "We propose a structured prediction architecture, which exploits the local generic features extracted by Convolutional Neural Networks and the capacity of Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNN that sweep the image horizontally and vertically in both directions, encoding patches or activations, and providing relevant global information. Moreover, ReNet layers are stacked on top of pre-trained convolutional layers, benefiting from generic local features. Upsampling layers follow ReNet layers to recover the original image resolution in the final predictions. The proposed ReSeg architecture is efficient, flexible and suitable for a variety of semantic segmentation tasks. We evaluate ReSeg on several widely-used semantic segmentation datasets: Weizmann Horse, Oxford Flower, and CamVid, achieving stateof-the-art performance. Results show that ReSeg can act as a suitable architecture for semantic segmentation tasks, and may have further applications in other structured prediction problems. The source code and model hyperparameters are available on https://github.com/fvisin/reseg.",
"title": ""
},
{
"docid": "9cbeb94f97635cf115e3c19986b0acce",
"text": "This paper presents a hybrid algorithm for parameter estimation of synchronous generator. For large-residual problems (i.e., f(x) is large or f(x) is severely nonlinear), the performance of the Gauss-Newton method and Levenberg-Marquardt method is usually poor, and the slow convergence even causes iteration emergence divergence. The Quasi-Newton method can superlinearly converge, but it is not robust in the global stage of the iteration. Hybrid algorithm combining the two methods above is proved globally convergent with a high convergence speed through the example of synchronous generator parameter identification.",
"title": ""
},
{
"docid": "094906bcd076ae3207ba04755851c73a",
"text": "The paper describes our approach for SemEval-2018 Task 1: Affect Detection in Tweets. We perform experiments with manually compelled sentiment lexicons and word embeddings. We test their performance on twitter affect detection task to determine which features produce the most informative representation of a sentence. We demonstrate that general-purpose word embeddings produces more informative sentence representation than lexicon features. However, combining lexicon features with embeddings yields higher performance than embeddings alone.",
"title": ""
},
{
"docid": "8a30f829e308cb75164d1a076fa99390",
"text": "This paper proposes a planning method based on forward path generation and backward tracking algorithm for Automatic Parking Systems, especially suitable for backward parking situations. The algorithm is based on the steering property that backward moving trajectory coincides with the forward moving trajectory for the identical steering angle. The basic path planning is divided into two segments: a collision-free locating segment and an entering segment that considers the continuous steering angles for connecting the two paths. MATLAB simulations were conducted, along with experiments involving parallel and perpendicular situations.",
"title": ""
},
{
"docid": "dfade03850a7e0d27c76994e606ed078",
"text": "History of mental illness is a major factor behind suicide risk and ideation. However research efforts toward characterizing and forecasting this risk is limited due to the paucity of information regarding suicide ideation, exacerbated by the stigma of mental illness. This paper fills gaps in the literature by developing a statistical methodology to infer which individuals could undergo transitions from mental health discourse to suicidal ideation. We utilize semi-anonymous support communities on Reddit as unobtrusive data sources to infer the likelihood of these shifts. We develop language and interactional measures for this purpose, as well as a propensity score matching based statistical approach. Our approach allows us to derive distinct markers of shifts to suicidal ideation. These markers can be modeled in a prediction framework to identify individuals likely to engage in suicidal ideation in the future. We discuss societal and ethical implications of this research.",
"title": ""
},
{
"docid": "5d546a8d21859a057d36cdbd3fa7f887",
"text": "In 1984, a prospective cohort study, Coronary Artery Risk Development in Young Adults (CARDIA) was initiated to investigate life-style and other factors that influence, favorably and unfavorably, the evolution of coronary heart disease risk factors during young adulthood. After a year of planning and protocol development, 5,116 black and white women and men, age 18-30 years, were recruited and examined in four urban areas: Birmingham, Alabama; Chicago, Illinois; Minneapolis, Minnesota, and Oakland, California. The initial examination included carefully standardized measurements of major risk factors as well as assessments of psychosocial, dietary, and exercise-related characteristics that might influence them, or that might be independent risk factors. This report presents the recruitment and examination methods as well as the mean levels of blood pressure, total plasma cholesterol, height, weight and body mass index, and the prevalence of cigarette smoking by age, sex, race and educational level. Compared to recent national samples, smoking is less prevalent in CARDIA participants, and weight tends to be greater. Cholesterol levels are representative and somewhat lower blood pressures in CARDIA are probably, at least in part, due to differences in measurement methods. Especially noteworthy among several differences in risk factor levels by demographic subgroup, were a higher body mass index among black than white women and much higher prevalence of cigarette smoking among persons with no more than a high school education than among those with more education.",
"title": ""
}
] | scidocsrr |
57976eabf115bf9ce2fe2d70fe8d36c9 | An Empirical Study on the Usage of the Swift Programming Language | [
{
"docid": "40d7847859a974d2a91cccab55ba625b",
"text": "Programming question and answer (Q&A) websites, such as Stack Overflow, leverage the knowledge and expertise of users to provide answers to technical questions. Over time, these websites turn into repositories of software engineering knowledge. Such knowledge repositories can be invaluable for gaining insight into the use of specific technologies and the trends of developer discussions. Previous work has focused on analyzing the user activities or the social interactions in Q&A websites. However, analyzing the actual textual content of these websites can help the software engineering community to better understand the thoughts and needs of developers. In the article, we present a methodology to analyze the textual content of Stack Overflow discussions. We use latent Dirichlet allocation (LDA), a statistical topic modeling technique, to automatically discover the main topics present in developer discussions. We analyze these discovered topics, as well as their relationships and trends over time, to gain insights into the development community. Our analysis allows us to make a number of interesting observations, including: the topics of interest to developers range widely from jobs to version control systems to C# syntax; questions in some topics lead to discussions in other topics; and the topics gaining the most popularity over time are web development (especially jQuery), mobile applications (especially Android), Git, and MySQL.",
"title": ""
}
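The passage above describes fitting LDA topics to developer discussions; a minimal sketch of that kind of topic modelling on a toy set of question titles is shown below. scikit-learn is assumed here purely for illustration and is not necessarily the tooling the study used.

```python
# Fit a tiny LDA model to a handful of question titles and print
# the top terms per topic, mimicking the kind of analysis described above.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

titles = [
    "how to resolve a git merge conflict",
    "undo the last commit in git",
    "android activity lifecycle explained",
    "pass data between android activities",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(titles)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {topic_idx}: {top_terms}")
```

On a real corpus the documents would be full posts rather than titles, and trends over time could be read off by fitting per-period document-topic proportions.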
] | [
{
"docid": "9688fbd2b207937bff340ee1cf6878b3",
"text": "AIMS\n(a) To investigate how widespread is the use of long term treatment without improvement amongst clinicians treating individuals with low back pain. (b) To study the beliefs behind the reasons why chiropractors, osteopaths and physiotherapists continue to treat people whose low back pain appears not to be improving.\n\n\nMETHODS\nA mixed methods study, including a questionnaire survey and qualitative analysis of semi-structured interviews. Questionnaire survey; 354/600 (59%) clinicians equally distributed between chiropractic, osteopathy and physiotherapy professions. Interview study; a purposive sample of fourteen clinicians from each profession identified from the survey responses. Methodological techniques ranged from grounded theory analysis to sorting of categories by both the research team and the subjects themselves.\n\n\nRESULTS\nAt least 10% of each of the professions reported that they continued to treat patients with low back pain who showed almost no improvement for over three months. There is some indication that this is an underestimate. reasons for continuing unsuccessful management of low back pain were not found to be primarily monetary in nature; rather it appears to have much more to do with the scope of care that extends beyond issues addressed in the current physical therapy guidelines. The interview data showed that clinicians viewed their role as including health education and counselling rather than a 'cure or refer' approach. Additionally, participants raised concerns that discharging patients from their care meant sending them to into a therapeutic void.\n\n\nCONCLUSION\nLong-term treatment of patients with low back pain without objective signs of improvement is an established practice in a minority of clinicians studied. This approach contrasts with clinical guidelines that encourage self-management, reassurance, re-activation, and involvement of multidisciplinary teams for patients who do not recover. Some of the rationale provided makes a strong case for ongoing contact. However, the practice is also maintained through poor communication with other professions and mistrust of the healthcare system.",
"title": ""
},
{
"docid": "0a0f4f5fc904c12cacb95e87f62005d0",
"text": "This text is intended to provide a balanced introduction to machine vision. Basic concepts are introduced with only essential mathematical elements. The details to allow implementation and use of vision algorithm in practical application are provided, and engineering aspects of techniques are emphasized. This text intentionally omits theories of machine vision that do not have sufficient practical applications at the time.",
"title": ""
},
{
"docid": "ad1000d0975bb0c605047349267c5e47",
"text": "A systematic review of randomized clinical trials was conducted to evaluate the acceptability and usefulness of computerized patient education interventions. The Columbia Registry, MEDLINE, Health, BIOSIS, and CINAHL bibliographic databases were searched. Selection was based on the following criteria: (1) randomized controlled clinical trials, (2) educational patient-computer interaction, and (3) effect measured on the process or outcome of care. Twenty-two studies met the selection criteria. Of these, 13 (59%) used instructional programs for educational intervention. Five studies (22.7%) tested information support networks, and four (18%) evaluated systems for health assessment and history-taking. The most frequently targeted clinical application area was diabetes mellitus (n = 7). All studies, except one on the treatment of alcoholism, reported positive results for interactive educational intervention. All diabetes education studies, in particular, reported decreased blood glucose levels among patients exposed to this intervention. Computerized educational interventions can lead to improved health status in several major areas of care, and appear not to be a substitute for, but a valuable supplement to, face-to-face time with physicians.",
"title": ""
},
{
"docid": "33789f718bc299fa63762f72595dcd77",
"text": "Resource allocation efficiency and energy consumption are among the top concerns to today's Cloud data center. Finding the optimal point where users' multiple job requests can be accomplished timely with minimum electricity and hardware cost is one of the key factors for system designers and managers to optimize the system configurations. Understanding the characteristics of the distribution of user task is an essential step for this purpose. At large-scale Cloud Computing data centers, a precise workload prediction will significantly help designers and operators to schedule hardware/software resources and power supplies in a more efficient manner, and make appropriate decisions to upgrade the Cloud system when the workload grows. While a lot of study has been conducted for hypervisor-based Cloud, container-based virtualization is becoming popular because of the low overhead and high efficiency in utilizing computing resources. In this paper, we have studied a set of real-world container data center traces from part of Google's cluster. We investigated the distribution of job duration, waiting time and machine utilization and the number of jobs submitted in a fix time period. Based on the quantitative study, an Ensemble Workload Prediction (EnWoP) method and a novel prediction evaluation parameter called Cloud Workload Correction Rate (C-Rate) have been proposed. The experimental results have verified that the EnWoP method achieved high prediction accuracy and the C-Rate evaluates the prediction methods more objective.",
"title": ""
},
{
"docid": "79f1473d4eb0c456660543fda3a648f1",
"text": "Weexamine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. However, domains with long horizons and sparse rewards, such as Montezuma’s Revenge and Venture, remain challenging for existing methods. Methods using abstraction [5, 13] have shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. We construct toy domains that elucidate the problem of long horizons, sparse rewards and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep QNetworks [11] on Montezuma’s Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.",
"title": ""
},
{
"docid": "6a4a76e48ff8bfa9ad17f116c3258d49",
"text": "Deep domain adaptation has emerged as a new learning technique to address the lack of massive amounts of labeled data. Compared to conventional methods, which learn shared feature subspaces or reuse important source instances with shallow representations, deep domain adaptation methods leverage deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning. There have been comprehensive surveys for shallow domain adaptation, but few timely reviews the emerging deep learning based methods. In this paper, we provide a comprehensive survey of deep domain adaptation methods for computer vision applications with four major contributions. First, we present a taxonomy of different deep domain adaptation scenarios according to the properties of data that define how two domains are diverged. Second, we summarize deep domain adaptation approaches into several categories based on training loss, and analyze and compare briefly the state-of-the-art methods under these categories. Third, we overview the computer vision applications that go beyond image classification, such as face recognition, semantic segmentation and object detection. Fourth, some potential deficiencies of current methods and several future directions are highlighted.",
"title": ""
},
{
"docid": "c0e70347999c028516eb981a15b8a6c8",
"text": "Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last. fm, Library Thing, and Amazon.",
"title": ""
},
{
"docid": "9def0e866bb96a64d2629ad2ec208ebc",
"text": "Article history: Received 6 March 2008 Received in revised form 14 August 2008 Accepted 6 October 2008",
"title": ""
},
{
"docid": "0ee3f8fcb319eedbe160e57db8a4b3ed",
"text": "Dissolved gas analysis (DGA) is used to assess the condition of power transformers. It uses the concentrations of various gases dissolved in the transformer oil due to decomposition of the oil and paper insulation. DGA has gained worldwide acceptance as a method for the detection of incipient faults in transformers.",
"title": ""
},
{
"docid": "13529522be402878286138168f264478",
"text": "I. Cantador (), P. Castells Universidad Autónoma de Madrid 28049 Madrid, Spain e-mails: [email protected], [email protected] Abstract An increasingly important type of recommender systems comprises those that generate suggestions for groups rather than for individuals. In this chapter, we revise state of the art approaches on group formation, modelling and recommendation, and present challenging problems to be included in the group recommender system research agenda in the context of the Social Web.",
"title": ""
},
{
"docid": "2002d3a2ed0e9d96c3e68b5b30dc202b",
"text": "This paper summarizes the current knowledge regarding the possible modes of action and nutritional factors involved in the use of essential oils (EOs) for swine and poultry. EOs have recently attracted increased interest as feed additives to be fed to swine and poultry, possibly replacing the use of antibiotic growth promoters which have been prohibited in the European Union since 2006. In general, EOs enhance the production of digestive secretions and nutrient absorption, reduce pathogenic stress in the gut, exert antioxidant properties and reinforce the animal’s immune status, which help to explain the enhanced performance observed in swine and poultry. However, the mechanisms involved in causing this growth promotion are far from being elucidated, since data on the complex gut ecosystem, gut function, in vivo oxidative status and immune system are still lacking. In addition, limited information is available regarding the interaction between EOs and feed ingredients or other feed additives (especially pro- or prebiotics and organic acids). This knowledge may help feed formulators to better utilize EOs when they formulate diets for poultry and swine.",
"title": ""
},
{
"docid": "cefcd78be7922f4349f1bb3aa59d2e1d",
"text": "The paper presents performance analysis of modified SEPIC dc-dc converter with low input voltage and wide output voltage range. The operational analysis and the design is done for the 380W power output of the modified converter. The simulation results of modified SEPIC converter are obtained with PI controller for the output voltage. The results obtained with the modified converter are compared with the basic SEPIC converter topology for the rise time, peak time, settling time and steady state error of the output response for open loop. Voltage tracking curve is also shown for wide output voltage range. I. Introduction Dc-dc converters are widely used in regulated switched mode dc power supplies and in dc motor drive applications. The input to these converters is often an unregulated dc voltage, which is obtained by rectifying the line voltage and it will therefore fluctuate due to variations of the line voltages. Switched mode dc-dc converters are used to convert this unregulated dc input into a controlled dc output at a desired voltage level. The recent growth of battery powered applications and low voltage storage elements are increasing the demand of efficient step-up dc–dc converters. Typical applications are in adjustable speed drives, switch-mode power supplies, uninterrupted power supplies, and utility interface with nonconventional energy sources, battery energy storage systems, battery charging for electric vehicles, and power supplies for telecommunication systems etc.. These applications demand high step-up static gain, high efficiency and reduced weight, volume and cost. The step-up stage normally is the critical point for the design of high efficiency converters due to the operation with high input current and high output voltage [1]. The boost converter topology is highly effective in these applications but at low line voltage in boost converter, the switching losses are high because the input current has the maximum value and the highest step-up conversion is required. The inductor has to be oversized for the large current at low line input. As a result, a boost converter designed for universal-input applications is heavily oversized compared to a converter designed for a narrow range of input ac line voltage [2]. However, recently new non-isolated dc–dc converter topologies with basic boost are proposed, showing that it is possible to obtain high static gain, low voltage stress and low losses, improving the performance with respect to the classical topologies. Some single stage high power factor rectifiers are presented in [3-6]. A new …",
"title": ""
},
{
"docid": "346ce9d0377f94f268479d578b700e9c",
"text": "From a system architecture perspective, 3D technology can satisfy the high memory bandwidth demands that future multicore/manycore architectures require. This article presents a 3D DRAM architecture design and the potential for using 3D DRAM stacking for both L2 cache and main memory in 3D multicore architecture.",
"title": ""
},
{
"docid": "d7780a122b51adc30f08eeb13af78bd1",
"text": "Malware sandboxes, widely used by antivirus companies, mobile application marketplaces, threat detection appliances, and security researchers, face the challenge of environment-aware malware that alters its behavior once it detects that it is being executed on an analysis environment. Recent efforts attempt to deal with this problem mostly by ensuring that well-known properties of analysis environments are replaced with realistic values, and that any instrumentation artifacts remain hidden. For sandboxes implemented using virtual machines, this can be achieved by scrubbing vendor-specific drivers, processes, BIOS versions, and other VM-revealing indicators, while more sophisticated sandboxes move away from emulation-based and virtualization-based systems towards bare-metal hosts. We observe that as the fidelity and transparency of dynamic malware analysis systems improves, malware authors can resort to other system characteristics that are indicative of artificial environments. We present a novel class of sandbox evasion techniques that exploit the \"wear and tear\" that inevitably occurs on real systems as a result of normal use. By moving beyond how realistic a system looks like, to how realistic its past use looks like, malware can effectively evade even sandboxes that do not expose any instrumentation indicators, including bare-metal systems. We investigate the feasibility of this evasion strategy by conducting a large-scale study of wear-and-tear artifacts collected from real user devices and publicly available malware analysis services. The results of our evaluation are alarming: using simple decision trees derived from the analyzed data, malware can determine that a system is an artificial environment and not a real user device with an accuracy of 92.86%. As a step towards defending against wear-and-tear malware evasion, we develop statistical models that capture a system's age and degree of use, which can be used to aid sandbox operators in creating system images that exhibit a realistic wear-and-tear state.",
"title": ""
},
{
"docid": "a826fbbf8919dfdef901b1acc2a8167c",
"text": "This paper proposes a new scheme for multi-image projective reconstruction based on a projective grid space. The projective grid space is defined by two basis views and the fundamental matrix relating these views. Given fundamental matrices relating other views to each of the two basis views, this projective grid space can be related to any view. In the projective grid space as a general space that is related to all images, a projective shape can be reconstructed from all the images of weakly calibrated cameras. The projective reconstruction is one way to reduce the effort of the calibration because it does not need Euclid metric information, but rather only correspondences of several points between the images. For demonstrating the effectiveness of the proposed projective grid definition, we modify the voxel coloring algorithm for the projective voxel scheme. The quality of the virtual view images re-synthesized from the projective shape demonstrates the effectiveness of our proposed scheme for projective reconstruction from a large number of images.",
"title": ""
},
{
"docid": "632a0aa55a7a7a024402de6aa507d36f",
"text": "Emotionally Focused Therapy for Couples (EFT) is a brief evidence-based couple therapy based in attachment theory. Since the development of EFT, efficacy and effectiveness research has accumulated to address a range of couple concerns. EFT meets or exceeds the guidelines for classification as an evidence-based couple therapy outlined for couple and family research. Furthermore, EFT researchers have examined the process of change and predictors of outcome in EFT. Future research in EFT will continue to examine the process of change in EFT and test the efficacy and effectiveness of EFT in new applications and for couples of diverse backgrounds and concerns.",
"title": ""
},
{
"docid": "8aaa4ab4879ad55f43114cf8a0bd3855",
"text": "Photo-based activity on social networking sites has recently been identified as contributing to body image concerns. The present study aimed to investigate experimentally the effect of number of likes accompanying Instagram images on women's own body dissatisfaction. Participants were 220 female undergraduate students who were randomly assigned to view a set of thin-ideal or average images paired with a low or high number of likes presented in an Instagram frame. Results showed that exposure to thin-ideal images led to greater body and facial dissatisfaction than average images. While the number of likes had no effect on body dissatisfaction or appearance comparison, it had a positive effect on facial dissatisfaction. These effects were not moderated by Instagram involvement, but greater investment in Instagram likes was associated with more appearance comparison and facial dissatisfaction. The results illustrate how the uniquely social interactional aspects of social media (e.g., likes) can affect body image.",
"title": ""
},
{
"docid": "7251ff8a3ff1adbf13ddd62ab9a9c9c3",
"text": "The performance of a brushless motor which has a surface-mounted magnet rotor and a trapezoidal back-emf waveform when it is operated in BLDC and BLAC modes is evaluated, in both constant torque and flux-weakening regions, assuming the same torque, the same peak current, and the same rms current. It is shown that although the motor has an essentially trapezoidal back-emf waveform, the output power and torque when operated in the BLAC mode in the flux-weakening region are significantly higher than that can be achieved when operated in the BLDC mode due to the influence of the winding inductance and back-emf harmonics",
"title": ""
},
{
"docid": "484f869fce642b268575d55cb47ebe36",
"text": "Discourse coherence is strongly associated with text quality, making it important to natural language generation and understanding. Yet existing models of coherence focus on measuring individual aspects of coherence (lexical overlap, rhetorical structure, entity centering) in narrow domains. In this paper, we describe domainindependent neural models of discourse coherence that are capable of measuring multiple aspects of coherence in existing sentences and can maintain coherence while generating new sentences. We study both discriminative models that learn to distinguish coherent from incoherent discourse, and generative models that produce coherent text, including a novel neural latentvariable Markovian generative model that captures the latent discourse dependencies between sentences in a text. Our work achieves state-of-the-art performance on multiple coherence evaluations, and marks an initial step in generating coherent texts given discourse contexts.",
"title": ""
},
{
"docid": "522efee981fb9eb26ba31d02230604fa",
"text": "The lack of an integrated medical information service model has been considered as a main issue in ensuring the continuity of healthcare from doctors, healthcare professionals to patients; the resultant unavailable, inaccurate, or unconformable healthcare information services have been recognized as main causes to the annual millions of medication errors. This paper proposes an Internet computing model aimed at providing an affordable, interoperable, ease of integration, and systematic approach to the development of a medical information service network to enable the delivery of continuity of healthcare. Web services, wireless, and advanced automatic identification technologies are fully integrated in the proposed service model. Some preliminary research results are presented.",
"title": ""
}
] | scidocsrr |
4f452ff1503a47b7a94c925f46b3c649 | Bounded Rationality, Abstraction, and Hierarchical Decision-Making: An Information-Theoretic Optimality Principle | [
{
"docid": "4e42d29a924c6e1e11456255c1f6cba0",
"text": "We present a reformulation of the stochastic optimal control problem in terms of KL divergence minimisation, not only providing a unifying perspective of previous approaches in this area, but also demonstrating that the formalism leads to novel practical approaches to the control problem. Specifically, a natural relaxation of the dual formulation gives rise to exact iterative solutions to the finite and infinite horizon stochastic optimal control problem, while direct application of Bayesian inference methods yields instances of risk sensitive control. We furthermore study corresponding formulations in the reinforcement learning setting and present model free algorithms for problems with both discrete and continuous state and action spaces. Evaluation of the proposed methods on the standard Gridworld and Cart-Pole benchmarks verifies the theoretical insights and shows that the proposed methods improve upon current approaches.",
"title": ""
},
{
"docid": "2efd26fc1e584aa5f70bdf9d24e5c2cd",
"text": "Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.",
"title": ""
}
] | [
{
"docid": "31b26778e230d2ea40f9fe8996e095ed",
"text": "The effects of beverage alcohol (ethanol) on the body are determined largely by the rate at which it and its main breakdown product, acetaldehyde, are metabolized after consumption. The main metabolic pathway for ethanol involves the enzymes alcohol dehydrogenase (ADH) and aldehyde dehydrogenase (ALDH). Seven different ADHs and three different ALDHs that metabolize ethanol have been identified. The genes encoding these enzymes exist in different variants (i.e., alleles), many of which differ by a single DNA building block (i.e., single nucleotide polymorphisms [SNPs]). Some of these SNPs result in enzymes with altered kinetic properties. For example, certain ADH1B and ADH1C variants that are commonly found in East Asian populations lead to more rapid ethanol breakdown and acetaldehyde accumulation in the body. Because acetaldehyde has harmful effects on the body, people carrying these alleles are less likely to drink and have a lower risk of alcohol dependence. Likewise, an ALDH2 variant with reduced activity results in acetaldehyde buildup and also has a protective effect against alcoholism. In addition to affecting drinking behaviors and risk for alcoholism, ADH and ALDH alleles impact the risk for esophageal cancer.",
"title": ""
},
{
"docid": "d48053467e72a6a550de8cb66b005475",
"text": "In Slavic languages, verbal prefixes can be applied to perfective verbs deriving new perfective verbs, and multiple prefixes can occur in a single verb. This well-known type of data has not yet been adequately analyzed within current approaches to the semantics of Slavic verbal prefixes and aspect. The notion “aspect” covers “grammatical aspect”, or “viewpoint aspect” (see Smith 1991/1997), best characterized by the formal perfective vs. imperfective distinction, which is often expressed by inflectional morphology (as in Romance languages), and corresponds to propositional operators at the semantic level of representation. It also covers “lexical aspect”, “situation aspect” (see Smith ibid.), “eventuality types” (Bach 1981, 1986), or “Aktionsart” (as in Hinrichs 1985; Van Valin 1990; Dowty 1999; Paslawska and von Stechow 2002, for example), which regards the telic vs. atelic distinction and its Vendlerian subcategories (activities, accomplishments, achievements and states). It is lexicalized by verbs, encoded by derivational morphology, or by a variety of elements at the level of syntax, among which the direct object argument has a prominent role, however, the subject (external) argument is arguably a contributing factor, as well (see Dowty 1991, for example). These two “aspect” categories are orthogonal to each other and interact in systematic ways (see also Filip 1992, 1997, 1993/99; de Swart 1998; Paslawska and von Stechow 2002; Rothstein 2003, for example). Multiple prefixation and application of verbal prefixes to perfective bases is excluded by the common view of Slavic prefixes, according to which all perfective verbs are telic and prefixes constitute a uniform class of “perfective” markers that that are applied to imperfective verbs that are atelic and derive perfective verbs that are telic. Moreover, this view of perfective verbs and prefixes predicts rampant violations of the intuitive “one delimitation per event” constraint, whenever a prefix is applied to a perfective verb. This intuitive constraint is motivated by the observation that an event expressed within a single predication can be delimited only once: cp. *run a mile for ten minutes, *wash the clothes clean white.",
"title": ""
},
{
"docid": "cf5c6b5593ef5f0fd54c4fc7951e2460",
"text": "Aiming at inferring 3D shapes from 2D images, 3D shape reconstruction has drawn huge attention from researchers in computer vision and deep learning communities. However, it is not practical to assume that 2D input images and their associated ground truth 3D shapes are always available during training. In this paper, we propose a framework for semi-supervised 3D reconstruction. This is realized by our introduced 2D-3D self-consistency, which aligns the predicted 3D models and the projected 2D foreground segmentation masks. Moreover, our model not only enables recovering 3D shapes with the corresponding 2D masks, camera pose information can be jointly disentangled and predicted, even such supervision is never available during training. In the experiments, we qualitatively and quantitatively demonstrate the effectiveness of our model, which performs favorably against state-of-the-art approaches in either supervised or semi-supervised settings.",
"title": ""
},
{
"docid": "1f139fff7af5a49ee0e21f61bdf5a9b8",
"text": "This paper presents a technique for word segmentation for the Urdu OCR system. Word segmentation or word tokenization is a preliminary task for Urdu language processing. Several techniques are available for word segmentation in other languages. A methodology is proposed for word segmentation in this paper which determines the boundaries of words given a sequence of ligatures, based on collocation of ligatures and words in the corpus. Using this technique, word identification rate of 96.10% is achieved, using trigram probabilities normalized over the number of ligatures and words in the sequence.",
"title": ""
},
{
"docid": "d3d6a1793ce81ba0f4f0ffce0477a0ec",
"text": "Portable Document Format (PDF) is one of the widely-accepted document format. However, it becomes one of the most attractive targets for exploitation by malware developers and vulnerability researchers. Malicious PDF files can be used in Advanced Persistent Threats (APTs) targeting individuals, governments, and financial sectors. The existing tools such as intrusion detection systems (IDSs) and antivirus packages are inefficient to mitigate this kind of attacks. This is because these techniques need regular updates with the new malicious PDF files which are increasing every day. In this paper, a new algorithm is presented for detecting malicious PDF files based on data mining techniques. The proposed algorithm consists of feature selection stage and classification stage. The feature selection stage is used to the select the optimum number of features extracted from the PDF file to achieve high detection rate and low false positive rate with small computational overhead. Experimental results show that the proposed algorithm can achieve 99.77% detection rate, 99.84% accuracy, and 0.05% false positive rate.",
"title": ""
},
{
"docid": "df7922bcf3a0ecac69b2ac283505c312",
"text": "With the growing use of distributed information networks, there is an increasing need for algorithmic and system solutions for data-driven knowledge acquisition using distributed, heterogeneous and autonomous data repositories. In many applications, practical constraints require such systems to provide support for data analysis where the data and the computational resources are available. This presents us with distributed learning problems. We precisely formulate a class of distributed learning problems; present a general strategy for transforming traditional machine learning algorithms into distributed learning algorithms; and demonstrate the application of this strategy to devise algorithms for decision tree induction (using a variety of splitting criteria) from distributed data. The resulting algorithms are provably exact in that the decision tree constructed from distributed data is identical to that obtained by the corresponding algorithm when in the batch setting. The distributed decision tree induction algorithms have been implemented as part of INDUS, an agent-based system for data-driven knowledge acquisition from heterogeneous, distributed, autonomous data sources.",
"title": ""
},
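The provable exactness claimed in the preceding passage comes from shipping sufficient statistics rather than raw records: class-conditional counts are additive across sites, so a split criterion computed from pooled counts equals the centralized one. The toy Python sketch below illustrates this for information gain; the data values and two-site layout are invented.

```python
import math
from collections import Counter

# Each "site" holds its own records of the form (feature_value, class_label).
site_a = [("sunny", "no"), ("rain", "yes"), ("sunny", "no")]
site_b = [("rain", "yes"), ("sunny", "yes"), ("rain", "no")]

def local_counts(records):
    """Sufficient statistics a site ships to the learner: joint (value, class) counts."""
    return Counter(records)

def entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def info_gain(joint):
    """Information gain of splitting on the feature, computed from pooled counts only."""
    class_counts = Counter()
    by_value = {}
    for (value, label), c in joint.items():
        class_counts[label] += c
        by_value.setdefault(value, Counter())[label] += c
    total = sum(class_counts.values())
    conditional = sum(sum(v.values()) / total * entropy(v) for v in by_value.values())
    return entropy(class_counts) - conditional

# Counts are additive across sites, so the pooled statistic is exactly the centralized one.
pooled = local_counts(site_a) + local_counts(site_b)
print(round(info_gain(pooled), 4))
```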
{
"docid": "ad2e02fd3b349b2a66ac53877b82e9bb",
"text": "This paper proposes a novel approach for the evolution of artificial creatures which moves in a 3D virtual environment based on the neuroevolution of augmenting topologies (NEAT) algorithm. The NEAT algorithm is used to evolve neural networks that observe the virtual environment and respond to it, by controlling the muscle force of the creature. The genetic algorithm is used to emerge the architecture of creature based on the distance metrics for fitness evaluation. The damaged morphologies of creature are elaborated, and a crossover algorithm is used to control it. Creatures with similar morphological traits are grouped into the same species to limit the complexity of the search space. The motion of virtual creature having 2–3 limbs is recorded at three different angles to check their performance in different types of viscous mediums. The qualitative demonstration of motion of virtual creature represents that improved swimming of virtual creatures is achieved in simulating mediums with viscous drag 1–10 arbitrary unit.",
"title": ""
},
{
"docid": "b82440fdab626e7a2f02c2dc9b7c359a",
"text": "This study formulates a two-objective model to determine the optimal liner routing, ship size, and sailing frequency for container carriers by minimizing shipping costs and inventory costs. First, shipping and inventory cost functions are formulated using an analytical method. Then, based on a trade-off between shipping costs and inventory costs, Pareto optimal solutions of the twoobjective model are determined. Not only can the optimal ship size and sailing frequency be determined for any route, but also the routing decision on whether to route containers through a hub or directly to their destination can be made in objective value space. Finally, the theoretical findings are applied to a case study, with highly reasonable results. The results show that the optimal routing, ship size, and sailing frequency with respect to each level of inventory costs and shipping costs can be determined using the proposed model. The optimal routing decision tends to be shipping the cargo through a hub as the hub charge is decreased or its efficiency improved. In addition, the proposed model not only provides a tool to analyze the trade-off between shipping costs and inventory costs, but it also provides flexibility on the decision-making for container carriers. c © 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d5a7b2c027679d016c7c1ed128e48fd8",
"text": "Figure 3: Example of phase correlation between two microphones. The peak of this function indicates the inter-channel delay. index associated with peak value of f(t). This delay estimator is computationally convenient and more robust to noise and reverberation than other approaches based on cross-correlation or adaptive ltering. In ideal conditions, the output of Equation (5) is a delta function centered on the correct delay. In real applications with a wide band signal, e.g., a speech signal, the outcome is not a perfect delta function. Rather it resembles a correlation function of a random process. The time index associated with the maximum value of the output of Equation (5) provides an estimation of the delay. The system can produce wrong answers when two or more peaks of similar amplitude are present, i.e., in highly reverber-ant conditions. The resolution in delay estimation is limited in discrete systems by the sampling frequency. In order to increase the accuracy, oversampling can be applied in the neighborhood of the peak, to achieve sub-sample precision. Fig. 3 demonstrates an example of the result of a cross-power spectrum time delay estimator. Once the relative delays associated with all considered microphone pairs are known, the source position (x s ; y s) is estimated as the point that would produce the most similar delay values to the observed ones. This optimization is performed by a downhill sim-plex algorithm 6] applied to minimize the Euclidean distance between M observed delays ^ i and the corresponding M theoretical delays i : An analysis of the impulse responses associated with all the microphones, given an acoustic source emitting at a speciic position, has shown that constructive interference phenomena occur in the presence of signiicant reverberation. In some cases, the direct wavefront happens to be weaker than a coincidence of reeections, inducing a wrong estimation of the arrival direction and leading to an incorrect result. Selecting only microphone pairs that show the highest peaks of phase correlation generally alleviates this problem. Location results obtained with this strategy show comparable performance (mean posi-Reverb. Time Average Error 10 mic pairs 4 mic pairs 0.1sec 38.4 cm 29.8 cm 0.6sec 51.3 cm 32.1 cm 1.7sec 105.0 cm 46.4 cm Table 1: Average location error using either all 10 pairs or 4 pairs of microphones. Three reverberation time conditions are considered. tion error of about 0.3 m) at reverberation times of 0.1 s and 0.6 s. …",
"title": ""
},
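The estimator described in the preceding passage is the classical cross-power spectrum phase (GCC-PHAT) delay estimator, including oversampling around the peak for sub-sample precision. A compact NumPy sketch follows; the test signals are synthetic, and the small epsilon in the whitening step is an implementation convenience rather than something from the paper.

```python
import numpy as np

def gcc_phat(x, y, fs, max_tau=None, interp=16):
    """Estimate the delay of x relative to y via the phase transform of the cross-power spectrum."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-15                # keep only the phase (PHAT weighting)
    cc = np.fft.irfft(R, n=interp * n)    # oversample around the peak for sub-sample precision
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)

fs = 16000
sig = np.random.randn(fs)
delayed = np.roll(sig, 40)                # circular 40-sample delay = 2.5 ms
print(gcc_phat(delayed, sig, fs))         # expected to be close to 0.0025 s
```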
{
"docid": "0ded64c37e44433f9822650615e0ef7a",
"text": "Transseptal catheterization is a vital component of percutaneous transvenous mitral commissurotomy. Therefore, a well-executed transseptal catheterization is the key to a safe and successful percutaneous transvenous mitral commissurotomy. Two major problems inherent in atrial septal puncture for percutaneous transvenous mitral commissurotomy are cardiac perforation and puncture of an inappropriate atrial septal site. The former may lead to serious complication of cardiac tamponade and the latter to possible difficulty in maneuvering the Inoue balloon catheter across the mitral orifice. This article details atrial septal puncture technique, including landmark selection for optimal septal puncture sites, avoidance of inappropriate puncture sites, and step-by-step description of atrial septal puncture.",
"title": ""
},
{
"docid": "27a0c382d827f920c25f7730ddbacdc0",
"text": "Some new parameters in Vivaldi Notch antennas are debated over in this paper. They can be availed for the bandwidth application amelioration. The aforementioned limiting factors comprise two parameters for the radial stub dislocation, one parameter for the stub opening angle, and one parameter for the stub’s offset angle. The aforementioned parameters are rectified by means of the optimization algorithm to accomplish a better frequency application. The results obtained in this article will eventually be collated with those of the other similar antennas. The best achieved bandwidth in this article is 17.1 GHz.",
"title": ""
},
{
"docid": "39bf990d140eb98fa7597de1b6165d49",
"text": "The Internet of Things (IoT) is expected to substantially support sustainable development of future smart cities. This article identifies the main issues that may prevent IoT from playing this crucial role, such as the heterogeneity among connected objects and the unreliable nature of associated services. To solve these issues, a cognitive management framework for IoT is proposed, in which dynamically changing real-world objects are represented in a virtualized environment, and where cognition and proximity are used to select the most relevant objects for the purpose of an application in an intelligent and autonomic way. Part of the framework is instantiated in terms of building blocks and demonstrated through a smart city scenario that horizontally spans several application domains. This preliminary proof of concept reveals the high potential that self-reconfigurable IoT can achieve in the context of smart cities.",
"title": ""
},
{
"docid": "581c4d11e59dc17e0cb6ecf5fa7bea93",
"text": "This paper describes the three methodologies used by CALCE in their winning entry for the IEEE 2012 PHM Data Challenge competition. An experimental data set from seventeen ball bearings was provided by the FEMTO-ST Institute. The data set consisted of data from six bearings for algorithm training and data from eleven bearings for testing. The authors developed prognostic algorithms based on the data from the training bearings to estimate the remaining useful life of the test bearings. Three methodologies are presented in this paper. Result accuracies of the winning methodology are presented.",
"title": ""
},
{
"docid": "d2af69233bf30376afb81b204b063c81",
"text": "Exploiting the security vulnerabilities in web browsers, web applications and firewalls is a fundamental trait of cross-site scripting (XSS) attacks. Majority of web population with basic web awareness are vulnerable and even expert web users may not notice the attack to be able to respond in time to neutralize the ill effects of attack. Due to their subtle nature, a victimized server, a compromised browser, an impersonated email or a hacked web application tends to keep this form of attacks alive even in the present times. XSS attacks severely offset the benefits offered by Internet based services thereby impacting the global internet community. This paper focuses on defense, detection and prevention mechanisms to be adopted at various network doorways to neutralize XSS attacks using open source tools.",
"title": ""
},
{
"docid": "c01e634ef86002a8b6fa2e78e3e1a32a",
"text": "In an effort to overcome the data deluge in computational biology and bioinformatics and to facilitate bioinformatics research in the era of big data, we identify some of the most influential algorithms that have been widely used in the bioinformatics community. These top data mining and machine learning algorithms cover classification, clustering, regression, graphical model-based learning, and dimensionality reduction. The goal of this study is to guide the focus of scalable computing experts in the endeavor of applying new storage and scalable computation designs to bioinformatics algorithms that merit their attention most, following the engineering maxim of “optimize the common case”.",
"title": ""
},
{
"docid": "13452d0ceb4dfd059f1b48dba6bf5468",
"text": "This paper presents an extension to the technology acceptance model (TAM) and empirically examines it in an enterprise resource planning (ERP) implementation environment. The study evaluated the impact of one belief construct (shared beliefs in the benefits of a technology) and two widely recognized technology implementation success factors (training and communication) on the perceived usefulness and perceived ease of use during technology implementation. Shared beliefs refer to the beliefs that organizational participants share with their peers and superiors on the benefits of the ERP system. Using data gathered from the implementation of an ERP system, we showed that both training and project communication influence the shared beliefs that users form about the benefits of the technology and that the shared beliefs influence the perceived usefulness and ease of use of the technology. Thus, we provided empirical and theoretical support for the use of managerial interventions, such as training and communication, to influence the acceptance of technology, since perceived usefulness and ease of use contribute to behavioral intention to use the technology. # 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "eb7990a677cd3f96a439af6620331400",
"text": "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.",
"title": ""
},
{
"docid": "9327a13308cd713bcfb3b4717eaafef0",
"text": "A review of both laboratory and field studies on the effects of setting goals when performing a task found that in 90% of the studies, specific and challenging goals lead to higher performance than easy goals, \"do your best\" goals, or no goals. Goals affect performance by directing attention, mobilizing effort, increasing persistence, and motivating strategy development. Goal setting is most likely to improve task performance when the goals are specific and sufficiently challenging, the subjects have sufficient ability (and ability differences are controlled), feedback is provided to show progress in relation to the goal, rewards such as money are given for goal attainment, the experimenter or manager is supportive, and assigned goals are accepted by the individual. No reliable individual differences have emerged in goal-setting studies, probably because the goals were typically assigned rather than self-set. Need for achievement and self-esteem may be the most promising individual difference variables.",
"title": ""
},
{
"docid": "460e8daf5dfc9e45c3ade5860aa9cc57",
"text": "Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the planner. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games, with deeper trees often outperforming shallower ones. We also present a qualitative analysis that sheds light on the trees learned by TreeQN.",
"title": ""
}
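The preceding passage describes TreeQN's central construction: apply a learned transition model recursively in an abstract state space and back up predicted rewards and values through the look-ahead tree to obtain Q-values. The Python sketch below shows only that forward tree backup, with randomly initialized (untrained) parameters and a fixed 50/50 value mixing; in the actual method every component is a neural network trained end-to-end, and the backup mixing is part of the architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS, D = 3, 8                                   # discrete actions, abstract-state dimensionality
W_trans = rng.normal(size=(N_ACTIONS, D, D)) * 0.1    # per-action transition model (untrained stand-in)
w_reward = rng.normal(size=(N_ACTIONS, D))            # per-action reward head
w_value = rng.normal(size=D)                          # state-value head
GAMMA = 0.99

def transition(state, a):
    return np.tanh(W_trans[a] @ state)    # learned dynamics applied in the abstract space

def tree_q(state, depth):
    """Recursive tree backup: Q(s, a) = r(s, a) + gamma * backup of the child subtree."""
    q = np.empty(N_ACTIONS)
    for a in range(N_ACTIONS):
        nxt = transition(state, a)
        r = w_reward[a] @ state
        if depth == 1:
            backup = w_value @ nxt        # leaf: bootstrap with the value head
        else:
            child_q = tree_q(nxt, depth - 1)
            # crude fixed mixing of the value estimate and the best child Q-value
            backup = 0.5 * (w_value @ nxt) + 0.5 * child_q.max()
        q[a] = r + GAMMA * backup
    return q

state = rng.normal(size=D)
print(tree_q(state, depth=2))             # Q-values from a depth-2 look-ahead tree
```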
] | scidocsrr |
c25968e403b102d2bcef809b6d05c7ef | Multimodal Network Embedding via Attention based Multi-view Variational Autoencoder | [
{
"docid": "17dd72e274d25a02e9a8183237092f0c",
"text": "Network representation is the basis of many applications and of extensive interest in various fields, such as information retrieval, social network analysis, and recommendation systems. Most previous methods for network representation only consider the incomplete aspects of a problem, including link structure, node information, and partial integration. The present study introduces a deep network representation model that seamlessly integrates the text information and structure of a network. The model captures highly non-linear relationships between nodes and complex features of a network by exploiting the variational autoencoder (VAE), which is a deep unsupervised generation algorithm. The representation learned with a paragraph vector model is merged with that learned with the VAE to obtain the network representation, which preserves both structure and text information. Comprehensive experiments is conducted on benchmark datasets and find that the introduced model performs better than state-of-the-art techniques.",
"title": ""
},
{
"docid": "a1ef2bce061c11a2d29536d7685a56db",
"text": "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.",
"title": ""
},
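The preceding passage describes using the question representation as a query that repeatedly attends over image regions. A bare-bones NumPy sketch of one such stacked attention pass is below; the region/question features and projection matrices are random placeholders standing in for trained CNN/LSTM outputs, so this only illustrates the data flow, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(1)
R, D = 49, 256                         # 7x7 image regions, feature dimensionality
regions = rng.normal(size=(R, D))      # CNN region features (random stand-in)
question = rng.normal(size=D)          # question embedding (random stand-in)

W_i = rng.normal(size=(D, D)) * 0.05   # projections; in the real model these are trained
W_q = rng.normal(size=(D, D)) * 0.05
w_p = rng.normal(size=D) * 0.05

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_layer(query):
    """One attention hop: score each region against the query, return the attended visual vector."""
    h = np.tanh(regions @ W_i.T + query @ W_q.T)   # (R, D)
    p = softmax(h @ w_p)                            # attention distribution over regions
    return p @ regions                              # weighted sum of region features

# Stacked (multi-hop) attention: the refined query searches the image again.
u = question
for _ in range(2):
    u = u + attention_layer(u)
print(u.shape)                          # final multimodal vector fed to the answer classifier
```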
{
"docid": "df163d94fbf0414af1dde4a9e7fe7624",
"text": "This paper introduces a web image dataset created by NUS's Lab for Media Search. The dataset includes: (1) 269,648 images and the associated tags from Flickr, with a total of 5,018 unique tags; (2) six types of low-level features extracted from these images, including 64-D color histogram, 144-D color correlogram, 73-D edge direction histogram, 128-D wavelet texture, 225-D block-wise color moments extracted over 5x5 fixed grid partitions, and 500-D bag of words based on SIFT descriptions; and (3) ground-truth for 81 concepts that can be used for evaluation. Based on this dataset, we highlight characteristics of Web image collections and identify four research issues on web image annotation and retrieval. We also provide the baseline results for web image annotation by learning from the tags using the traditional k-NN algorithm. The benchmark results indicate that it is possible to learn effective models from sufficiently large image dataset to facilitate general image retrieval.",
"title": ""
},
{
"docid": "4337f8c11a71533d38897095e5e6847a",
"text": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.1. 1 Introduction Visual Question Answering (VQA) [2, 7, 16, 17, 29] has emerged as a prominent multi-discipline research problem in both academia and industry. To correctly answer visual questions about an image, the machine needs to understand both the image and question. Recently, visual attention based models [20, 23–25] have been explored for VQA, where the attention mechanism typically produces a spatial map highlighting image regions relevant to answering the question. So far, all attention models for VQA in literature have focused on the problem of identifying “where to look” or visual attention. In this paper, we argue that the problem of identifying “which words to listen to” or question attention is equally important. Consider the questions “how many horses are in this image?” and “how many horses can you see in this image?\". They have the same meaning, essentially captured by the first three words. A machine that attends to the first three words would arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question. Motivated by this observation, in addition to reasoning about visual attention, we also address the problem of question attention. Specifically, we present a novel multi-modal attention model for VQA with the following two unique features: Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question attention, which we refer to as co-attention. Unlike previous works, which only focus on visual attention, our model has a natural symmetry between the image and question, in the sense that the image representation is used to guide the question attention and the question representation(s) are used to guide image attention. Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution neural networks are used to capture the information contained in unigrams, bigrams and trigrams. The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. ar X iv :1 60 6. 00 06 1v 3 [ cs .C V ] 2 6 O ct 2 01 6 Ques%on:\t\r What\t\r color\t\r on\t\r the stop\t\r light\t\r is\t\r lit\t\r up\t\r \t\r ? ...\t\r ... color\t\r stop\t\r light\t\r lit co-‐a7en%on color\t\r ...\t\r stop\t\r \t\r light\t\r \t\r ... What color\t\r ... the stop light light\t\r \t\r ... 
What color What\t\r color\t\r on\t\r the\t\r stop\t\r light\t\r is\t\r lit\t\r up ...\t\r ... the\t\r stop\t\r light ...\t\r ... stop Image Answer:\t\r green Figure 1: Flowchart of our proposed hierarchical co-attention model. Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features. Specifically, we convolve word representations with temporal filters of varying support, and then combine the various n-gram responses by pooling them into a single phrase level representation. At the question level, we use recurrent neural networks to encode the entire question. For each level of the question representation in this hierarchy, we construct joint question and image co-attention maps, which are then combined recursively to ultimately predict a distribution over the answers. Overall, the main contributions of our work are: • We propose a novel co-attention mechanism for VQA that jointly performs question-guided visual attention and image-guided question attention. We explore this mechanism with two strategies, parallel and alternating co-attention, which are described in Sec. 3.3; • We propose a hierarchical architecture to represent the question, and consequently construct image-question co-attention maps at 3 different levels: word level, phrase level and question level. These co-attended features are then recursively combined from word level to question level for the final answer prediction; • At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the phrase sizes whose representations are passed to the question level representation; • Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [17]. We also perform ablation studies to quantify the roles of different components in our model.",
"title": ""
}
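The co-attention mechanism sketched in the preceding passage computes attention over image regions and over question words jointly. Below is a much-simplified NumPy illustration of the parallel variant: an affinity matrix relates every word to every region, and pooling it in each direction yields the two attention distributions. The learned projections of the actual model are collapsed into a single random matrix here, so treat this purely as a shape-level sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
D, T, N = 128, 12, 49                     # feature dim, question words, image regions
Q = rng.normal(size=(D, T))               # question features, one column per word (stand-in)
V = rng.normal(size=(D, N))               # image region features (stand-in)
W_b = rng.normal(size=(D, D)) * 0.05      # single affinity projection (stand-in for trained weights)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Parallel co-attention: affinity between every question word and every image region.
C = np.tanh(Q.T @ W_b @ V)                # (T, N)
a_v = softmax(C.max(axis=0))              # attention over image regions (guided by the question)
a_q = softmax(C.max(axis=1))              # attention over question words (guided by the image)
v_att = V @ a_v                           # attended image feature
q_att = Q @ a_q                           # attended question feature
print(v_att.shape, q_att.shape)
```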
] | [
{
"docid": "7dde24346f2df846b9dbbe45cd9a99d6",
"text": "The Pemberton Happiness Index (PHI) is a recently developed integrative measure of well-being that includes components of hedonic, eudaimonic, social, and experienced well-being. The PHI has been validated in several languages, but not in Portuguese. Our aim was to cross-culturally adapt the Universal Portuguese version of the PHI and to assess its psychometric properties in a sample of the Brazilian population using online surveys.An expert committee evaluated 2 versions of the PHI previously translated into Portuguese by the original authors using a standardized form for assessment of semantic/idiomatic, cultural, and conceptual equivalence. A pretesting was conducted employing cognitive debriefing methods. In sequence, the expert committee evaluated all the documents and reached a final Universal Portuguese PHI version. For the evaluation of the psychometric properties, the data were collected using online surveys in a cross-sectional study. The study population included healthcare professionals and users of the social network site Facebook from several Brazilian geographic areas. In addition to the PHI, participants completed the Satisfaction with Life Scale (SWLS), Diener and Emmons' Positive and Negative Experience Scale (PNES), Psychological Well-being Scale (PWS), and the Subjective Happiness Scale (SHS). Internal consistency, convergent validity, known-group validity, and test-retest reliability were evaluated. Satisfaction with the previous day was correlated with the 10 items assessing experienced well-being using the Cramer V test. Additionally, a cut-off value of PHI to identify a \"happy individual\" was defined using receiver-operating characteristic (ROC) curve methodology.Data from 1035 Brazilian participants were analyzed (health professionals = 180; Facebook users = 855). Regarding reliability results, the internal consistency (Cronbach alpha = 0.890 and 0.914) and test-retest (intraclass correlation coefficient = 0.814) were both considered adequate. Most of the validity hypotheses formulated a priori (convergent and know-group) was further confirmed. The cut-off value of higher than 7 in remembered PHI was identified (AUC = 0.780, sensitivity = 69.2%, specificity = 78.2%) as the best one to identify a happy individual.We concluded that the Universal Portuguese version of the PHI is valid and reliable for use in the Brazilian population using online surveys.",
"title": ""
},
{
"docid": "eb6572344dbaf8e209388f888fba1c10",
"text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.",
"title": ""
},
{
"docid": "5897b87a82d5bc11757e33a8a46b1f21",
"text": "BACKGROUND\nProspective data from over 10 years of follow-up were used to examine neighbourhood deprivation, social fragmentation and trajectories of health.\n\n\nMETHODS\nFrom the third phase (1991-93) of the Whitehall II study of British civil servants, SF-36 health functioning was measured on up to five occasions for 7834 participants living in 2046 census wards. Multilevel linear regression models assessed the Townsend deprivation index and social fragmentation index as predictors of initial health and health trajectories.\n\n\nRESULTS\nIndependent of individual socioeconomic factors, deprivation was inversely associated with initial SF-36 physical component summary (PCS) score. Social fragmentation was not associated with PCS scores. Deprivation and social fragmentation were inversely associated with initial mental component summary (MCS) score. Neighbourhood characteristics were not associated with trajectories of PCS score or MCS score for the whole set. However, restricted analysis on longer term residents revealed that residents in deprived or socially fragmented neighbourhoods had lowest initial and smallest improvements in MCS score.\n\n\nCONCLUSIONS\nThis longitudinal study provides evidence that residence in a deprived or fragmented neighbourhood is associated with poorer mental health and that longer exposure to such neighbourhood environments has incremental effects. Associations between physical health functioning and neighbourhood characteristics were less clear. Mindful of the importance of individual socioeconomic factors, the findings warrant more detailed examination of materially and socially deprived neighbourhoods and their consequences for health.",
"title": ""
},
{
"docid": "a05d87b064ab71549d373599700cfcbf",
"text": "We provide sets of parameters for multiplicative linear congruential generators (MLCGs) of different sizes and good performance with respect to the spectral test. For ` = 8, 9, . . . , 64, 127, 128, we take as a modulus m the largest prime smaller than 2`, and provide a list of multipliers a such that the MLCG with modulus m and multiplier a has a good lattice structure in dimensions 2 to 32. We provide similar lists for power-of-two moduli m = 2`, for multiplicative and non-multiplicative LCGs.",
"title": ""
},
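As a concrete reminder of what the generators tabulated in the preceding passage look like, here is a minimal multiplicative LCG in Python. The modulus/multiplier pair below is the classic "minimal standard" generator, chosen only for familiarity; it is not one of the parameter sets recommended in the paper.

```python
# Minimal multiplicative LCG: x_{n+1} = a * x_n mod m, with m a prime just below a
# power of two. The constants below are illustrative, not taken from the paper's tables.
M = 2**31 - 1          # largest prime below 2^31 (a Mersenne prime)
A = 16807              # well-known multiplier for this modulus

def mlcg(seed, n):
    """Return n uniform variates in (0, 1) from the multiplicative LCG."""
    x = seed % M
    if x == 0:
        x = 1          # the multiplicative generator must avoid the zero state
    out = []
    for _ in range(n):
        x = (A * x) % M
        out.append(x / M)
    return out

print(mlcg(12345, 5))
```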
{
"docid": "820ae89c1ce626e52ed1ee6d61ee0aee",
"text": "Induction motor especially three phase induction motor plays vital role in the industry due to their advantages over other electrical motors. Therefore, there is a strong demand for their reliable and safe operation. If any fault and failures occur in the motor it can lead to excessive downtimes and generate great losses in terms of revenue and maintenance. Therefore, an early fault detection is needed for the protection of the motor. In the current scenario, the health monitoring of the induction motor are increasing due to its potential to reduce operating costs, enhance the reliability of operation and improve service to the customers. The health monitoring of induction motor is an emerging technology for online detection of incipient faults. The on-line health monitoring involves taking measurements on a machine while it is in operating conditions in order to detect faults with the aim of reducing both unexpected failure and maintenance costs. In the present paper, a comprehensive survey of induction machine faults, diagnostic methods and future aspects in the health monitoring of induction motor has been discussed.",
"title": ""
},
{
"docid": "9b646ef8c6054f9a4d85cf25e83d415c",
"text": "In this paper, a mobile robot with a tetrahedral shape for its basic structure is presented as a thrown robot for search and rescue robot application. The Tetrahedral Mobile Robot has its body in the center of the whole structure. The driving parts that produce the propelling force are located at each corner. As a driving wheel mechanism, we have developed the \"Omni-Ball\" with one active and two passive rotational axes, which are explained in detail. An actual prototype model has been developed to illustrate the concept and to perform preliminary motion experiments, through which the basic performance of the Tetrahedral Mobile Robot was confirmed",
"title": ""
},
{
"docid": "a3391be7ac84ceb8c024c1d32eb83c6c",
"text": "This paper presents a new approach to find energy-efficient motion plans for mobile robots. Motion planning has two goals: finding the routes and determining the velocities. We model the relationship of motors' speed and their power consumption with polynomials. The velocity of the robot is related to its wheels' velocities by performing a linear transformation. We compare the energy consumption of different routes at different velocities and consider the energy consumed for acceleration and turns. We use experiment-validated simulation to demonstrate up to 51% energy savings for searching an open area.",
"title": ""
},
{
"docid": "66243ce54120d2c61525ad71d501a724",
"text": "Ameloblastic fibrosarcoma is a mixed odontogenic tumor that can originate de novo or from a transformed ameloblastic fibroma. This report describes the case of a 34-year-old woman with a recurrent, rapidly growing, debilitating lesion. This lesion appeared as a large painful mandibular swelling that filled the oral cavity and extended to the infratemporal fossa. The lesion had been previously misdiagnosed as ameloblastoma. Twenty months after final surgery and postoperative chemotherapy, lung metastases were diagnosed after she reported respiratory signs and symptoms.",
"title": ""
},
{
"docid": "061c8e8e9d6a360c36158193afee5276",
"text": "Distribution transformers are one of the most important equipment in power network. Because of, the large number of transformers distributed over a wide area in power electric systems, the data acquisition and condition monitoring is a important issue. This paper presents design and implementation of a mobile embedded system and a novel software to monitor and diagnose condition of transformers, by record key operation indictors of a distribution transformer like load currents, transformer oil, ambient temperatures and voltage of three phases. The proposed on-line monitoring system integrates a Global Service Mobile (GSM) Modem, with stand alone single chip microcontroller and sensor packages. Data of operation condition of transformer receives in form of SMS (Short Message Service) and will be save in computer server. Using the suggested online monitoring system will help utility operators to keep transformers in service for longer of time.",
"title": ""
},
{
"docid": "a4b1a04647b8d4f8a9cc837304c7cbae",
"text": "The human brain automatically attempts to interpret the physical visual inputs from our eyes in terms of plausible motion of the viewpoint and/or of the observed object or scene [Ellis 1938; Graham 1965; Giese and Poggio 2003]. In the physical world, the rules that define plausible motion are set by temporal coherence, parallax, and perspective projection. Our brain, however, refuses to feel constrained by the unrelenting laws of physics in what it deems plausible motion. Image metamorphosis experiments, in which unnatural, impossible in-between images are interpolated, demonstrate that under certain circumstances, we willingly accept chimeric images as plausible transition stages between images of actual, known objects [Beier and Neely 1992; Seitz and Dyer 1996]. Or think of cartoon animations which for the longest time were hand-drawn pieces of art that didn't need to succumb to physical correctness. The goal of our work is to exploit this freedom of perception for space-time interpolation, i.e., to generate transitions between still images that our brain accepts as plausible motion in a moving 3D world.",
"title": ""
},
{
"docid": "36a9f1c016d0e2540460e28c4c846e9a",
"text": "Nowadays PDF documents have become a dominating knowledge repository for both the academia and industry largely because they are very convenient to print and exchange. However, the methods of automated structure information extraction are yet to be fully explored and the lack of effective methods hinders the information reuse of the PDF documents. To enhance the usability for PDF-formatted electronic books, we propose a novel computational framework to analyze the underlying physical structure and logical structure. The analysis is conducted at both page level and document level, including global typographies, reading order, logical elements, chapter/section hierarchy and metadata. Moreover, two characteristics of PDF-based books, i.e., style consistency in the whole book document and natural rendering order of PDF files, are fully exploited in this paper to improve the conventional image-based structure extraction methods. This paper employs the bipartite graph as a common structure for modeling various tasks, including reading order recovery, figure and caption association, and metadata extraction. Based on the graph representation, the optimal matching (OM) method is utilized to find the global optima in those tasks. Extensive benchmarking using real-world data validates the high efficiency and discrimination ability of the proposed method.",
"title": ""
},
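One of the tasks in the preceding passage, figure-and-caption association, is modeled as a bipartite graph solved with optimal matching (OM). The sketch below shows that formulation with SciPy's Hungarian-algorithm solver; the page coordinates and the plain Euclidean cost are fabricated for illustration, whereas the paper derives its edge weights from richer layout features.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical page coordinates (x, y) of detected figure blocks and caption blocks.
figures = np.array([[100, 200], [300, 620], [420, 150]], dtype=float)
captions = np.array([[110, 260], [415, 215], [305, 690]], dtype=float)

# Bipartite graph: edge weight = spatial distance between a figure and a caption.
cost = np.linalg.norm(figures[:, None, :] - captions[None, :, :], axis=-1)

# Optimal matching (Hungarian algorithm) gives a globally best one-to-one assignment
# instead of greedily attaching each figure to its nearest caption.
rows, cols = linear_sum_assignment(cost)
for f, c in zip(rows, cols):
    print(f"figure {f} -> caption {c} (distance {cost[f, c]:.1f})")
```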
{
"docid": "7b55b39902d40295ea14088dddaf77e0",
"text": "Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.",
"title": ""
},
{
"docid": "e22a3cd1887d905fffad0f9d14132ed6",
"text": "Relativistic electron beam generation studies have been carried out in LIA-400 system through explosive electron emission for various cathode materials. This paper presents the emission properties of different cathode materials at peak diode voltages varying from 10 to 220 kV and at peak current levels from 0.5 to 2.2 kA in a single pulse duration of 160-180 ns. The cathode materials used are graphite, stainless steel, and red polymer velvet. The perveance data calculated from experimental waveforms are compared with 1-D Child Langmuir formula to obtain the cathode plasma expansion velocity for various cathode materials. Various diode parameters are subject to shot to shot variation analysis. Velvet cathode proves to be the best electron emitter because of its lower plasma expansion velocity and least shot to shot variability.",
"title": ""
},
{
"docid": "9ebf703bcf5004a74189638514b20313",
"text": "In many real-world tasks, there are abundant unlabeled examples but the number of labeled training examples is limited, because labeling the examples requires human efforts and expertise. So, semi-supervised learning which tries to exploit unlabeled examples to improve learning performance has become a hot topic. Disagreement-based semi-supervised learning is an interesting paradigm, where multiple learners are trained for the task and the disagreements among the learners are exploited during the semi-supervised learning process. This survey article provides an introduction to research advances in this paradigm.",
"title": ""
},
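Co-training is the best-known instance of the disagreement-based paradigm surveyed in the preceding passage: two learners trained on different feature views label confident unlabeled examples for each other. The sketch below is a toy version with an arbitrary split of synthetic features into two "views" and a fixed confidence threshold; real applications need genuinely redundant views.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Two "views" = two disjoint halves of a synthetic feature vector (an assumption for illustration).
X, y = make_classification(n_samples=600, n_features=20, n_informative=10, random_state=0)
labeled = np.arange(50)
unlabeled = np.arange(50, 600)
views = (slice(0, 10), slice(10, 20))
models = [LogisticRegression(max_iter=1000), LogisticRegression(max_iter=1000)]

y_work = y.astype(float).copy()
y_work[unlabeled] = -1                    # hide the labels of the unlabeled pool

for _ in range(5):                        # a few co-training rounds
    known = np.where(y_work != -1)[0]
    for m, v in zip(models, views):
        m.fit(X[known][:, v], y_work[known])
    pool = np.where(y_work == -1)[0]
    if len(pool) == 0:
        break
    for m, v in zip(models, views):
        proba = m.predict_proba(X[pool][:, v])
        confident = pool[proba.max(axis=1) > 0.95]
        # a learner labels its most confident pool examples, enlarging the training set of its peer
        y_work[confident] = m.predict(X[confident][:, v])

print("pseudo-labelled examples:", int((y_work != -1).sum()) - len(labeled))
```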
{
"docid": "a3148ce66c9cd871df7f3ec008d7666c",
"text": "This priming study investigates the role of conceptual structure during language production, probing whether English speakers are sensitive to the structure of the event encoded by a prime sentence. In two experiments, participants read prime sentences aloud before describing motion events. Primes differed in 1) syntactic frame, 2) degree of lexical and conceptual overlap with target events, and 3) distribution of event components within frames. Results demonstrate that conceptual overlap between primes and targets led to priming of (a) the information that speakers chose to include in their descriptions of target events, (b) the way that information was mapped to linguistic elements, and (c) the syntactic structures that were built to communicate that information. When there was no conceptual overlap between primes and targets, priming was not successful. We conclude that conceptual structure is a level of representation activated during priming, and that it has implications for both Message Planning and Linguistic Formulation.",
"title": ""
},
{
"docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c",
"text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.",
"title": ""
},
{
"docid": "7d43cf2e0fcc795f6af4bdbcfb56d13e",
"text": "Vehicular Ad hoc Networks is a special kind of mobile ad hoc network to provide communication among nearby vehicles and between vehicles and nearby fixed equipments. VANETs are mainly used for improving efficiency and safety of (future) transportation. There are chances of a number of possible attacks in VANET due to open nature of wireless medium. In this paper, we have classified these security attacks and logically organized/represented in a more lucid manner based on the level of effect of a particular security attack on intelligent vehicular traffic. Also, an effective solution is proposed for DOS based attacks which use the redundancy elimination mechanism consists of rate decreasing algorithm and state transition mechanism as its components. This solution basically adds a level of security to its already existing solutions of using various alternative options like channel-switching, frequency-hopping, communication technology switching and multiple-radio transceivers to counter affect the DOS attacks. Proposed scheme enhances the security in VANETs without using any cryptographic scheme.",
"title": ""
},
{
"docid": "9d04b10ebe8a65777aacf20fe37b55cb",
"text": "Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advancement in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs, and their usefulness in Pharmacology and Bioinformatics are presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need of neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons, as DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. The Deep Artificial Neuron-Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods.",
"title": ""
},
{
"docid": "06c1398ba85aa22bf796f3033c1b2d90",
"text": "Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present SEE, a step towards semi-supervised neural networks for scene text detection and recognition, that can be optimized end-to-end. Most existing works consist of multiple deep neural networks and several pre-processing steps. In contrast to this, we propose to use a single deep neural network, that learns to detect and recognize text from natural images, in a semi-supervised way. SEE is a network that integrates and jointly learns a spatial transformer network, which can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We introduce the idea behind our novel approach and show its feasibility, by performing a range of experiments on standard benchmark datasets, where we achieve competitive results.",
"title": ""
}
] | scidocsrr |
f0949771ac6d5ad74ddfdb859ab79076 | Evaluating Student Satisfaction with Blended Learning in a Gender-Segregated Environment | [
{
"docid": "6ccfe86f2a07dc01f87907855f6cb337",
"text": "H istorically, retention of distance learners has been problematic with dropout rates disproportionably high compared to traditional course settings (Richards & Ridley, 1997; Wetzel, Radtke, & Stern, 1994). Dropout rates of 30 to 50% have been common (Moore & Kearsley, 1996). Students may experience feelings of isolation in distance courses compared to prior faceto-face educational experiences (Shaw & Polovina, 1999). If the distance courses feature limited contact with instructors and fellow students, the result of this isolation can be unfinished courses or degrees (Keegan, 1990). Student satisfaction in traditional learning environments has been overlooked in the past (Astin, 1993; DeBourgh, 1999; Navarro & Shoemaker, 2000). Student satisfaction has also not been given the proper attention in distance learning environments (Biner, Dean, & Mellinger, 1994). Richards and Ridley (1997) suggested further research is necessary to study factors affecting student enrollment and satisfaction. Prior studies in classroom-based courses have shown there is a high correlation between student satisfaction and retention (Astin, 1993; Edwards & Waters, 1982). This high correlation has also been found in studies in which distance learners were the target population (Bailey, Bauman, & Lata, 1998). The purpose of this study was to identify factors influencing student satisfaction in online courses, and to create and validate an instrument to measure student satisfaction in online courses.",
"title": ""
}
] | [
{
"docid": "8850b66d131088dbf99430d2c76f5bca",
"text": "The richness of visual details in most computer graphics images nowadays is largely due to the extensive use of texture mapping techniques. Texture mapping is the main tool in computer graphics to integrate a given shape to a given pattern. Despite its power it has problems and limitations. Current solutions cannot handle complex shapes properly. The de nition of the mapping function and problems like distortions can turn the process into a very cumbersome one for the application programmer and consequently for the nal user. An associated problem is the synthesis of patterns which are used as texture. The available options are usually limited to scanning in real pictures. This document is a PhD proposal to investigate techniques to integrate complex shapes and patterns which will not only overcome problems usually associated with texture mapping but also give us more control and make less ad hoc the task of combining shape and pattern. We break the problem into three parts: modeling of patterns, modeling of shape and integration. The integration step will use common information to drive both the modeling of patterns and shape in an integrated manner. Our approach is inspired by observations on how these processes happen in real life, where there is no pattern without a shape associated with it. The proposed solutions will hopefully extent the generality, applicability and exibility of existing integration methods in computer graphics. iii Table of",
"title": ""
},
{
"docid": "b0ea2ca170a8d0bcf4bd5dc8311c6201",
"text": "A cascade of sigma-delta modulator stages that employ a feedforward architecture to reduce the signal ranges required at the integrator inputs and outputs has been used to implement a broadband, high-resolution oversampling CMOS analog-to-digital converter capable of operating from low-supply voltages. An experimental prototype of the proposed architecture has been integrated in a 0.25-/spl mu/m CMOS technology and operates from an analog supply of only 1.2 V. At a sampling rate of 40 MSamples/sec, it achieves a dynamic range of 96 dB for a 1.25-MHz signal bandwidth. The analog power dissipation is 44 mW.",
"title": ""
},
{
"docid": "2e812c0a44832721fcbd7272f9f6a465",
"text": "Previous research has shown that people differ in their implicit theories about the essential characteristics of intelligence and emotions. Some people believe these characteristics to be predetermined and immutable (entity theorists), whereas others believe that these characteristics can be changed through learning and behavior training (incremental theorists). The present study provides evidence that in healthy adults (N = 688), implicit beliefs about emotions and emotional intelligence (EI) may influence performance on the ability-based Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Adults in our sample with incremental theories about emotions and EI scored higher on the MSCEIT than entity theorists, with implicit theories about EI showing a stronger relationship to scores than theories about emotions. Although our participants perceived both emotion and EI as malleable, they viewed emotions as more malleable than EI. Women and young adults in general were more likely to be incremental theorists than men and older adults. Furthermore, we found that emotion and EI theories mediated the relationship of gender and age with ability EI. Our findings suggest that people's implicit theories about EI may influence their emotional abilities, which may have important consequences for personal and professional EI training.",
"title": ""
},
{
"docid": "2cb1c713b8e75e7f2e38be90c1b5a9e6",
"text": "Frequent action video game players often outperform non-gamers on measures of perception and cognition, and some studies find that video game practice enhances those abilities. The possibility that video game training transfers broadly to other aspects of cognition is exciting because training on one task rarely improves performance on others. At first glance, the cumulative evidence suggests a strong relationship between gaming experience and other cognitive abilities, but methodological shortcomings call that conclusion into question. We discuss these pitfalls, identify how existing studies succeed or fail in overcoming them, and provide guidelines for more definitive tests of the effects of gaming on cognition.",
"title": ""
},
{
"docid": "2a34800bc275f062f820c0eb4597d297",
"text": "Construction sites are dynamic and complicated systems. The movement and interaction of people, goods and energy make construction safety management extremely difficult. Due to the ever-increasing amount of information, traditional construction safety management has operated under difficult circumstances. As an effective way to collect, identify and process information, sensor-based technology is deemed to provide new generation of methods for advancing construction safety management. It makes the real-time construction safety management with high efficiency and accuracy a reality and provides a solid foundation for facilitating its modernization, and informatization. Nowadays, various sensor-based technologies have been adopted for construction safety management, including locating sensor-based technology, vision-based sensing and wireless sensor networks. This paper provides a systematic and comprehensive review of previous studies in this field to acknowledge useful findings, identify the research gaps and point out future research directions.",
"title": ""
},
{
"docid": "6974bf94292b51fc4efd699c28c90003",
"text": "We just released an Open Source receiver that is able to decode IEEE 802.11a/g/p Orthogonal Frequency Division Multiplexing (OFDM) frames in software. This is the first Software Defined Radio (SDR) based OFDM receiver supporting channel bandwidths up to 20MHz that is not relying on additional FPGA code. Our receiver comprises all layers from the physical up to decoding the MAC packet and extracting the payload of IEEE 802.11a/g/p frames. In our demonstration, visitors can interact live with the receiver while it is decoding frames that are sent over the air. The impact of moving the antennas and changing the settings are displayed live in time and frequency domain. Furthermore, the decoded frames are fed to Wireshark where the WiFi traffic can be further investigated. It is possible to access and visualize the data in every decoding step from the raw samples, the autocorrelation used for frame detection, the subcarriers before and after equalization, up to the decoded MAC packets. The receiver is completely Open Source and represents one step towards experimental research with SDR.",
"title": ""
},
{
"docid": "187595fb12a5ca3bd665ffbbc9f47465",
"text": "In order to acquire a lexicon, young children must segment speech into words, even though most words are unfamiliar to them. This is a non-trivial task because speech lacks any acoustic analog of the blank spaces between printed words. Two sources of information that might be useful for this task are distributional regularity and phonotactic constraints. Informally, distributional regularity refers to the intuition that sound sequences that occur frequently and in a variety of contexts are better candidates for the lexicon than those that occur rarely or in few contexts. We express that intuition formally by a class of functions called DR functions. We then put forth three hypotheses: First, that children segment using DR functions. Second, that they exploit phonotactic constraints on the possible pronunciations of words in their language. Specifically, they exploit both the requirement that every word must have a vowel and the constraints that languages impose on word-initial and word-final consonant clusters. Third, that children learn which word-boundary clusters are permitted in their language by assuming that all permissible word-boundary clusters will eventually occur at utterance boundaries. Using computational simulation, we investigate the effectiveness of these strategies for segmenting broad phonetic transcripts of child-directed English. The results show that DR functions and phonotactic constraints can be used to significantly improve segmentation. Further, the contributions of DR functions and phonotactic constraints are largely independent, so using both yields better segmentation than using either one alone. Finally, learning the permissible word-boundary clusters from utterance boundaries does not degrade segmentation performance.",
"title": ""
},
{
"docid": "a74ccbf1f9280806a3f21f7ce468a4c7",
"text": "The professional norms of good journalism include in particular the following: truthfulness, objectivity, neutrality and detachment. For Public Relations these norms are at best irrelevant. The only thing that matters is success. And this success is measured in terms ofachieving specific communication aims which are \"externally defined by a client, host organization or particular groups ofstakeholders\" (Hanitzsch, 2007, p. 2). Typical aims are, e.g., to convince the public of the attractiveness of a product, of the justice of one's own political goals or also of the wrongfulness of a political opponent.",
"title": ""
},
{
"docid": "c023633ca0fe1cfc78b1d579d1ae157b",
"text": "A model is proposed that specifies the conditions under which individuals will become internally motivated to perform effectively on their jobs. The model focuses on the interaction among three classes of variables: (a) the psychological states of employees that must be present for internally motivated work behavior to develop; (b) the characteristics of jobs that can create these psychological states; and (c) the attributes of individuals that determine how positively a person will respond to a complex and challenging job. The model was tested for 658 employees who work on 62 different jobs in seven organizations, and results support its validity. A number of special features of the model are discussed (including its use as a basis for the diagnosis of jobs and the evaluation of job redesign projects), and the model is compared to other theories of job design.",
"title": ""
},
{
"docid": "69e0179971396fcaf09c9507735a8d5b",
"text": "In this paper, we describe a statistical approach to both an articulatory-to-acoustic mapping and an acoustic-to-articulatory inversion mapping without using phonetic information. The joint probability density of an articulatory parameter and an acoustic parameter is modeled using a Gaussian mixture model (GMM) based on a parallel acoustic-articulatory speech database. We apply the GMM-based mapping using the minimum mean-square error (MMSE) criterion, which has been proposed for voice conversion, to the two mappings. Moreover, to improve the mapping performance, we apply maximum likelihood estimation (MLE) to the GMM-based mapping method. The determination of a target parameter trajectory having appropriate static and dynamic properties is obtained by imposing an explicit relationship between static and dynamic features in the MLE-based mapping. Experimental results demonstrate that the MLE-based mapping with dynamic features can significantly improve the mapping performance compared with the MMSE-based mapping in both the articulatory-to-acoustic mapping and the inversion mapping.",
"title": ""
},
{
"docid": "8e44d0e60c6460a07d66ba9a90741b86",
"text": "Although graph embedding has been a powerful tool for modeling data intrinsic structures, simply employing all features for data structure discovery may result in noise amplification. This is particularly severe for high dimensional data with small samples. To meet this challenge, this paper proposes a novel efficient framework to perform feature selection for graph embedding, in which a category of graph embedding methods is cast as a least squares regression problem. In this framework, a binary feature selector is introduced to naturally handle the feature cardinality in the least squares formulation. The resultant integral programming problem is then relaxed into a convex Quadratically Constrained Quadratic Program (QCQP) learning problem, which can be efficiently solved via a sequence of accelerated proximal gradient (APG) methods. Since each APG optimization is w.r.t. only a subset of features, the proposed method is fast and memory efficient. The proposed framework is applied to several graph embedding learning problems, including supervised, unsupervised, and semi-supervised graph embedding. Experimental results on several high dimensional data demonstrated that the proposed method outperformed the considered state-of-the-art methods.",
"title": ""
},
{
"docid": "a49abd0b1c03e39c83d9809fc344ba93",
"text": "Controller Area Network (CAN) is the leading serial bus system for embedded control. More than two billion CAN nodes have been sold since the protocol's development in the early 1980s. CAN is a mainstream network and was internationally standardized (ISO 11898–1) in 1993. This paper describes an approach to implementing security services on top of a higher level Controller Area Network (CAN) protocol, in particular, CANopen. Since the CAN network is an open, unsecured network, every node has access to all data on the bus. A system which produces and consumes sensitive data is not well suited for this environment. Therefore, a general-purpose security solution is needed which will allow secure nodes access to the basic security services such as authentication, integrity, and confidentiality.",
"title": ""
},
{
"docid": "18b7dadfec8b02624b6adeb2a65d7223",
"text": "This paper provides a brief introduction to recent work in st atistical parsing and its applications. We highlight succes ses to date, remaining challenges, and promising future work.",
"title": ""
},
{
"docid": "570eca9884edb7e4a03ed95763be20aa",
"text": "Gene expression is a fundamentally stochastic process, with randomness in transcription and translation leading to cell-to-cell variations in mRNA and protein levels. This variation appears in organisms ranging from microbes to metazoans, and its characteristics depend both on the biophysical parameters governing gene expression and on gene network structure. Stochastic gene expression has important consequences for cellular function, being beneficial in some contexts and harmful in others. These situations include the stress response, metabolism, development, the cell cycle, circadian rhythms, and aging.",
"title": ""
},
{
"docid": "1bdb24fb4c85b3aaf8a8e5d71328a920",
"text": "BACKGROUND\nHigh-grade intraepithelial neoplasia is known to progress to invasive squamous-cell carcinoma of the anus. There are limited reports on the rate of progression from high-grade intraepithelial neoplasia to anal cancer in HIV-positive men who have sex with men.\n\n\nOBJECTIVES\nThe purpose of this study was to describe in HIV-positive men who have sex with men with perianal high-grade intraepithelial neoplasia the rate of progression to anal cancer and the factors associated with that progression.\n\n\nDESIGN\nThis was a prospective cohort study.\n\n\nSETTINGS\nThe study was conducted at an outpatient clinic at a tertiary care center in Toronto.\n\n\nPATIENTS\nThirty-eight patients with perianal high-grade anal intraepithelial neoplasia were identified among 550 HIV-positive men who have sex with men.\n\n\nINTERVENTION\nAll of the patients had high-resolution anoscopy for symptoms, screening, or surveillance with follow-up monitoring/treatment.\n\n\nMAIN OUTCOME MEASURES\nWe measured the incidence of anal cancer per 100 person-years of follow-up.\n\n\nRESULTS\nSeven (of 38) patients (18.4%) with perianal high-grade intraepithelial neoplasia developed anal cancer. The rate of progression was 6.9 (95% CI, 2.8-14.2) cases of anal cancer per 100 person-years of follow-up. A diagnosis of AIDS, previously treated anal cancer, and loss of integrity of the lesion were associated with progression. Anal bleeding was more than twice as common in patients who progressed to anal cancer.\n\n\nLIMITATIONS\nThere was the potential for selection bias and patients were offered treatment, which may have affected incidence estimates.\n\n\nCONCLUSIONS\nHIV-positive men who have sex with men should be monitored for perianal high-grade intraepithelial neoplasia. Those with high-risk features for the development of anal cancer may need more aggressive therapy.",
"title": ""
},
{
"docid": "2ccae5b48fc5ac10f948b79fc4fb6ff3",
"text": "Hierarchical attention networks have recently achieved remarkable performance for document classification in a given language. However, when multilingual document collections are considered, training such models separately for each language entails linear parameter growth and lack of cross-language transfer. Learning a single multilingual model with fewer parameters is therefore a challenging but potentially beneficial objective. To this end, we propose multilingual hierarchical attention networks for learning document structures, with shared encoders and/or shared attention mechanisms across languages, using multi-task learning and an aligned semantic space as input. We evaluate the proposed models on multilingual document classification with disjoint label sets, on a large dataset which we provide, with 600k news documents in 8 languages, and 5k labels. The multilingual models outperform monolingual ones in low-resource as well as full-resource settings, and use fewer parameters, thus confirming their computational efficiency and the utility of cross-language transfer.",
"title": ""
},
{
"docid": "56f7c98c85eeb519f80966db3ac26dc6",
"text": "Automatic analysis of human facial expression is a challenging problem with many applications. Most of the existing automated systems for facial expression analysis attempt to recognize a few prototypic emotional expressions, such as anger and happiness. Instead of representing another approach to machine analysis of prototypic facial expressions of emotion, the method presented in this paper attempts to handle a large range of human facial behavior by recognizing facial muscle actions that produce expressions. Virtually all of the existing vision systems for facial muscle action detection deal only with frontal-view face images and cannot handle temporal dynamics of facial actions. In this paper, we present a system for automatic recognition of facial action units (AUs) and their temporal models from long, profile-view face image sequences. We exploit particle filtering to track 15 facial points in an input face-profile sequence, and we introduce facial-action-dynamics recognition from continuous video input using temporal rules. The algorithm performs both automatic segmentation of an input video into facial expressions pictured and recognition of temporal segments (i.e., onset, apex, offset) of 27 AUs occurring alone or in a combination in the input face-profile video. A recognition rate of 87% is achieved.",
"title": ""
},
{
"docid": "e9f6b48b367b4a182ce7fb42cbb59b79",
"text": "We advocate the use of implicit fields for learning generative models of shapes and introduce an implicit field decoder for shape generation, aimed at improving the visual quality of the generated shapes. An implicit field assigns a value to each point in 3D space, so that a shape can be extracted as an iso-surface. Our implicit field decoder is trained to perform this assignment by means of a binary classifier. Specifically, it takes a point coordinate, along with a feature vector encoding a shape, and outputs a value which indicates whether the point is outside the shape or not. By replacing conventional decoders by our decoder for representation learning and generative modeling of shapes, we demonstrate superior results for tasks such as shape autoencoding, generation, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.",
"title": ""
},
{
"docid": "850a7daa56011e6c53b5f2f3e33d4c49",
"text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in the real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergencehardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. In fact, combination of the three primary constraint functions with different parameters can lead to construct a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameter specifying the level of each primary difficulty type respectively. Furthermore, the number of objectives in this toolkit are able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs MOEA/D-CDP and NSGA-II-CDP are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. The experiment results demonstrate that none of them can solve these problems efficiently, which stimulate us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.",
"title": ""
},
{
"docid": "83041d4927d8bff8acd2524441dbd227",
"text": "In this paper, we introduce a novel stereo-monocular fusion approach to on-road localization and tracking of vehicles. Utilizing a calibrated stereo-vision rig, the proposed approach combines monocular detection with stereo-vision for on-road vehicle localization and tracking for driver assistance. The system initially acquires synchronized monocular frames and calculates depth maps from the stereo rig. The system then detects vehicles in the image plane using an active learning-based monocular vision approach. Using the image coordinates of detected vehicles, the system then localizes the vehicles in real-world coordinates using the calculated depth map. The vehicles are tracked both in the image plane, and in real-world coordinates, fusing information from both the monocular and stereo modalities. Vehicles' states are estimated and tracked using Kalman filtering. Quantitative analysis of tracks is provided. The full system takes 46ms to process a single frame.",
"title": ""
}
] | scidocsrr |
86ebeb55ab6917b38de09b1cfc566ec3 | Game User Experience Evaluation | [
{
"docid": "d362b36e0c971c43856a07b7af9055f3",
"text": "s (New York: ACM), pp. 1617 – 20. MASLOW, A.H., 1954,Motivation and personality (New York: Harper). MCDONAGH, D., HEKKERT, P., VAN ERP, J. and GYI, D. (Eds), 2003, Design and Emotion: The Experience of Everyday Things (London: Taylor & Francis). MILLARD, N., HOLE, L. and CROWLE, S., 1999, Smiling through: motivation at the user interface. In Proceedings of the HCI International’99, Volume 2 (pp. 824 – 8) (Mahwah, NJ, London: Lawrence Erlbaum Associates). NORMAN, D., 2004a, Emotional design: Why we love (or hate) everyday things (New York: Basic Books). NORMAN, D., 2004b, Introduction to this special section on beauty, goodness, and usability. Human Computer Interaction, 19, pp. 311 – 18. OVERBEEKE, C.J., DJAJADININGRAT, J.P., HUMMELS, C.C.M. and WENSVEEN, S.A.G., 2002, Beauty in Usability: Forget about ease of use! In Pleasure with products: Beyond usability, W. Green and P. Jordan (Eds), pp. 9 – 18 (London: Taylor & Francis). 96 M. Hassenzahl and N. Tractinsky D ow nl oa de d by [ M as se y U ni ve rs ity L ib ra ry ] at 2 1: 34 2 3 Ju ly 2 01 1 PICARD, R., 1997, Affective computing (Cambridge, MA: MIT Press). PICARD, R. and KLEIN, J., 2002, Computers that recognise and respond to user emotion: theoretical and practical implications. Interacting with Computers, 14, pp. 141 – 69. POSTREL, V., 2002, The substance of style (New York: Harper Collins). SELIGMAN, M.E.P. and CSIKSZENTMIHALYI, M., 2000, Positive Psychology: An Introduction. American Psychologist, 55, pp. 5 – 14. SHELDON, K.M., ELLIOT, A.J., KIM, Y. and KASSER, T., 2001, What is satisfying about satisfying events? Testing 10 candidate psychological needs. Journal of Personality and Social Psychology, 80, pp. 325 – 39. SINGH, S.N. and DALAL, N.P., 1999, Web home pages as advertisements. Communications of the ACM, 42, pp. 91 – 8. SUH, E., DIENER, E. and FUJITA, F., 1996, Events and subjective well-being: Only recent events matter. Journal of Personality and Social Psychology,",
"title": ""
}
] | [
{
"docid": "a95328b8210e8c6fcd628cb48618ebee",
"text": "Separation of video clips into foreground and background components is a useful and important technique, making recognition, classification, and scene analysis more efficient. In this paper, we propose a motion-assisted matrix restoration (MAMR) model for foreground-background separation in video clips. In the proposed MAMR model, the backgrounds across frames are modeled by a low-rank matrix, while the foreground objects are modeled by a sparse matrix. To facilitate efficient foreground-background separation, a dense motion field is estimated for each frame, and mapped into a weighting matrix which indicates the likelihood that each pixel belongs to the background. Anchor frames are selected in the dense motion estimation to overcome the difficulty of detecting slowly moving objects and camouflages. In addition, we extend our model to a robust MAMR model against noise for practical applications. Evaluations on challenging datasets demonstrate that our method outperforms many other state-of-the-art methods, and is versatile for a wide range of surveillance videos.",
"title": ""
},
{
"docid": "42c890832d861ad2854fd1f56b13eb45",
"text": "We apply deep learning to the problem of discovery and detection of characteristic patterns of physiology in clinical time series data. We propose two novel modifications to standard neural net training that address challenges and exploit properties that are peculiar, if not exclusive, to medical data. First, we examine a general framework for using prior knowledge to regularize parameters in the topmost layers. This framework can leverage priors of any form, ranging from formal ontologies (e.g., ICD9 codes) to data-derived similarity. Second, we describe a scalable procedure for training a collection of neural networks of different sizes but with partially shared architectures. Both of these innovations are well-suited to medical applications, where available data are not yet Internet scale and have many sparse outputs (e.g., rare diagnoses) but which have exploitable structure (e.g., temporal order and relationships between labels). However, both techniques are sufficiently general to be applied to other problems and domains. We demonstrate the empirical efficacy of both techniques on two real-world hospital data sets and show that the resulting neural nets learn interpretable and clinically relevant features.",
"title": ""
},
{
"docid": "323c9caac8b04b1531071acf74eb189b",
"text": "Many electronic feedback systems have been proposed for writing support. However, most of these systems only aim at supporting writing to communicate instead of writing to learn, as in the case of literature review writing. Trigger questions are potentially forms of support for writing to learn, but current automatic question generation approaches focus on factual question generation for reading comprehension or vocabulary assessment. This article presents a novel Automatic Question Generation (AQG) system, called G-Asks, which generates specific trigger questions as a form of support for students' learning through writing. We conducted a large-scale case study, including 24 human supervisors and 33 research students, in an Engineering Research Method course and compared questions generated by G-Asks with human generated questions. The results indicate that G-Asks can generate questions as useful as human supervisors (‘useful’ is one of five question quality measures) while significantly outperforming Human Peer and Generic Questions in most quality measures after filtering out questions with grammatical and semantic errors. Furthermore, we identified the most frequent question types, derived from the human supervisors’ questions and discussed how the human supervisors generate such questions from the source text. General Terms: Automatic Question Generation, Natural Language Processing, Academic Writing Support",
"title": ""
},
{
"docid": "a9d948498c0ad0d99759636ea3ba4d1a",
"text": "Recently, Real Time Location Systems (RTLS) have been designed to provide location information of positioning target. The kernel of RTLS is localization algorithm, range-base localization algorithm is concerned as high precision. This paper introduces real-time range-based indoor localization algorithms, including Time of Arrival, Time Difference of Arrival, Received Signal Strength Indication, Time of Flight, and Symmetrical Double Sided Two Way Ranging. Evaluation criteria are proposed for assessing these algorithms, namely positioning accuracy, scale, cost, energy efficiency, and security. We also introduce the latest some solution, compare their Strengths and weaknesses. Finally, we give a recommendation about selecting algorithm from the viewpoint of the practical application need.",
"title": ""
},
{
"docid": "b8df66893d35839f1f4acec9c74467ad",
"text": "This paper presents the development of control circuit for single phase inverter using Atmel microcontroller. The attractiveness of this configuration is the elimination of a microcontroller to generate sinusoidal pulse width modulation (SPWM) pulses. The Atmel microcontroller is able to store all the commands to generate the necessary waveforms to control the frequency of the inverter through proper design of switching pulse. In this paper concept of the single phase inverter and it relation with the microcontroller is reviewed first. Subsequently approach and methods and dead time control are discussed. Finally simulation results and experimental results are discussed.",
"title": ""
},
{
"docid": "b2120881f15885cdb610d231f514bc9f",
"text": "In this work we do an analysis of Bitcoin’s price and volatility. Particularly, we look at Granger-causation relationships among the pairs of time series: Bitcoin price and the S&P 500, Bitcoin price and the VIX, Bitcoin realized volatility and the S&P 500, and Bitcoin realized volatility and the VIX. Additionally, we explored the relationship between Bitcoin weekly price and public enthusiasm for Blockchain, the technology behind Bitcoin, as measured by Google Trends data. we explore the Granger-causality relationships between Bitcoin weekly price and Blockchain Google Trend time series. We conclude that there exists a bidirectional Granger-causality relationship between Bitcoin realized volatility and the VIX at the 5% significance level, that we cannot reject the hypothesis that Bitcoin weekly price do not Granger-causes Blockchain trends and that we cannot reject the hypothesis that Bitcoin realized volatility do not Granger-causes S&P 500.",
"title": ""
},
{
"docid": "a2a77d422bbc8073390d6008978303a0",
"text": "As computing becomes more pervasive, the nature of applications must change accordingly. In particular, applications must become more flexible in order to respond to highly dynamic computing environments, and more autonomous, to reflect the growing ratio of applications to users and the corresponding decline in the attention a user can devote to each. That is, applications must become more context-aware. To facilitate the programming of such applications, infrastructure is required to gather, manage, and disseminate context information to applications. This paper is concerned with the development of appropriate context modeling concepts for pervasive computing, which can form the basis for such a context management infrastructure. This model overcomes problems associated with previous context models, including their lack of formality and generality, and also tackles issues such as wide variations in information quality, the existence of complex relationships amongst context information and temporal aspects of context.",
"title": ""
},
{
"docid": "309080fa2ef4f959951c08527ec1980d",
"text": "Complete scene understanding has been an aspiration of computer vision since its very early days. It has applications in autonomous navigation, aerial imaging, surveillance, human-computer interaction among several other active areas of research. While many methods since the advent of deep learning have taken performance in several scene understanding tasks to respectable levels, the tasks are far from being solved. One problem that plagues scene understanding is low-resolution. Convolutional Neural Networks that achieve impressive results on high resolution struggle when confronted with low resolution because of the inability to learn hierarchical features and weakening of signal with depth. In this thesis, we study the low resolution and suggest approaches that can overcome its consequences on three popular tasks object detection, in-the-wild face recognition, and semantic segmentation. The popular object detectors were designed for, trained, and benchmarked on datasets that have a strong bias towards medium and large sized objects. When these methods are finetuned and tested on a dataset of small objects, they perform miserably. The most successful detection algorithms follow a two-stage pipeline: the first which quickly generates regions of interest that are likely to contain the object and the second, which classifies these proposal regions. We aim to adapt both these stages for the case of small objects; the first by modifying anchor box generation based on theoretical considerations, and the second using a simple-yet-effective super-resolution step. Motivated by the success of being able to detect small objects, we study the problem of detecting and recognising objects with huge variations in resolution, in the problem of face recognition in semistructured scenes. Semi-structured scenes like social settings are more challenging than regular ones: there are several more faces of vastly different scales, there are large variations in illumination, pose and expression, and the existing datasets do not capture these variations. We address the unique challenges in this setting by (i) benchmarking popular methods for the problem of face detection, and (ii) proposing a method based on resolution-specific networks to handle different scales. Semantic segmentation is a more challenging localisation task where the goal is to assign a semantic class label to every pixel in the image. Solving such a problem is crucial for self-driving cars where we need sharper boundaries for roads, obstacles and paraphernalia. For want of a higher receptive field and a more global view of the image, CNN networks forgo resolution. This results in poor segmentation of complex boundaries, small and thin objects. We propose prefixing a super-resolution step before semantic segmentation. Through experiments, we show that a performance boost can be obtained on the popular streetview segmentation dataset, CityScapes.",
"title": ""
},
{
"docid": "87199b3e7def1db3159dc6b5989638aa",
"text": "We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose two classes of data driven models in the Deterministic Fashion Recommenders (DFR) and Stochastic Fashion Recommenders (SFR) for solving this problem. We analyze relative merits and pitfalls of these algorithms through extensive experimentation on a large-scale data set and baseline them against existing ideas from color science. We also illustrate key fashion insights learned through these experiments and show how they can be employed to design better recommendation systems. The industrial applicability of proposed models is in the context of mobile fashion shopping. Finally, we also outline a large-scale annotated data set of fashion images Fashion-136K) that can be exploited for future research in data driven visual fashion.",
"title": ""
},
{
"docid": "0879399fcb38c103a0e574d6d9010215",
"text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors selfcitations which are less useful in a citation recommendation setup. We release an online portal for citation recommendation based on our method,1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.",
"title": ""
},
{
"docid": "bf5874dc1fc1c968d7c41eb573d8d04a",
"text": "As creativity is increasingly recognised as a vital component of entrepreneurship, researchers and educators struggle to reform enterprise pedagogy. To help in this effort, we use a personality test and open-ended interviews to explore creativity between two groups of entrepreneurship masters’ students: one at a business school and one at an engineering school. The findings indicate that both groups had high creative potential, but that engineering students channelled this into practical and incremental efforts whereas the business students were more speculative and had a clearer market focus. The findings are drawn on to make some suggestions for entrepreneurship education.",
"title": ""
},
{
"docid": "f810dbe1e656fe984b4b6498c1c27bcb",
"text": "Information-maximization clustering learns a probabilistic classifier in an unsupervised manner so that mutual information between feature vectors and cluster assignments is maximized. A notable advantage of this approach is that it involves only continuous optimization of model parameters, which is substantially simpler than discrete optimization of cluster assignments. However, existing methods still involve nonconvex optimization problems, and therefore finding a good local optimal solution is not straightforward in practice. In this letter, we propose an alternative information-maximization clustering method based on a squared-loss variant of mutual information. This novel approach gives a clustering solution analytically in a computationally efficient way via kernel eigenvalue decomposition. Furthermore, we provide a practical model selection procedure that allows us to objectively optimize tuning parameters included in the kernel function. Through experiments, we demonstrate the usefulness of the proposed approach.",
"title": ""
},
{
"docid": "5eb9c6540de63be3e7c645286f263b4d",
"text": "Inductive Power Transfer (IPT) is a practical method for recharging Electric Vehicles (EVs) because is it safe, efficient and convenient. Couplers or Power Pads are the power transmitters and receivers used with such contactless charging systems. Due to improvements in power electronic components, the performance and efficiency of an IPT system is largely determined by the coupling or flux linkage between these pads. Conventional couplers are based on circular pad designs and due to their geometry have fundamentally limited magnetic flux above the pad. This results in poor coupling at any realistic spacing between the ground pad and the vehicle pickup mounted on the chassis. Performance, when added to the high tolerance to misalignment required for a practical EV charging system, necessarily results in circular pads that are large, heavy and expensive. A new pad topology termed a flux pipe is proposed in this paper that overcomes difficulties associated with conventional circular pads. Due to the magnetic structure, the topology has a significantly improved flux path making more efficient and compact IPT charging systems possible.",
"title": ""
},
{
"docid": "2f185de66075fcba898afc052c820d98",
"text": "Owing to the complexity of the photovoltaic system structure and their environment, especially under the partial shadows environment, the output characteristics of photovoltaic arrays are greatly affected. Under the partial shadows environment, power-voltage (P-V) characteristics curve is of multi-peak. This makes that it is a difficult task to track the actual maximum power point. In addition, most programs are not able to get the maximum power point under these conditions. In this paper, we study the P-V curves under both uniform illumination and partial shadows environments, and then design an algorithm to track the maximum power point and select the strategy to deal with the MPPT algorithm by DSP chips and DC-DC converters. It is simple and easy to allow solar panels to maintain the best solar energy utilization resulting in increasing output at all times. Meanwhile, in order to track local peak point and improve the tracking speed, the algorithm proposed DC-DC converters operating feed-forward control scheme. Compared with the conventional controller, this controller costs much less time. This paper focuses mainly on specific processes of the algorithm, and being the follow-up basis for implementation of control strategies.",
"title": ""
},
{
"docid": "26f2b200bf22006ab54051c9288420e8",
"text": "Emotion keyword spotting approach can detect emotion well for explicit emotional contents while it obviously cannot compare to supervised learning approaches for detecting emotional contents of particular events. In this paper, we target earthquake situations in Japan as the particular events for emotion analysis because the affected people often show their states and emotions towards the situations via social networking sites. Additionally, tracking crowd emotions in the Internet during the earthquakes can help authorities to quickly decide appropriate assistance policies without paying the cost as the traditional public surveys. Our three main contributions in this paper are: a) the appropriate choice of emotions; b) the novel proposal of two classification methods for determining the earthquake related tweets and automatically identifying the emotions in Twitter; c) tracking crowd emotions during different earthquake situations, a completely new application of emotion analysis research. Our main analysis results show that Twitter users show their Fear and Anxiety right after the earthquakes occurred while Calm and Unpleasantness are not showed clearly during the small earthquakes but in the large tremor.",
"title": ""
},
{
"docid": "ec5aac01866a1e4ca3f4e906990d5d8e",
"text": "But, as we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or in management technique, that by itself promises even one orderof-magnitude improvement in productivity, in reliability, in simplicity. In this article, I shall try to show why, by examining both the nature of the software problem and the properties of the bullets proposed.",
"title": ""
},
{
"docid": "0745755e5347c370cdfbeca44dc6d288",
"text": "For many decades correlation and power spectrum have been primary tools for digital signal processing applications in the biomedical area. The information contained in the power spectrum is essentially that of the autocorrelation sequence; which is sufficient for complete statistical descriptions of Gaussian signals of known means. However, there are practical situations where one needs to look beyond autocorrelation of a signal to extract information regarding deviation from Gaussianity and the presence of phase relations. Higher order spectra, also known as polyspectra, are spectral representations of higher order statistics, i.e. moments and cumulants of third order and beyond. HOS (higher order statistics or higher order spectra) can detect deviations from linearity, stationarity or Gaussianity in the signal. Most of the biomedical signals are non-linear, non-stationary and non-Gaussian in nature and therefore it can be more advantageous to analyze them with HOS compared to the use of second-order correlations and power spectra. In this paper we have discussed the application of HOS for different bio-signals. HOS methods of analysis are explained using a typical heart rate variability (HRV) signal and applications to other signals are reviewed.",
"title": ""
},
{
"docid": "d70a4fb982aeb2bd502519fb0a7d5c7b",
"text": "We introduce a notion of algorithmic stability of learning algorithms—that we term hypothesis stability—that captures stability of the hypothesis output by the learning algorithm in the normed space of functions from which hypotheses are selected. e main result of the paper bounds the generalization error of any learning algorithm in terms of its hypothesis stability. e bounds are based on martingale inequalities in the Banach space to which the hypotheses belong. We apply the general bounds to bound the performance of some learning algorithms based on empirical risk minimization and stochastic gradient descent. Parts of the work were done when Tongliang Liu was a visiting PhD student at Pompeu Fabra University. School of Information Technologies, Faculty Engineering and Information Technologies, University of Sydney, Sydney, Australia, [email protected], [email protected] Department of Economics and Business, Pompeu Fabra University, Barcelona, Spain, [email protected] ICREA, Pg. Llus Companys 23, 08010 Barcelona, Spain Barcelona Graduate School of Economics AI group, DTIC, Universitat Pompeu Fabra, Barcelona, Spain, [email protected] 1",
"title": ""
},
{
"docid": "e7f668483c8c0d1fbf6ef2c208e1a225",
"text": "A new capacitive pressure sensor with very large dynamic range is introduced. The sensor is based on a new technique for substantially changing the surface area of the electrodes, rather than the inter-electrode spacing as commonly done at the present. The prototype device has demonstrated a change in capacitance of approximately 2500 pF over a pressure range of 10 kPa.",
"title": ""
}
] | scidocsrr |
b026c22ef03caa1381fa639d5de6c8ba | Going Spear Phishing: Exploring Embedded Training and Awareness | [
{
"docid": "40fbee18e4b0eca3f2b9ad69119fec5d",
"text": "Phishing attacks, in which criminals lure Internet users to websites that impersonate legitimate sites, are occurring with increasing frequency and are causing considerable harm to victims. In this paper we describe the design and evaluation of an embedded training email system that teaches people about phishing during their normal use of email. We conducted lab experiments contrasting the effectiveness of standard security notices about phishing with two embedded training designs we developed. We found that embedded training works better than the current practice of sending security notices. We also derived sound design principles for embedded training systems.",
"title": ""
}
] | [
{
"docid": "95b48a41d796aec0a1f23b3fc0879ed9",
"text": "Action anticipation aims to detect an action before it happens. Many real world applications in robotics and surveillance are related to this predictive capability. Current methods address this problem by first anticipating visual representations of future frames and then categorizing the anticipated representations to actions. However, anticipation is based on a single past frame’s representation, which ignores the history trend. Besides, it can only anticipate a fixed future time. We propose a Reinforced Encoder-Decoder (RED) network for action anticipation. RED takes multiple history representations as input and learns to anticipate a sequence of future representations. One salient aspect of RED is that a reinforcement module is adopted to provide sequence-level supervision; the reward function is designed to encourage the system to make correct predictions as early as possible. We test RED on TVSeries, THUMOS-14 and TV-Human-Interaction datasets for action anticipation and achieve state-of-the-art performance on all datasets.",
"title": ""
},
{
"docid": "30fa14e4cfa8e33d863295c4f14ee671",
"text": "Approximate computing can decrease the design complexity with an increase in performance and power efficiency for error resilient applications. This brief deals with a new design approach for approximation of multipliers. The partial products of the multiplier are altered to introduce varying probability terms. Logic complexity of approximation is varied for the accumulation of altered partial products based on their probability. The proposed approximation is utilized in two variants of 16-bit multipliers. Synthesis results reveal that two proposed multipliers achieve power savings of 72% and 38%, respectively, compared to an exact multiplier. They have better precision when compared to existing approximate multipliers. Mean relative error figures are as low as 7.6% and 0.02% for the proposed approximate multipliers, which are better than the previous works. Performance of the proposed multipliers is evaluated with an image processing application, where one of the proposed models achieves the highest peak signal to noise ratio.",
"title": ""
},
{
"docid": "3c79c23036ed7c9a5542670264310141",
"text": "This paper investigates possible improvements in grid voltage stability and transient stability with wind energy converter units using modified P/Q control. The voltage source converter (VSC) in modern variable speed wind turbines is utilized to achieve this enhancement. The findings show that using only available hardware for variable-speed turbines improvements could be obtained in all cases. Moreover, it was found that power system stability improvement is often larger when the control is modified for a given variable speed wind turbine rather than when standard variable speed turbines are used instead of fixed speed turbines. To demonstrate that the suggested modifications can be incorporated in real installations, a real situation is presented where short-term voltage stability is improved as an additional feature of an existing VSC high voltage direct current (HVDC) installation",
"title": ""
},
{
"docid": "112026af056b3350eceed0c6d0035260",
"text": "This paper presents a short-baseline real-time stereo vision system that is capable of the simultaneous and robust estimation of the ego-motion and of the 3D structure and the independent motion of thousands of points of the environment. Kalman filters estimate the position and velocity of world points in 3D Euclidean space. The six degrees of freedom of the ego-motion are obtained by minimizing the projection error of the current and previous clouds of static points. Experimental results with real data in indoor and outdoor environments demonstrate the robustness, accuracy and efficiency of our approach. Since the baseline is as short as 13cm, the device is head-mountable, and can be used by a visually impaired person. Our proposed system can be used to augment the perception of the user in complex dynamic environments.",
"title": ""
},
{
"docid": "687dbb03f675f0bf70e6defa9588ae23",
"text": "This paper presents a novel method for discovering causal relations between events encoded in text. In order to determine if two events from the same sentence are in a causal relation or not, we first build a graph representation of the sentence that encodes lexical, syntactic, and semantic information. In a second step, we automatically extract multiple graph patterns (or subgraphs) from such graph representations and sort them according to their relevance in determining the causality between two events from the same sentence. Finally, in order to decide if these events are causal or not, we train a binary classifier based on what graph patterns can be mapped to the graph representation associated with the two events. Our experimental results show that capturing the feature dependencies of causal event relations using a graph representation significantly outperforms an existing method that uses a flat representation of features.",
"title": ""
},
{
"docid": "baaff0e771e784304202ad7a0c987ef8",
"text": "This paper presents a simple end-to-end model for speech recognition, combining a convolutional network based acoustic model and a graph decoding. It is trained to output letters, with transcribed speech, without the need for force alignment of phonemes. We introduce an automatic segmentation criterion for training from sequence annotation without alignment that is on par with CTC [6] while being simpler. We show competitive results in word error rate on the Librispeech corpus [18] with MFCC features, and promising results from raw waveform.",
"title": ""
},
{
"docid": "e0580a51b7991f86559a7a3aa8b26204",
"text": "A new ultra-wideband monocycle pulse generator with good performance is designed and demonstrated. The pulse generator circuits employ SRD(step recovery diode), Schottky diode, and simple RC coupling and decoupling circuit, and are completely fabricated on the planar microstrip structure, which have the characteristic of low cost and small size. Through SRD modeling, the accuracy of the simulation is improved, which save the design period greatly. The generated monocycle pulse has the peak-to-peak amplitude 1.3V, pulse width 370ps and pulse repetition rate of 10MHz, whose waveform features are symmetric well and low ringing level. Good agreement between the measured and calculated results is achieved.",
"title": ""
},
{
"docid": "2525c33c5b06a2864eb44e390ce802d8",
"text": "The energy landscape theory of protein folding is a statistical description of a protein's potential surface. It assumes that folding occurs through organizing an ensemble of structures rather than through only a few uniquely defined structural intermediates. It suggests that the most realistic model of a protein is a minimally frustrated heteropolymer with a rugged funnel-like landscape biased toward the native structure. This statistical description has been developed using tools from the statistical mechanics of disordered systems, polymers, and phase transitions of finite systems. We review here its analytical background and contrast the phenomena in homopolymers, random heteropolymers, and protein-like heteropolymers that are kinetically and thermodynamically capable of folding. The connection between these statistical concepts and the results of minimalist models used in computer simulations is discussed. The review concludes with a brief discussion of how the theory helps in the interpretation of results from fast folding experiments and in the practical task of protein structure prediction.",
"title": ""
},
{
"docid": "1490331d46b8c19fce0a94e072bff502",
"text": "We explore the reliability and validity of a self-report measure of procrastination and conscientiousness designed for use with thirdto fifth-grade students. The responses of 120 students are compared with teacher and parent ratings of the student. Confirmatory and exploratory factor analyses were also used to examine the structure of the scale. Procrastination and conscientiousness are highly correlated (inversely); evidence suggests that procrastination and conscientiousness are aspects of the same construct. Procrastination and conscientiousness are correlated with the Physiological Anxiety subscale of the Revised Children’s Manifest Anxiety Scale, and with the Task (Mastery) and Avoidance (Task Aversiveness) subscales of Skaalvik’s (1997) Goal Orientation Scales. Both theoretical implications and implications for interventions are discussed. © 2002 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "3823975ea2bcda029c3c3cda2b0472be",
"text": "by Dimitrios Tzionas for the degree of Doctor rerum naturalium Hand motion capture with an RGB-D sensor gained recently a lot of research attention, however, even most recent approaches focus on the case of a single isolated hand. We focus instead on hands that interact with other hands or with a rigid or articulated object. Our framework successfully captures motion in such scenarios by combining a generative model with discriminatively trained salient points, collision detection and physics simulation to achieve a low tracking error with physically plausible poses. All components are unified in a single objective function that can be optimized with standard optimization techniques. We initially assume a-priori knowledge of the object’s shape and skeleton. In case of unknown object shape there are existing 3d reconstruction methods that capitalize on distinctive geometric or texture features. These methods though fail for textureless and highly symmetric objects like household articles, mechanical parts or toys. We show that extracting 3d hand motion for in-hand scanning e↵ectively facilitates the reconstruction of such objects and we fuse the rich additional information of hands into a 3d reconstruction pipeline. Finally, although shape reconstruction is enough for rigid objects, there is a lack of tools that build rigged models of articulated objects that deform realistically using RGB-D data. We propose a method that creates a fully rigged model consisting of a watertight mesh, embedded skeleton and skinning weights by employing a combination of deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow. To my family Maria, Konstantinos, Glyka. In the loving memory of giagià 'Olga & pappo‘c Giànnhc. (Olga Matoula & Ioannis Matoulas) πste ô yuqò πsper ô qe–r ‚stin· ka» gÄr ô qe»r ÓrganÏn ‚stin Êrgànwn, ka» Â no‹c e⁄doc e d¿n ka» ô a“sjhsic e⁄doc a sjht¿n.",
"title": ""
},
{
"docid": "23f2f6e5dd50942809aece136c26e549",
"text": "Paraphrases extracted from parallel corpora by the pivot method (Bannard and Callison-Burch, 2005) constitute a valuable resource for multilingual NLP applications. In this study, we analyse the semantics of unigram pivot paraphrases and use a graph-based sense induction approach to unveil hidden sense distinctions in the paraphrase sets. The comparison of the acquired senses to gold data from the Lexical Substitution shared task (McCarthy and Navigli, 2007) demonstrates that sense distinctions exist in the paraphrase sets and highlights the need for a disambiguation step in applications using this resource.",
"title": ""
},
{
"docid": "44e527e6078a01abd79a5f1f74fa1b78",
"text": "A transformer provides galvanic isolation and grounding of the photovoltaic (PV) array in a PV-fed grid-connected inverter. Inclusion of the transformer, however, may increase the cost and/or bulk of the system. To overcome this drawback, a single-phase, single-stage [no extra converter for voltage boost or maximum power point tracking (MPPT)], doubly grounded, transformer-less PV interface, based on the buck-boost principle, is presented. The configuration is compact and uses lesser components. Only one (undivided) PV source and one buck-boost inductor are used and shared between the two half cycles, which prevents asymmetrical operation and parameter mismatch problems. Total harmonic distortion and DC component of the current supplied to the grid is low, compared to existing topologies and conform to standards like IEEE 1547. A brief review of the existing, transformer-less, grid-connected inverter topologies is also included. It is demonstrated that, as compared to the split PV source topology, the proposed configuration is more effective in MPPT and array utilization. Design and analysis of the inverter in discontinuous conduction mode is carried out. Simulation and experimental results are presented.",
"title": ""
},
{
"docid": "8de25881e8a5f12f891656f271c44d4d",
"text": "Forest fires play a critical role in landscape transformation, vegetation succession, soil degradation and air quality. Improvements in fire risk estimation are vital to reduce the negative impacts of fire, either by lessen burn severity or intensity through fuel management, or by aiding the natural vegetation recovery using post-fire treatments. This paper presents the methods to generate the input variables and the risk integration developed within the Firemap project (funded under the Spanish Ministry of Science and Technology) to map wildland fire risk for several regions of Spain. After defining the conceptual scheme for fire risk assessment, the paper describes the methods used to generate the risk parameters, and presents",
"title": ""
},
{
"docid": "beff5ce5202460e736af0f06d5d75f83",
"text": "MOTIVATION\nDuring the past decade, the new focus on genomics has highlighted a particular challenge: to integrate the different views of the genome that are provided by various types of experimental data.\n\n\nRESULTS\nThis paper describes a computational framework for integrating and drawing inferences from a collection of genome-wide measurements. Each dataset is represented via a kernel function, which defines generalized similarity relationships between pairs of entities, such as genes or proteins. The kernel representation is both flexible and efficient, and can be applied to many different types of data. Furthermore, kernel functions derived from different types of data can be combined in a straightforward fashion. Recent advances in the theory of kernel methods have provided efficient algorithms to perform such combinations in a way that minimizes a statistical loss function. These methods exploit semidefinite programming techniques to reduce the problem of finding optimizing kernel combinations to a convex optimization problem. Computational experiments performed using yeast genome-wide datasets, including amino acid sequences, hydropathy profiles, gene expression data and known protein-protein interactions, demonstrate the utility of this approach. A statistical learning algorithm trained from all of these data to recognize particular classes of proteins--membrane proteins and ribosomal proteins--performs significantly better than the same algorithm trained on any single type of data.\n\n\nAVAILABILITY\nSupplementary data at http://noble.gs.washington.edu/proj/sdp-svm",
"title": ""
},
{
"docid": "d3f35e91d5d022de5fe816cf1234e415",
"text": "Rock mass description and characterisation is a basic task for exploration, mining work-flows and ground-water studies. Rock analysis can be performed using borehole logs that are created using a televiewer. Planar discontinuities in the rock appear as sinusoidal curves in borehole logs. The aim of this project is to develop a fast algorithm to analyse borehole imagery using image processing techniques, to identify and trace the discontinuities, and to perform quantitative analysis on their distribution.",
"title": ""
},
{
"docid": "9a27c676b5d356d5feb91850e975a336",
"text": "Joseph Goldstein has written in this journal that creation (through invention) and revelation (through discovery) are two different routes to advancement in the biomedical sciences1. In my work as a phytochemist, particularly during the period from the late 1960s to the 1980s, I have been fortunate enough to travel both routes. I graduated from the Beijing Medical University School of Pharmacy in 1955. Since then, I have been involved in research on Chinese herbal medicine in the China Academy of Chinese Medical Sciences (previously known as the Academy of Traditional Chinese Medicine). From 1959 to 1962, I was released from work to participate in a training course in Chinese medicine that was especially designed for professionals with backgrounds in Western medicine. The 2.5-year training guided me to the wonderful treasure to be found in Chinese medicine and toward understanding the beauty in the philosophical thinking that underlies a holistic view of human beings and the universe.",
"title": ""
},
{
"docid": "595052e154117ce66202a1a82e0a4072",
"text": "This paper presents the design of a new haptic feedback device for transradial myoelectric upper limb prosthesis that allows the amputee person to perceive the sensation of force-gripping and object-sliding. The system designed has three mechanical-actuator units to convey the sensation of force, and one vibrotactile unit to transmit the sensation of object sliding. The device designed will be placed on the user's amputee forearm. In order to validate the design of the structure, a stress analysis through Finite Element Method (FEM) is conducted.",
"title": ""
},
{
"docid": "ae7fb63bb4a70aa508fab8500e451402",
"text": "Dynamic Optimization Problems (DOPs) have been widely studied using Evolutionary Algorithms (EAs). Yet, a clear and rigorous definition of DOPs is lacking in the Evolutionary Dynamic Optimization (EDO) community. In this paper, we propose a unified definition of DOPs based on the idea of multiple-decision-making discussed in the Reinforcement Learning (RL) community. We draw a connection between EDO and RL by arguing that both of them are studying DOPs according to our definition of DOPs. We point out that existing EDO or RL research has been mainly focused on some types of DOPs. A conceptualized benchmark problem, which is aimed at the systematic study of various DOPs, is then developed. Some interesting experimental studies on the benchmark reveal that EDO and RL methods are specialized in certain types of DOPs and more importantly new algorithms for DOPs can be developed by combining the strength of both EDO and RL methods.",
"title": ""
},
{
"docid": "a41dfbce4138a8422bc7ddfac830e557",
"text": "This paper is the second part in a series that provides a comprehensive survey of the problems and techniques of tracking maneuvering targets in the absence of the so-called measurement-origin uncertainty. It surveys motion models of ballistic targets used for target tracking. Models for all three phases (i.e., boost, coast, and reentry) of motion are covered.",
"title": ""
}
] | scidocsrr |
42c5ebd88bc77fbaab6795a44f86e514 | Developing a Knowledge Management Strategy: Reflections from an Action Research Project | [
{
"docid": "a2047969c4924a1e93b805b4f7d2402c",
"text": "Knowledge is a resource that is valuable to an organization's ability to innovate and compete. It exists within the individual employees, and also in a composite sense within the organization. According to the resourcebased view of the firm (RBV), strategic assets are the critical determinants of an organization's ability to maintain a sustainable competitive advantage. This paper will combine RBV theory with characteristics of knowledge to show that organizational knowledge is a strategic asset. Knowledge management is discussed frequently in the literature as a mechanism for capturing and disseminating the knowledge that exists within the organization. This paper will also explain practical considerations for implementation of knowledge management principles.",
"title": ""
},
{
"docid": "ca6b556eb4de9a8f66aefd5505c20f3d",
"text": "Knowledge is a broad and abstract notion that has defined epistemological debate in western philosophy since the classical Greek era. In the past Richard Watson was the accepting senior editor for this paper. MISQ Review articles survey, conceptualize, and synthesize prior MIS research and set directions for future research. For more details see http://www.misq.org/misreview/announce.html few years, however, there has been a growing interest in treating knowledge as a significant organizational resource. Consistent with the interest in organizational knowledge and knowledge management (KM), IS researchers have begun promoting a class of information systems, referred to as knowledge management systems (KMS). The objective of KMS is to support creation, transfer, and application of knowledge in organizations. Knowledge and knowledge management are complex and multi-faceted concepts. Thus, effective development and implementation of KMS requires a foundation in several rich",
"title": ""
}
] | [
{
"docid": "1dcbd0c9fad30fcc3c0b6f7c79f5d04c",
"text": "Anvil is a tool for the annotation of audiovisual material containing multimodal dialogue. Annotation takes place on freely definable, multiple layers (tracks) by inserting time-anchored elements that hold a number of typed attribute-value pairs. Higher-level elements (suprasegmental) consist of a sequence of elements. Attributes contain symbols or cross-level links to arbitrary other elements. Anvil is highly generic (usable with different annotation schemes), platform-independent, XMLbased and fitted with an intuitive graphical user interface. For project integration, Anvil offers the import of speech transcription and export of text and table data for further statistical processing.",
"title": ""
},
{
"docid": "7f5e6c0061351ab064aa7fd25d076a1b",
"text": "Guadua angustifolia Kunth was successfully propagated in vitro from axillary buds. Culture initiation, bud sprouting, shoot and plant multiplication, rooting and acclimatization, were evaluated. Best results were obtained using explants from greenhouse-cultivated plants, following a disinfection procedure that comprised the sequential use of an alkaline detergent, a mixture of the fungicide Benomyl and the bactericide Agri-mycin, followed by immersion in sodium hypochlorite (1.5% w/v) for 10 min, and culturing on Murashige and Skoog medium containing 2 ml l−1 of Plant Preservative Mixture®. Highest bud sprouting in original explants was observed when 3 mg l−1 N6-benzylaminopurine (BAP) was incorporated into the culture medium. Production of lateral shoots in in vitro growing plants increased with BAP concentration in culture medium, up to 5 mg l−1, the highest concentration assessed. After six subcultures, clumps of 8–12 axes were obtained, and their division in groups of 3–5 axes allowed multiplication of the plants. Rooting occurred in vitro spontaneously in 100% of the explants that produced lateral shoots. Successful acclimatization of well-rooted clumps of 5–6 axes was achieved in the greenhouse under mist watering in a mixture of soil, sand and rice hulls (1:1:1).",
"title": ""
},
{
"docid": "32e864c7f9ee7258091ecc4604c7e346",
"text": "\"The second edition is clearer and adds more examples on how to use STL in a practical environment. Moreover, it is more concerned with performance and tools for its measurement. Both changes are very welcome.\"--Lawrence Rauchwerger, Texas A&M University \"So many algorithms, so little time! The generic algorithms chapter with so many more examples than in the previous edition is delightful! The examples work cumulatively to give a sense of comfortable competence with the algorithms, containers, and iterators used.\"--Max A. Lebow, Software Engineer, Unisys Corporation The STL Tutorial and Reference Guide is highly acclaimed as the most accessible, comprehensive, and practical introduction to the Standard Template Library (STL). Encompassing a set of C++ generic data structures and algorithms, STL provides reusable, interchangeable components adaptable to many different uses without sacrificing efficiency. Written by authors who have been instrumental in the creation and practical application of STL, STL Tutorial and Reference Guide, Second Edition includes a tutorial, a thorough description of each element of the library, numerous sample applications, and a comprehensive reference. You will find in-depth explanations of iterators, generic algorithms, containers, function objects, and much more. Several larger, non-trivial applications demonstrate how to put STL's power and flexibility to work. This book will also show you how to integrate STL with object-oriented programming techniques. In addition, the comprehensive and detailed STL reference guide will be a constant and convenient companion as you learn to work with the library. This second edition is fully updated to reflect all of the changes made to STL for the final ANSI/ISO C++ language standard. It has been expanded with new chapters and appendices. Many new code examples throughout the book illustrate individual concepts and techniques, while larger sample programs demonstrate the use of the STL in real-world C++ software development. An accompanying Web site, including source code and examples referenced in the text, can be found at http://www.cs.rpi.edu/~musser/stl-book/index.html.",
"title": ""
},
{
"docid": "416a3d01c713a6e751cb7893c16baf21",
"text": "BACKGROUND\nAnaemia is associated with poor cancer control, particularly in patients undergoing radiotherapy. We investigated whether anaemia correction with epoetin beta could improve outcome of curative radiotherapy among patients with head and neck cancer.\n\n\nMETHODS\nWe did a multicentre, double-blind, randomised, placebo-controlled trial in 351 patients (haemoglobin <120 g/L in women or <130 g/L in men) with carcinoma of the oral cavity, oropharynx, hypopharynx, or larynx. Patients received curative radiotherapy at 60 Gy for completely (R0) and histologically incomplete (R1) resected disease, or 70 Gy for macroscopically incompletely resected (R2) advanced disease (T3, T4, or nodal involvement) or for primary definitive treatment. All patients were assigned to subcutaneous placebo (n=171) or epoetin beta 300 IU/kg (n=180) three times weekly, from 10-14 days before and continuing throughout radiotherapy. The primary endpoint was locoregional progression-free survival. We assessed also time to locoregional progression and survival. Analysis was by intention to treat.\n\n\nFINDINGS\n148 (82%) patients given epoetin beta achieved haemoglobin concentrations higher than 140 g/L (women) or 150 g/L (men) compared with 26 (15%) given placebo. However, locoregional progression-free survival was poorer with epoetin beta than with placebo (adjusted relative risk 1.62 [95% CI 1.22-2.14]; p=0.0008). For locoregional progression the relative risk was 1.69 (1.16-2.47, p=0.007) and for survival was 1.39 (1.05-1.84, p=0.02).\n\n\nINTERPRETATION\nEpoetin beta corrects anaemia but does not improve cancer control or survival. Disease control might even be impaired. Patients receiving curative cancer treatment and given erythropoietin should be studied in carefully controlled trials.",
"title": ""
},
{
"docid": "8738ec0c6e265f0248d7fa65de4cdd05",
"text": "BACKGROUND\nCaring traditionally has been at the center of nursing. Effectively measuring the process of nurse caring is vital in nursing research. A short, less burdensome dimensional instrument for patients' use is needed for this purpose.\n\n\nOBJECTIVES\nTo derive and validate a shorter Caring Behaviors Inventory (CBI) within the context of the 42-item CBI.\n\n\nMETHODS\nThe responses to the 42-item CBI from 362 hospitalized patients were used to develop a short form using factor analysis. A test-retest reliability study was conducted by administering the shortened CBI to new samples of patients (n = 64) and nurses (n = 42).\n\n\nRESULTS\nFactor analysis yielded a 24-item short form (CBI-24) that (a) covers the four major dimensions assessed by the 42-item CBI, (b) has internal consistency (alpha =.96) and convergent validity (r =.62) similar to the 42-item CBI, (c) reproduces at least 97% of the variance of the 42 items in patients and nurses, (d) provides statistical conclusions similar to the 42-item CBI on scoring for caring behaviors by patients and nurses, (e) has similar sensitivity in detecting between-patient difference in perceptions, (f) obtains good test-retest reliability (r = .88 for patients and r=.82 for nurses), and (g) confirms high internal consistency (alpha >.95) as a stand-alone instrument administered to the new samples.\n\n\nCONCLUSION\nCBI-24 appears to be equivalent to the 42-item CBI in psychometric properties, validity, reliability, and scoring for caring behaviors among patients and nurses. These results recommend the use of CBI-24 to reduce response burden and research costs.",
"title": ""
},
{
"docid": "09e882927b53708eef7648d16e6ec380",
"text": "The main aim of the current paper is to develop a high-order numerical scheme to solve the space–time tempered fractional diffusion-wave equation. The convergence order of the proposed method is O(τ2 + h4). Also, we prove the unconditional stability and convergence of the developed method. The numerical results show the efficiency of the provided numerical scheme. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a934474bb38e37e8246ff561efd74bd3",
"text": "While it is possible to understand utopias and dystopias as particular kinds of sociopolitical systems, in this text we argue that utopias and dystopias can also be understood as particular kinds of information systems in which data is received, stored, generated, processed, and transmitted by the minds of human beings that constitute the system’s ‘nodes’ and which are connected according to specific network topologies. We begin by formulating a model of cybernetic information-processing properties that characterize utopias and dystopias. It is then shown that the growing use of neuroprosthetic technologies for human enhancement is expected to radically reshape the ways in which human minds access, manipulate, and share information with one another; for example, such technologies may give rise to posthuman ‘neuropolities’ in which human minds can interact with their environment using new sensorimotor capacities, dwell within shared virtual cyberworlds, and link with one another to form new kinds of social organizations, including hive minds that utilize communal memory and decision-making. Drawing on our model, we argue that the dynamics of such neuropolities will allow (or perhaps even impel) the creation of new kinds of utopias and dystopias that were previously impossible to realize. Finally, we suggest that it is important that humanity begin thoughtfully exploring the ethical, social, and political implications of realizing such technologically enabled societies by studying neuropolities in a place where they have already been ‘pre-engineered’ and provisionally exist: in works of audiovisual science fiction such as films, television series, and role-playing games",
"title": ""
},
{
"docid": "039dddd12a436dc8ab8a36eef2d2ff6d",
"text": "Despite significant accuracy improvement in convolutional neural networks (CNN) based object detectors, they often require prohibitive runtimes to process an image for real-time applications. State-of-the-art models often use very deep networks with a large number of floating point operations. Efforts such as model compression learn compact models with fewer number of parameters, but with much reduced accuracy. In this work, we propose a new framework to learn compact and fast object detection networks with improved accuracy using knowledge distillation [20] and hint learning [34]. Although knowledge distillation has demonstrated excellent improvements for simpler classification setups, the complexity of detection poses new challenges in the form of regression, region proposals and less voluminous labels. We address this through several innovations such as a weighted cross-entropy loss to address class imbalance, a teacher bounded loss to handle the regression component and adaptation layers to better learn from intermediate teacher distributions. We conduct comprehensive empirical evaluation with different distillation configurations over multiple datasets including PASCAL, KITTI, ILSVRC and MS-COCO. Our results show consistent improvement in accuracy-speed trade-offs for modern multi-class detection models.",
"title": ""
},
{
"docid": "dbfbdd4866d7fd5e34620c82b8124c3a",
"text": "Searle (1989) posits a set of adequacy criteria for any account of the meaning and use of performative verbs, such as order or promise. Central among them are: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-verifying; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning. He then argues that the fundamental problem with assertoric accounts of performatives is that they fail (b), and hence (a), because being committed to having an intention does not guarantee having that intention. Relying on a uniform meaning for verbs on their reportative and performative uses, we propose an assertoric analysis of performative utterances that does not require an actual intention for deriving (b), and hence can meet (a) and (c). Explicit performative utterances are those whose illocutionary force is made explicit by the verbs appearing in them (Austin 1962): (1) I (hereby) promise you to be there at five. (is a promise) (2) I (hereby) order you to be there at five. (is an order) (3) You are (hereby) ordered to report to jury duty. (is an order) (1)–(3) look and behave syntactically like declarative sentences in every way. Hence there is no grammatical basis for the once popular claim that I promise/ order spells out a ‘performative prefix’ that is silent in all other declaratives. Such an analysis, in any case, leaves unanswered the question of how illocutionary force is related to compositional meaning and, consequently, does not explain how the first person and present tense are special, so that first-person present tense forms can spell out performative prefixes, while others cannot. Minimal variations in person or tense remove the ‘performative effect’: (4) I promised you to be there at five. (is not a promise) (5) He promises to be there at five. (is not a promise) An attractive idea is that utterances of sentences like those in (1)–(3) are asser∗ The names of the authors appear in alphabetical order. 150 Condoravdi & Lauer tions, just like utterances of other declaratives, whose truth is somehow guaranteed. In one form or another, this basic strategy has been pursued by a large number of authors ever since Austin (1962) (Lemmon 1962; Hedenius 1963; Bach & Harnish 1979; Ginet 1979; Bierwisch 1980; Leech 1983; among others). One type of account attributes self-verification to meaning proper. Another type, most prominently exemplified by Bach & Harnish (1979), tries to derive the performative effect by means of an implicature-like inference that the hearer may draw based on the utterance of the explicit performative. Searle’s (1989) Challenge Searle (1989) mounts an argument against analyses of explicit performative utterances as self-verifying assertions. He takes the argument to show that an assertoric account is impossible. Instead, we take it to pose a challenge that can be met, provided one supplies the right semantics for the verbs involved. Searle’s argument is based on the following desiderata he posits for any theory of explicit performatives: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-guaranteeing; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning, which, in turn, ought to be based on a uniform lexical meaning of the verb across performative and reportative uses. 
According to Searle’s speech act theory, making a promise requires that the promiser intend to do so, and similarly for other performative verbs (the sincerity condition). It follows that no assertoric account can meet (a-c): An assertion cannot ensure that the speaker has the necessary intention. “Such an assertion does indeed commit the speaker to the existence of the intention, but the commitment to having the intention doesn’t guarantee the actual presence of the intention.” Searle (1989: 546) Hence assertoric accounts must fail on (b), and, a forteriori, on (a) and (c).1 Although Searle’s argument is valid, his premise that for truth to be guaranteed the speaker must have a particular intention is questionable. In the following, we give an assertoric account that delivers on (a-c). We aim for an 1 It should be immediately clear that inference-based accounts cannot meet (a-c) above. If the occurrence of the performative effect depends on the hearer drawing an inference, then such sentences could not be self-verifying, for the hearer may well fail to draw the inference. Performative Verbs and Performative Acts 151 account on which the assertion of the explicit performative is the performance of the act named by the performative verb. No hearer inferences are necessary. 1 Reportative and Performative Uses What is the meaning of the word order, then, so that it can have both reportative uses – as in (6) – and performative uses – as in (7)? (6) A ordered B to sign the report. (7) [A to B] I order you to sign the report now. The general strategy in this paper will be to ask what the truth conditions of reportative uses of performative verbs are, and then see what happens if these verbs are put in the first person singular present tense. The reason to start with the reportative uses is that speakers have intuitions about their truth conditions. This is not true for performative uses, because these are always true when uttered, obscuring the truth-conditional content of the declarative sentence.2 An assertion of (6) takes for granted that A presumed to have authority over B and implies that there was a communicative act from A to B. But what kind of communicative act? (7) or, in the right context, (8a-c) would suffice. (8) a. Sign the report now! b. You must sign the report now! c. I want you to sign the report now! What do these sentences have in common? We claim it is this: In the right context they commit A to a particular kind of preference for B signing the report immediately. If B accepts the utterance, he takes on a commitment to act as though he, too, prefers signing the report. If the report is co-present with A and B, he will sign it, if the report is in his office, he will leave to go there immediately, and so on. To comply with an order to p is to act as though one prefers p. One need not actually prefer it, but one has to act as if one did. The authority mentioned above amounts to this acceptance being socially or institutionally mandated. Of course, B has the option to refuse to take on this commitment, in either of two ways: (i) he can deny A’s authority, (ii) while accepting the authority, he can refuse to abide by it, thereby violating the institutional or social mandate. Crucially, in either case, (6) will still be true, as witnessed by the felicity of: 2 Szabolcsi (1982), in one of the earliest proposals for a compositional semantics of performative utterances, already pointed out the importance of reportative uses. 152 Condoravdi & Lauer (9) a. (6), but B refused to do it. 
b. (6), but B questioned his authority. Not even uptake by the addressee is necessary for order to be appropriate, as seen in (10) and the naturally occurring (11):3 (10) (6), but B did not hear him. (11) He ordered Kornilov to desist but either the message failed to reach the general or he ignored it.4 What is necessary is that the speaker expected uptake to happen, arguably a minimal requirement for an act to count as a communicative event. To sum up, all that is needed for (6) to be true and appropriate is that (i) there is a communicative act from A to B which commits A to a preference for B signing the report immediately and (ii) A presumes to have authority over B. The performative effect arises precisely when the utterance itself is a witness for the existential claim in (i). There are two main ingredients in the meaning of order informally outlined above: the notion of a preference, in particular a special kind of preference that guides action, and the notion of a commitment. The next two sections lay some conceptual groundwork before we spell out our analysis in section 4. 2 Representing Preferences To represent preferences that guide action, we need a way to represent preferences of different strength. Kratzer’s (1981) theory of modality is not suitable for this purpose. Suppose, for instance, that Sven desires to finish his paper and that he also wants to lie around all day, doing nothing. Modeling his preferences in the style of Kratzer, the propositions expressed by (12) and (13) would have to be part of Sven’s bouletic ordering source assigned to the actual world: (12) Sven finishes his paper. (13) Sven lies around all day, doing nothing. But then, Sven should be equally happy if he does nothing as he is if he finishes his paper. We want to be able to explain why, given his knowledge that (12) and (13) are incompatible, he works on his paper. Intuitively, it is because the preference expressed by (12) is more important than that expressed by (13). 3 We owe this observation to Lauri Karttunen. 4 https://tspace.library.utoronto.ca/citd/RussianHeritage/12.NR/NR.12.html Performative Verbs and Performative Acts 153 Preference Structures Definition 1. A preference structure relative to an information state W is a pair 〈P,≤〉, where P⊆℘(W ) and ≤ is a (weak) partial order on P. We can now define a notion of consistency that is weaker than requiring that all propositions in the preference structure be compatible: Definition 2. A preference structure 〈P,≤〉 is consistent iff for any p,q ∈ P such that p∩q = / 0, either p < q or q < p. Since preference structures are defined relative to an information state W , consistency will require not only logically but also contextually incompatible propositions to be strictly ranked. For example, if W is Sven’s doxastic state, and he knows that (12) and (13) are incompatible, for a bouletic preference structure of his to be consistent it must strictly rank the two propositions. In general, bouletic preference ",
"title": ""
},
{
"docid": "e934c6e5797148d9cfa6cff5e3bec698",
"text": "Ego level is a broad construct that summarizes individual differences in personality development 1 . We examine ego level as it is represented in natural language, using a composite sample of four datasets comprising nearly 44,000 responses. We find support for a developmental sequence in the structure of correlations between ego levels, in analyses of Linguistic Inquiry and Word Count (LIWC) categories 2 and in an examination of the individual words that are characteristic of each level. The LIWC analyses reveal increasing complexity and, to some extent, increasing breadth of perspective with higher levels of development. The characteristic language of each ego level suggests, for example, a shift from consummatory to appetitive desires at the lowest stages, a dawning of doubt at the Self-aware stage, the centrality of achievement motivation at the Conscientious stage, an increase in mutuality and intellectual growth at the Individualistic stage and some renegotiation of life goals and reflection on identity at the highest levels of development. Continuing empirical analysis of ego level and language will provide a deeper understanding of ego development, its relationship with other models of personality and individual differences, and its utility in characterizing people, texts and the cultural contexts that produce them. A linguistic analysis of nearly 44,000 responses to the Washington University Sentence Completion Test elucidates the construct of ego development (personality development through adulthood) and identifies unique linguistic markers of each level of development.",
"title": ""
},
{
"docid": "ff429302ec983dd1203ac6dd97506ef8",
"text": "Financial crises have occurred for many centuries. They are often preceded by a credit boom and a rise in real estate and other asset prices, as in the current crisis. They are also often associated with severe disruption in the real economy. This paper surveys the theoretical and empirical literature on crises. The first explanation of banking crises is that they are a panic. The second is that they are part of the business cycle. Modeling crises as a global game allows the two to be unified. With all the liquidity problems in interbank markets that have occurred during the current crisis, there is a growing literature on this topic. Perhaps the most serious market failure associated with crises is contagion, and there are many papers on this important topic. The relationship between asset price bubbles, particularly in real estate, and crises is discussed at length. Disciplines Economic Theory | Finance | Finance and Financial Management This journal article is available at ScholarlyCommons: http://repository.upenn.edu/fnce_papers/403 Financial Crises: Theory and Evidence Franklin Allen University of Pennsylvania Ana Babus Cambridge University Elena Carletti European University Institute",
"title": ""
},
{
"docid": "932088f443c5f0f3e239ed13032e56d7",
"text": "Hydro Muscles are linear actuators resembling ordinary biological muscles in terms of active dynamic output, passive material properties and appearance. The passive and dynamic characteristics of the latex based Hydro Muscle are addressed. The control tests of modular muscles are presented together with a muscle model relating sensed quantities with net force. Hydro Muscles are discussed in the context of conventional actuators. The hypothesis that Hydro Muscles have greater efficiency than McKibben Muscles is experimentally verified. Hydro Muscle peak efficiency with (without) back flow consideration was 88% (27%). Possible uses of Hydro Muscles are illustrated by relevant robotics projects at WPI. It is proposed that Hydro Muscles can also be an excellent educational tool for moderate-budget robotics classrooms and labs; the muscles are inexpensive (in the order of standard latex tubes of comparable size), made of off-the-shelf elements in less than 10 minutes, easily customizable, lightweight, biologically inspired, efficient, compliant soft linear actuators that are adept for power-augmentation. Moreover, a single source can actuate many muscles by utilizing control of flow and/or pressure. Still further, these muscles can utilize ordinary tap water and successfully operate within a safe range of pressures not overly exceeding standard water household pressure of about 0.59 MPa (85 psi).",
"title": ""
},
{
"docid": "1e4cb8960a99ad69e54e8c44fb21e855",
"text": "Over the last decade, the endocannabinoid system has emerged as a pivotal mediator of acute and chronic liver injury, with the description of the role of CB1 and CB2 receptors and their endogenous lipidic ligands in various aspects of liver pathophysiology. A large number of studies have demonstrated that CB1 receptor antagonists represent an important therapeutic target, owing to beneficial effects on lipid metabolism and in light of its antifibrogenic properties. Unfortunately, the brain-penetrant CB1 antagonist rimonabant, initially approved for the management of overweight and related cardiometabolic risks, was withdrawn because of an alarming rate of mood adverse effects. However, the efficacy of peripherally-restricted CB1 antagonists with limited brain penetrance has now been validated in preclinical models of NAFLD, and beneficial effects on fibrosis and its complications are anticipated. CB2 receptor is currently considered as a promising anti-inflammatory and antifibrogenic target, although clinical development of CB2 agonists is still awaited. In this review, we highlight the latest advances on the impact of the endocannabinoid system on the key steps of chronic liver disease progression and discuss the therapeutic potential of molecules targeting cannabinoid receptors.",
"title": ""
},
{
"docid": "397f6c39825a5d8d256e0cc2fbba5d15",
"text": "This paper presents a video-based motion modeling technique for capturing physically realistic human motion from monocular video sequences. We formulate the video-based motion modeling process in an image-based keyframe animation framework. The system first computes camera parameters, human skeletal size, and a small number of 3D key poses from video and then uses 2D image measurements at intermediate frames to automatically calculate the \"in between\" poses. During reconstruction, we leverage Newtonian physics, contact constraints, and 2D image measurements to simultaneously reconstruct full-body poses, joint torques, and contact forces. We have demonstrated the power and effectiveness of our system by generating a wide variety of physically realistic human actions from uncalibrated monocular video sequences such as sports video footage.",
"title": ""
},
{
"docid": "f7ce06365e2c74ccbf8dcc04277cfb9d",
"text": "In this paper, we propose an enhanced method for detecting light blobs (LBs) for intelligent headlight control (IHC). The main function of the IHC system is to automatically convert high-beam headlights to low beam when vehicles are found in the vicinity. Thus, to implement the IHC, it is necessary to detect preceding or oncoming vehicles. Generally, this process of detecting vehicles is done by detecting LBs in the images. Previous works regarding LB detection can largely be categorized into two approaches by the image type they use: low-exposure (LE) images or autoexposure (AE) images. While they each have their own strengths and weaknesses, the proposed method combines them by integrating the use of the partial region of the AE image confined by the lane detection information and the LE image. Consequently, the proposed method detects headlights at various distances and taillights at close distances using LE images while handling taillights at distant locations by exploiting the confined AE images. This approach enhances the performance of detecting the distant LBs while maintaining low false detections.",
"title": ""
},
{
"docid": "1ee540a265f71c1bf4b92c169556eaa3",
"text": "Guided by the aim to construct light fields with spin-like orbital angular momentum (OAM), that is light fields with a uniform and intrinsic OAM density, we investigate the OAM of arrays of optical vortices with rectangular symmetry. We find that the OAM per unit cell depends on the choice of unit cell and can even change sign when the unit cell is translated. This is the case even if the OAM in each unit cell is intrinsic, that is independent of the choice of measurement axis. We show that spin-like OAM can be found only if the OAM per unit cell vanishes. Our results are applicable to the z component of the angular momentum of any x- and y-periodic momentum distribution in the xy plane, and can also be applied other periodic light beams, arrays of rotating massive objects and periodic motion of liquids.",
"title": ""
},
{
"docid": "8eb5e5d7c224782506aba37dcb91614f",
"text": "With adolescents’ frequent use of social media, electronic bullying has emerged as a powerful platform for peer victimization. The present two studies explore how adolescents perceive electronic vs. traditional bullying in emotional impact and strategic responses. In Study 1, 97 adolescents (mean age = 15) viewed hypothetical peer victimization scenarios, in parallel electronic and traditional forms, with female characters experiencing indirect relational aggression and direct verbal aggression. In Study 2, 47 adolescents (mean age = 14) viewed the direct verbal aggression scenario from Study 1, and a new scenario, involving male characters in the context of direct verbal aggression. Participants were asked to imagine themselves as the victim in all scenarios and then rate their emotional reactions, strategic responses, and goals for the outcome. Adolescents reported significant negative emotions and disruptions in typical daily activities as the victim across divergent bullying scenarios. In both studies few differences emerged when comparing electronic to traditional bullying, suggesting that online and off-line bullying are subtypes of peer victimization. There were expected differences in strategic responses that fit the medium of the bullying. Results also suggested that embarrassment is a common and highly relevant negative experience in both indirect relational and direct verbal aggression among",
"title": ""
},
{
"docid": "7bf959cd3d5ffaf845510ce0eb69c6d6",
"text": "This paper describes the approach that was developed for SemEval 2018 Task 2 (Multilingual Emoji Prediction) by the DUTH Team. First, we employed a combination of preprocessing techniques to reduce the noise of tweets and produce a number of features. Then, we built several N-grams, to represent the combination of word and emojis. Finally, we trained our system with a tuned LinearSVC classifier. Our approach in the leaderboard ranked 18th amongst 48 teams.",
"title": ""
},
{
"docid": "c543f7a65207e7de9cc4bc6fa795504a",
"text": "Compressive sensing (CS) is an emerging approach for the acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1-D signals and 2-D images, many important applications involve multidimensional signals; the construction of sparsifying bases and measurement systems for such signals is complicated by their higher dimensionality. In this paper, we propose the use of Kronecker product matrices in CS for two purposes. First, such matrices can act as sparsifying bases that jointly model the structure present in all of the signal dimensions. Second, such matrices can represent the measurement protocols used in distributed settings. Our formulation enables the derivation of analytical bounds for the sparse approximation of multidimensional signals and CS recovery performance, as well as a means of evaluating novel distributed measurement schemes.",
"title": ""
},
{
"docid": "49f955fb928955da09a3bfe08efe78bc",
"text": "A novel macro model approach for modeling ESD MOS snapback is introduced. The macro model consists of standard components only. It includes a MOS transistor modeled by BSIM3v3, a bipolar transistor modeled by VBIC, and a resistor for substrate resistance. No external current source, which is essential in most publicly reported macro models, is included since both BSIM3vs and VBIC have formulations built in to model the relevant effects. The simplicity of the presented macro model makes behavior languages, such as Verilog-A, and special ESD equations not necessary in model implementation. This offers advantages of high simulation speed, wider availability, and less convergence issues. Measurement and simulation of the new approach indicates that good silicon correlation can be achieved.",
"title": ""
}
] | scidocsrr |
4ea2b3d5bd3a9f626da0053bab0ba924 | High-Spectral-Efficiency Optical Modulation Formats | [
{
"docid": "f818a1cab06c4650a0aa250c076f5f88",
"text": "Shannon’s determination of the capacity of the linear Gaussian channel has posed a magnificent challenge to succeeding generations of researchers. This paper surveys how this challenge has been met during the past half century. Orthogonal minimumbandwidth modulation techniques and channel capacity are discussed. Binary coding techniques for low-signal-to-noise ratio (SNR) channels and nonbinary coding techniques for high-SNR channels are reviewed. Recent developments, which now allow capacity to be approached on any linear Gaussian channel, are surveyed. These new capacity-approaching techniques include turbo coding and decoding, multilevel coding, and combined coding/precoding for intersymbol-interference channels.",
"title": ""
}
] | [
{
"docid": "a11c3f75f6ced7f43e3beeb795948436",
"text": "A new concept of building the controller of a thyristor based three-phase dual converter is presented in this paper. The controller is implemented using mixed mode digital-analog circuitry to achieve optimized performance. The realtime six state pulse patterns needed for the converter are generated by a specially designed ROM based circuit synchronized to the power frequency by a phase-locked-loop. The phase angle and other necessary commands for the converter are managed by an AT89C51 microcontroller. The proposed architecture offers 128-steps in the phase angle control, a resolution sufficient for most converter applications. Because of the hybrid nature of the implementation, the controller can change phase angles online smoothly. The computation burden on the microcontroller is nominal and hence it can easily undertake the tasks of monitoring diagnostic data like overload, loss of excitation and phase sequence. Thus a full fledged system is realizable with only one microcontroller chip, making the control system economic, reliable and efficient.",
"title": ""
},
{
"docid": "13c3e0c082bc89aa5dc9e6e7b7a13119",
"text": "We study the problem of Key Exchange (KE), where authentication is two-factor and based on both electronically stored long keys and human-supplied credentials (passwords or biometrics). The latter credential has low entropy and may be adversarily mistyped. Our main contribution is the first formal treatment of mistyping in this setting. Ensuring security in presence of mistyping is subtle. We show mistypingrelated limitations of previous KE definitions and constructions (of Boyen et al. [7, 6, 10] and Kolesnikov and Rackoff [16]). We concentrate on the practical two-factor authenticated KE setting where servers exchange keys with clients, who use short passwords (memorized) and long cryptographic keys (stored on a card). Our work is thus a natural generalization of Halevi-Krawczyk [15] and Kolesnikov-Rackoff [16]. We discuss the challenges that arise due to mistyping. We propose the first KE definitions in this setting, and formally discuss their guarantees. We present efficient KE protocols and prove their security.",
"title": ""
},
{
"docid": "4310a55c8e96f26f060ec8ded7647d8c",
"text": "Chronotherapeutics aim at treating illnesses according to the endogenous biologic rhythms, which moderate xenobiotic metabolism and cellular drug response. The molecular clocks present in individual cells involve approximately fifteen clock genes interconnected in regulatory feedback loops. They are coordinated by the suprachiasmatic nuclei, a hypothalamic pacemaker, which also adjusts the circadian rhythms to environmental cycles. As a result, many mechanisms of diseases and drug effects are controlled by the circadian timing system. Thus, the tolerability of nearly 500 medications varies by up to fivefold according to circadian scheduling, both in experimental models and/or patients. Moreover, treatment itself disrupted, maintained, or improved the circadian timing system as a function of drug timing. Improved patient outcomes on circadian-based treatments (chronotherapy) have been demonstrated in randomized clinical trials, especially for cancer and inflammatory diseases. However, recent technological advances have highlighted large interpatient differences in circadian functions resulting in significant variability in chronotherapy response. Such findings advocate for the advancement of personalized chronotherapeutics through interdisciplinary systems approaches. Thus, the combination of mathematical, statistical, technological, experimental, and clinical expertise is now shaping the development of dedicated devices and diagnostic and delivery algorithms enabling treatment individualization. In particular, multiscale systems chronopharmacology approaches currently combine mathematical modeling based on cellular and whole-body physiology to preclinical and clinical investigations toward the design of patient-tailored chronotherapies. We review recent systems research works aiming to the individualization of disease treatment, with emphasis on both cancer management and circadian timing system-resetting strategies for improving chronic disease control and patient outcomes.",
"title": ""
},
{
"docid": "71c6c714535ae1bfd749cbb8bbb34f5e",
"text": "This paper tackles the problem of relative pose estimation between two monocular camera images in textureless scenes. Due to a lack of point matches, point-based approaches such as the 5-point algorithm often fail when used in these scenarios. Therefore we investigate relative pose estimation from line observations. We propose a new approach in which the relative pose estimation from lines is extended by a 3D line direction estimation step. The estimated line directions serve to improve the robustness and the efficiency of all processing phases: they enable us to guide the matching of line features and allow an efficient calculation of the relative pose. First, we describe in detail the novel 3D line direction estimation from a single image by clustering of parallel lines in the world. Secondly, we propose an innovative guided matching in which only clusters of lines with corresponding 3D line directions are considered. Thirdly, we introduce the new relative pose estimation based on 3D line directions. Finally, we combine all steps to a visual odometry system. We evaluate the different steps on synthetic and real sequences and demonstrate that in the targeted scenarios we outperform the state-of-the-art in both accuracy and computation time.",
"title": ""
},
{
"docid": "6a1fa32d9a716b57a321561dfce83879",
"text": "Most successful computational approaches for protein function prediction integrate multiple genomics and proteomics data sources to make inferences about the function of unknown proteins. The most accurate of these algorithms have long running times, making them unsuitable for real-time protein function prediction in large genomes. As a result, the predictions of these algorithms are stored in static databases that can easily become outdated. We propose a new algorithm, GeneMANIA, that is as accurate as the leading methods, while capable of predicting protein function in real-time. We use a fast heuristic algorithm, derived from ridge regression, to integrate multiple functional association networks and predict gene function from a single process-specific network using label propagation. Our algorithm is efficient enough to be deployed on a modern webserver and is as accurate as, or more so than, the leading methods on the MouseFunc I benchmark and a new yeast function prediction benchmark; it is robust to redundant and irrelevant data and requires, on average, less than ten seconds of computation time on tasks from these benchmarks. GeneMANIA is fast enough to predict gene function on-the-fly while achieving state-of-the-art accuracy. A prototype version of a GeneMANIA-based webserver is available at http://morrislab.med.utoronto.ca/prototype .",
"title": ""
},
{
"docid": "321b5e5f05344b25605b289bcc5fab94",
"text": "We revisit a pioneer unsupervised learning technique called archetypal analysis, [5] which is related to successful data analysis methods such as sparse coding [18] and non-negative matrix factorization [19]. Since it was proposed, archetypal analysis did not gain a lot of popularity even though it produces more interpretable models than other alternatives. Because no efficient implementation has ever been made publicly available, its application to important scientific problems may have been severely limited. Our goal is to bring back into favour archetypal analysis. We propose a fast optimization scheme using an active-set strategy, and provide an efficient open-source implementation interfaced with Matlab, R, and Python. Then, we demonstrate the usefulness of archetypal analysis for computer vision tasks, such as codebook learning, signal classification, and large image collection visualization.",
"title": ""
},
{
"docid": "ea87bfc0d6086e367e8950b445529409",
"text": " Queue stability (Chapter 2.1) Scheduling for stability, capacity regions (Chapter 2.3) Linear programs (Chapter 2.3, Chapter 3) Energy optimality (Chapter 3.2) Opportunistic scheduling (Chapter 2.3, Chapter 3, Chapter 4.6) Lyapunov drift and optimization (Chapter 4.1.0-4.1.2, 4.2, 4.3) Inequality constraints and virtual queues (Chapter 4.4) Drift-plus-penalty algorithm (Chapter 4.5) Performance and delay tradeoffs (Chapter 3.2, 4.5) Backpressure routing (Ex. 4.16, Chapter 5.2, 5.3)",
"title": ""
},
{
"docid": "25058c265e505ed15910dd30dfe03119",
"text": "Endowing machines with sensing capabilities similar to those of humans is a prevalent quest in engineering and computer science. In the pursuit of making computers sense their surroundings, a huge effort has been conducted to allow machines and computers to acquire, process, analyze and understand their environment in a human-like way. Focusing on the sense of hearing, the ability of computers to sense their acoustic environment as humans do goes by the name of machine hearing. To achieve this ambitious aim, the representation of the audio signal is of paramount importance. In this paper, we present an up-to-date review of the most relevant audio feature extraction techniques developed to analyze the most usual audio signals: speech, music and environmental sounds. Besides revisiting classic approaches for completeness, we include the latest advances in the field based on new domains of analysis together with novel bio-inspired proposals. These approaches are described following a taxonomy that organizes them according to their physical or perceptual basis, being subsequently divided depending on the domain of computation (time, frequency, wavelet, image-based, cepstral, or other domains). The description of the approaches is accompanied with recent examples of their application to machine hearing related problems.",
"title": ""
},
{
"docid": "5de29983943c3cfa30bb1e94854b606d",
"text": "Designing reliable user authentication on mobile phones is becoming an increasingly important task to protect users' private information and data. Since biometric approaches can provide many advantages over the traditional authentication methods, they have become a significant topic for both academia and industry. The major goal of biometric user authentication is to authenticate legitimate users and identify impostors based on physiological and behavioral characteristics. In this paper, we survey the development of existing biometric authentication techniques on mobile phones, particularly on touch-enabled devices, with reference to 11 biometric approaches (five physiological and six behavioral). We present a taxonomy of existing efforts regarding biometric authentication on mobile phones and analyze their feasibility of deployment on touch-enabled mobile phones. In addition, we systematically characterize a generic biometric authentication system with eight potential attack points and survey practical attacks and potential countermeasures on mobile phones. Moreover, we propose a framework for establishing a reliable authentication mechanism through implementing a multimodal biometric user authentication in an appropriate way. Experimental results are presented to validate this framework using touch dynamics, and the results show that multimodal biometrics can be deployed on touch-enabled phones to significantly reduce the false rates of a single biometric system. Finally, we identify challenges and open problems in this area and suggest that touch dynamics will become a mainstream aspect in designing future user authentication on mobile phones.",
"title": ""
},
{
"docid": "9fc7f8ef20cf9c15f9d2d2ce5661c865",
"text": "This paper presents a new iris database that contains images with noise. This is in contrast with the existing databases, that are noise free. UBIRIS is a tool for the development of robust iris recognition algorithms for biometric proposes. We present a detailed description of the many characteristics of UBIRIS and a comparison of several image segmentation approaches used in the current iris segmentation methods where it is evident their small tolerance to noisy images.",
"title": ""
},
{
"docid": "5eccbb19af4a1b19551ce4c93c177c07",
"text": "This paper presents the design and development of a microcontroller based heart rate monitor using fingertip sensor. The device uses the optical technology to detect the flow of blood through the finger and offers the advantage of portability over tape-based recording systems. The important feature of this research is the use of Discrete Fourier Transforms to analyse the ECG signal in order to measure the heart rate. Evaluation of the device on real signals shows accuracy in heart rate estimation, even under intense physical activity. The performance of HRM device was compared with ECG signal represented on an oscilloscope and manual pulse measurement of heartbeat, giving excellent results. Our proposed Heart Rate Measuring (HRM) device is economical and user friendly.",
"title": ""
},
{
"docid": "09806e0fdb434c181d9bceed140fed6c",
"text": "Localization of chess-board vertices is a common task in computer vision, underpinning many applications, but relatively little work focusses on designing a specific feature detector that is fast, accurate and robust. In this paper the “Chess-board Extraction by Subtraction and Summation” (ChESS) feature detector, designed to exclusively respond to chess-board vertices, is presented. The method proposed is robust against noise, poor lighting and poor contrast, requires no prior knowledge of the extent of the chessboard pattern, is computationally very efficient, and provides a strength measure of detected features. Such a detector has significant application both in the key field of camera calibration, as well as in Structured Light 3D reconstruction. Evidence is presented showing its robustness, accuracy, and efficiency in comparison to other commonly used detectors both under simulation and in experimental 3D reconstruction of flat plate and cylindrical objects.",
"title": ""
},
{
"docid": "2d0765e6b695348dea8822f695dcbfa1",
"text": "Social networks are currently gaining increasing impact especially in the light of the ongoing growth of web-based services like facebook.com. A central challenge for the social network analysis is the identification of key persons within a social network. In this context, the article aims at presenting the current state of research on centrality measures for social networks. In view of highly variable findings about the quality of various centrality measures, we also illustrate the tremendous importance of a reflected utilization of existing centrality measures. For this purpose, the paper analyzes five common centrality measures on the basis of three simple requirements for the behavior of centrality measures.",
"title": ""
},
{
"docid": "3848b727cfda3031742cec04abd74608",
"text": "This paper presents SemFrame, a system that induces frame semantic verb classes from WordNet and LDOCE. Semantic frames are thought to have significant potential in resolving the paraphrase problem challenging many languagebased applications. When compared to the handcrafted FrameNet, SemFrame achieves its best recall-precision balance with 83.2% recall (based on SemFrame's coverage of FrameNet frames) and 73.8% precision (based on SemFrame verbs’ semantic relatedness to frame-evoking verbs). The next best performing semantic verb classes achieve 56.9% recall and 55.0% precision.",
"title": ""
},
{
"docid": "fd69e05a9be607381c4b8cd69d758f41",
"text": "The increase in electronically mediated self-servic e technologies in the banking industry has impacted on the way banks service consumers. Despit e a large body of research on electronic banking channels, no study has been undertaken to e xplor the fit between electronic banking channels and banking tasks. Nor has there been rese a ch into how the ‘task-channel fit’ and other factors impact on consumers’ intention to use elect ronic banking channels. This paper proposes a theoretical model addressing these gaps. An explora tory study was first conducted, investigating industry experts’ perceptions towards the concept o f ‘task-channel fit’ and its relationship to other electronic banking channel variables. The findings demonstrated that the concept was perceived as being highly relevant by bank managers. A resear ch model was then developed drawing on the existing literature. To evaluate the research mode l quantitatively, a survey will be developed and validated, administered to a sample of consumers, a nd the resulting data used to test both measurement and structural aspects of the research model.",
"title": ""
},
{
"docid": "ecbdb56c52a59f26cf8e33fc533d608f",
"text": "The ethical nature of transformational leadership has been hotly debated. This debate is demonstrated in the range of descriptors that have been used to label transformational leaders including narcissistic, manipulative, and self-centred, but also ethical, just and effective. Therefore, the purpose of the present research was to address this issue directly by assessing the statistical relationship between perceived leader integrity and transformational leadership using the Perceived Leader Integrity Scale (PLIS) and the Multi-Factor Leadership Questionnaire (MLQ). In a national sample of 1354 managers a moderate to strong positive relationship was found between perceived integrity and the demonstration of transformational leadership behaviours. A similar relationship was found between perceived integrity and developmental exchange leadership. A systematic leniency bias was identified when respondents rated subordinates vis-à-vis peer ratings. In support of previous findings, perceived integrity was also found to correlate positively with leader and organisational effectiveness measures.",
"title": ""
},
{
"docid": "9fd5e182851ff0be67e8865c336a1f77",
"text": "Following the developments of wireless and mobile communication technologies, mobile-commerce (M-commerce) has become more and more popular. However, most of the existing M-commerce protocols do not consider the user anonymity during transactions. This means that it is possible to trace the identity of a payer from a M-commerce transaction. Luo et al. in 2014 proposed an NFC-based anonymous mobile payment protocol. It used an NFC-enabled smartphone and combined a built-in secure element (SE) as a trusted execution environment to build an anonymous mobile payment service. But their scheme has several problems and cannot be functional in practice. In this paper, we introduce a new NFC-based anonymous mobile payment protocol. Our scheme has the following features:(1) Anonymity. It prevents the disclosure of user's identity by using virtual identities instead of real identity during the transmission. (2) Efficiency. Confidentiality is achieved by symmetric key cryptography instead of public key cryptography so as to increase the performance. (3) Convenience. The protocol is based on NFC and is EMV compatible. (4) Security. All the transaction is either encrypted or signed by the sender so the confidentiality and authenticity are preserved.",
"title": ""
},
{
"docid": "6cfedfc45ea1b3db23d022b06c46743a",
"text": "This study examined the relationship between financial knowledge and credit card behavior of college students. The widespread availability of credit cards has raised concerns over how college students might use those cards given the negative consequences (both immediate and long-term) associated with credit abuse and mismanagement. Using a sample of 1,354 students from a major southeastern university, results suggest that financial knowledge is a significant factor in the credit card decisions of college students. Students with higher scores on a measure of personal financial knowledge are more likely to engage in more responsible credit card use. Specific behaviors chosen have been associated with greater costs of borrowing and adverse economic consequences in the past.",
"title": ""
},
{
"docid": "a4fa2faf888728e4861cd47377dd8fd8",
"text": "Fully-automatic facial expression recognition (FER) is a key component of human behavior analysis. Performing FER from still images is a challenging task as it involves handling large interpersonal morphological differences, and as partial occlusions can occasionally happen. Furthermore, labelling expressions is a time-consuming process that is prone to subjectivity, thus the variability may not be fully covered by the training data. In this work, we propose to train random forests upon spatially-constrained random local subspaces of the face. The output local predictions form a categorical expression-driven high-level representation that we call local expression predictions (LEPs). LEPs can be combined to describe categorical facial expressions as well as action units (AUs). Furthermore, LEPs can be weighted by confidence scores provided by an autoencoder network. Such network is trained to locally capture the manifold of the non-occluded training data in a hierarchical way. Extensive experiments show that the proposed LEP representation yields high descriptive power for categorical expressions and AU occurrence prediction, and leads to interesting perspectives towards the design of occlusion-robust and confidence-aware FER systems.",
"title": ""
},
{
"docid": "44ae81b3961a682b9b881c8077fb9506",
"text": "Osteoarthritis is a common disease, clinically manifested by joint pain, swelling and progressive loss of function. The severity of disease manifestations can vary but most of the patients only need intermittent symptom relief without major interventions. However, there is a group of patients that shows fast progression of the disease process leading to disability and ultimately joint replacement. Apart from symptom relief, no treatments have been identified that arrest or reverse the disease process. Therefore, there has been increasing attention devoted to the understanding of the mechanisms that are driving the disease process. Among these mechanisms, the biology of the cartilage-subchondral bone unit has been highlighted as key in osteoarthritis, and pathways that involve both cartilage and bone formation and turnover have become prime targets for modulation, and thus therapeutic intervention. Studies in developmental, genetic and joint disease models indicate that Wnt signaling is critically involved in these processes. Consequently, targeting Wnt signaling in a selective and tissue specific manner is an exciting opportunity for the development of disease modifying drugs for osteoarthritis.",
"title": ""
}
] | scidocsrr |
ee4f68d1700841990534552514471aa3 | Mental health awareness: The Indian scenario | [
{
"docid": "c5bc51e3e2ad5aedccfa17095ec1d7ed",
"text": "CONTEXT\nLittle is known about the extent or severity of untreated mental disorders, especially in less-developed countries.\n\n\nOBJECTIVE\nTo estimate prevalence, severity, and treatment of Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) mental disorders in 14 countries (6 less developed, 8 developed) in the World Health Organization (WHO) World Mental Health (WMH) Survey Initiative.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nFace-to-face household surveys of 60 463 community adults conducted from 2001-2003 in 14 countries in the Americas, Europe, the Middle East, Africa, and Asia.\n\n\nMAIN OUTCOME MEASURES\nThe DSM-IV disorders, severity, and treatment were assessed with the WMH version of the WHO Composite International Diagnostic Interview (WMH-CIDI), a fully structured, lay-administered psychiatric diagnostic interview.\n\n\nRESULTS\nThe prevalence of having any WMH-CIDI/DSM-IV disorder in the prior year varied widely, from 4.3% in Shanghai to 26.4% in the United States, with an interquartile range (IQR) of 9.1%-16.9%. Between 33.1% (Colombia) and 80.9% (Nigeria) of 12-month cases were mild (IQR, 40.2%-53.3%). Serious disorders were associated with substantial role disability. Although disorder severity was correlated with probability of treatment in almost all countries, 35.5% to 50.3% of serious cases in developed countries and 76.3% to 85.4% in less-developed countries received no treatment in the 12 months before the interview. Due to the high prevalence of mild and subthreshold cases, the number of those who received treatment far exceeds the number of untreated serious cases in every country.\n\n\nCONCLUSIONS\nReallocation of treatment resources could substantially decrease the problem of unmet need for treatment of mental disorders among serious cases. Structural barriers exist to this reallocation. Careful consideration needs to be given to the value of treating some mild cases, especially those at risk for progressing to more serious disorders.",
"title": ""
}
] | [
{
"docid": "c5b80d54e6b50a56ab5a6d5e0111df81",
"text": "By understanding how real users have employed reliable multicast in real distributed systems, we can develop insight concerning the degree to which this technology has matched expectations. This paper reviews a number of applications with that goal in mind. Our findings point to tradeoffs between the form of reliability used by a system and its scalability and performance. We also find that to reach a broad user community (and a commercially interesting market) the technology must be better integrated with component and object-oriented systems architectures. Looking closely at these architectures, however, we identify some assumptions about failure handling which make reliable multicast difficult to exploit. Indeed, the major failures of reliable multicast are associated wit failures. The broader opportunity appears to involve relatively visible embeddings of these tools int h attempts to position it within object oriented systems in ways that focus on transparent recovery from server o object-oriented architectures enabling knowledgeable users to make tradeoffs. Fault-tolerance through transparent server replication may be better viewed as an unachievable holy grail.",
"title": ""
},
{
"docid": "6a9c7da90fe8de2ad6f3819df07f8642",
"text": "We define Quality of Service (QoS) and cost model for communications in Systems on Chip (SoC), and derive related Network on Chip (NoC) architecture and design process. SoC inter-module communication traffic is classified into four classes of service: signaling (for inter-module control signals); real-time (representing delay-constrained bit streams); RD/WR (modeling short data access) and block-transfer (handling large data bursts). Communication traffic of the target SoC is analyzed (by means of analytic calculations and simulations), and QoS requirements (delay and throughput) for each service class are derived. A customized Quality-of-Service NoC (QNoC) architecture is derived by modifying a generic network architecture. The customization process minimizes the network cost (in area and power) while maintaining the required QoS. The generic network is based on a two-dimensional planar mesh and fixed shortest path (X–Y based) multi-class wormhole routing. Once communication requirements of the target SoC are identified, the network is customized as follows: The SoC modules are placed so as to minimize spatial traffic density, unnecessary mesh links and switching nodes are removed, and bandwidth is allocated to the remaining links and switches according to their relative load so that link utilization is balanced. The result is a low cost customized QNoC for the target SoC which guarantees that QoS requirements are met. 2003 Elsevier B.V. All rights reserved. IDT: Network on chip; QoS architecture; Wormhole switching; QNoC design process; QNoC",
"title": ""
},
{
"docid": "ad2e02fd3b349b2a66ac53877b82e9bb",
"text": "This paper proposes a novel approach for the evolution of artificial creatures which moves in a 3D virtual environment based on the neuroevolution of augmenting topologies (NEAT) algorithm. The NEAT algorithm is used to evolve neural networks that observe the virtual environment and respond to it, by controlling the muscle force of the creature. The genetic algorithm is used to emerge the architecture of creature based on the distance metrics for fitness evaluation. The damaged morphologies of creature are elaborated, and a crossover algorithm is used to control it. Creatures with similar morphological traits are grouped into the same species to limit the complexity of the search space. The motion of virtual creature having 2–3 limbs is recorded at three different angles to check their performance in different types of viscous mediums. The qualitative demonstration of motion of virtual creature represents that improved swimming of virtual creatures is achieved in simulating mediums with viscous drag 1–10 arbitrary unit.",
"title": ""
},
{
"docid": "2f54746f666befe19af1391f1d90aca8",
"text": "The Internet of Things has drawn lots of research attention as the growing number of devices connected to the Internet. Long Term Evolution-Advanced (LTE-A) is a promising technology for wireless communication and it's also promising for IoT. The main challenge of incorporating IoT devices into LTE-A is a large number of IoT devices attempting to access the network in a short period which will greatly reduce the network performance. In order to improve the network utilization, we adopted a hierarchy architecture using a gateway for connecting the devices to the eNB and proposed a multiclass resource allocation algorithm for LTE based IoT communication. Simulation results show that the proposed algorithm can provide good performance both on data rate and latency for different QoS applications both in saturated and unsaturated environment.",
"title": ""
},
{
"docid": "8a21ff7f3e4d73233208d5faa70eb7ce",
"text": "Achieving robustness and energy efficiency in nanoscale CMOS process technologies is made challenging due to the presence of process, temperature, and voltage variations. Traditional fault-tolerance techniques such as N-modular redundancy (NMR) employ deterministic error detection and correction, e.g., majority voter, and tend to be power hungry. This paper proposes soft NMR that nontrivially extends NMR by consciously exploiting error statistics caused by nanoscale artifacts in order to design robust and energy-efficient systems. In contrast to conventional NMR, soft NMR employs Bayesian detection techniques in the voter. Soft voter algorithms are obtained through optimization of appropriate application aware cost functions. Analysis indicates that, on average, soft NMR outperforms conventional NMR. Furthermore, unlike NMR, in many cases, soft NMR is able to generate a correct output even when all N replicas are in error. This increase in robustness is then traded-off through voltage scaling to achieve energy efficiency. The design of a discrete cosine transform (DCT) image coder is employed to demonstrate the benefits of the proposed technique. Simulations in a commercial 45 nm, 1.2 V, CMOS process show that soft NMR provides up to 10× improvement in robustness, and 35 percent power savings over conventional NMR.",
"title": ""
},
{
"docid": "097da6ee2d13e0b4b2f84a26752574f4",
"text": "Objective A sound theoretical foundation to guide practice is enhanced by the ability of nurses to critique research. This article provides a structured route to questioning the methodology of nursing research. Primary Argument Nurses may find critiquing a research paper a particularly daunting experience when faced with their first paper. Knowing what questions the nurse should be asking is perhaps difficult to determine when there may be unfamiliar research terms to grasp. Nurses may benefit from a structured approach which helps them understand the sequence of the text and the subsequent value of a research paper. Conclusion A framework is provided within this article to assist in the analysis of a research paper in a systematic, logical order. The questions presented in the framework may lead the nurse to conclusions about the strengths and weaknesses of the research methods presented in a research article. The framework does not intend to separate quantitative or qualitative paradigms but to assist the nurse in making broad observations about the nature of the research.",
"title": ""
},
{
"docid": "72ad5d0f9e6b07d4392e7a4b53bdf17f",
"text": "This paper surveys current text and speech summarization evaluation approaches. It discusses advantages and disadv ant ges of these, with the goal of identifying summarization techni ques most suitable to speech summarization. Precision/recall s hemes, as well as summary accuracy measures which incorporate weig htings based on multiple human decisions, are suggested as par ticularly suitable in evaluating speech summaries.",
"title": ""
},
{
"docid": "91059e16806c0c2b3e7b39859ba2b6a5",
"text": "Online users tend to select claims that adhere to their system of beliefs and to ignore dissenting information. Confirmation bias, indeed, plays a pivotal role in viral phenomena. Furthermore, the wide availability of content on the web fosters the aggregation of likeminded people where debates tend to enforce group polarization. Such a configuration might alter the public debate and thus the formation of the public opinion. In this paper we provide a mathematical model to study online social debates and the related polarization dynamics. We assume the basic updating rule of the Bounded Confidence Model (BCM) and we develop two variations a) the Rewire with Bounded Confidence Model (RBCM), in which discordant links are broken until convergence is reached; and b) the Unbounded Confidence Model, under which the interaction among discordant pairs of users is allowed even with a negative feedback, either with the rewiring step (RUCM) or without it (UCM). From numerical simulations we find that the new models (UCM and RUCM), unlike the BCM, are able to explain the coexistence of two stable final opinions, often observed in reality. Lastly, we present a mean field approximation of the newly introduced models.",
"title": ""
},
{
"docid": "3e52520779e75997947d9538a6513ef4",
"text": "This article presents a reproducible research workflow for amplicon-based microbiome studies in personalized medicine created using Bioconductor packages and the knitr markdown interface.We show that sometimes a multiplicity of choices and lack of consistent documentation at each stage of the sequential processing pipeline used for the analysis of microbiome data can lead to spurious results. We propose its replacement with reproducible and documented analysis using R packages dada2, knitr, and phyloseq. This workflow implements both key stages of amplicon analysis: the initial filtering and denoising steps needed to construct taxonomic feature tables from error-containing sequencing reads (dada2), and the exploratory and inferential analysis of those feature tables and associated sample metadata (phyloseq). This workow facilitates reproducible interrogation of the full set of choices required in microbiome studies. We present several examples in which we leverage existing packages for analysis in a way that allows easy sharing and modification by others, and give pointers to articles that depend on this reproducible workflow for the study of longitudinal and spatial series analyses of the vaginal microbiome in pregnancy and the oral microbiome in humans with healthy dentition and intra-oral tissues.",
"title": ""
},
{
"docid": "e6a332a8dab110262beb1fc52b91945c",
"text": "Models are crucial in the engineering design process because they can be used for both the optimization of design parameters and the prediction of performance. Thus, models can significantly reduce design, development and optimization costs. This paper proposes a novel equivalent electrical model for Darrieus-type vertical axis wind turbines (DTVAWTs). The proposed model was built from the mechanical description given by the Paraschivoiu double-multiple streamtube model and is based on the analogy between mechanical and electrical circuits. This work addresses the physical concepts and theoretical formulations underpinning the development of the model. After highlighting the working principle of the DTVAWT, the step-by-step development of the model is presented. For assessment purposes, simulations of aerodynamic characteristics and those of corresponding electrical components are performed and compared.",
"title": ""
},
{
"docid": "5a13c741e9e907a0d4d8a794c5363b0c",
"text": "Quinoa (Chenopodium quinoa Willd.), which is considered a pseudocereal or pseudograin, has been recognized as a complete food due to its protein quality. It has remarkable nutritional properties; not only from its protein content (15%) but also from its great amino acid balance. It is an important source of minerals and vitamins, and has also been found to contain compounds like polyphenols, phytosterols, and flavonoids with possible nutraceutical benefits. It has some functional (technological) properties like solubility, water-holding capacity (WHC), gelation, emulsifying, and foaming that allow diversified uses. Besides, it has been considered an oil crop, with an interesting proportion of omega-6 and a notable vitamin E content. Quinoa starch has physicochemical properties (such as viscosity, freeze stability) which give it functional properties with novel uses. Quinoa has a high nutritional value and has recently been used as a novel functional food because of all these properties; it is a promising alternative cultivar.",
"title": ""
},
{
"docid": "3486bfa46d0e43317f32b1fb51309715",
"text": "Every arti cial-intelligence research project needs a working de nition of \\intelligence\", on which the deepest goals and assumptions of the research are based. In the project described in the following chapters, \\intelligence\" is de ned as the capacity to adapt under insu cient knowledge and resources. Concretely, an intelligent system should be nite and open, and should work in real time. If these criteria are used in the design of a reasoning system, the result is NARS, a non-axiomatic reasoning system. NARS uses a term-oriented formal language, characterized by the use of subject{ predicate sentences. The language has an experience-grounded semantics, according to which the truth value of a judgment is determined by previous experience, and the meaning of a term is determined by its relations with other terms. Several di erent types of uncertainty, such as randomness, fuzziness, and ignorance, can be represented in the language in a single way. The inference rules of NARS are based on three inheritance relations between terms. With di erent combinations of premises, revision, deduction, induction, abduction, exempli cation, comparison, and analogy can all be carried out in a uniform format, the major di erence between these types of inference being that di erent functions are used to calculate the truth value of the conclusion from the truth values of the premises. viii ix Since it has insu cient space{time resources, the system needs to distribute them among its tasks very carefully, and to dynamically adjust the distribution as the situation changes. This leads to a \\controlled concurrency\" control mechanism, and a \\bag-based\" memory organization. A recent implementation of the NARS model, with examples, is discussed. The system has many interesting properties that are shared by human cognition, but are absent from conventional computational models of reasoning. This research sheds light on several notions in arti cial intelligence and cognitive science, including symbol-grounding, induction, categorization, logic, and computation. These are discussed to show the implications of the new theory of intelligence. Finally, the major results of the research are summarized, a preliminary evaluation of the working de nition of intelligence is given, and the limitations and future extensions of the research are discussed.",
"title": ""
},
{
"docid": "b231da0ff32e823bb245328929bdebf3",
"text": "BACKGROUND\nCultivated bananas and plantains are giant herbaceous plants within the genus Musa. They are both sterile and parthenocarpic so the fruit develops without seed. The cultivated hybrids and species are mostly triploid (2n = 3x = 33; a few are diploid or tetraploid), and most have been propagated from mutants found in the wild. With a production of 100 million tons annually, banana is a staple food across the Asian, African and American tropics, with the 15 % that is exported being important to many economies.\n\n\nSCOPE\nThere are well over a thousand domesticated Musa cultivars and their genetic diversity is high, indicating multiple origins from different wild hybrids between two principle ancestral species. However, the difficulty of genetics and sterility of the crop has meant that the development of new varieties through hybridization, mutation or transformation was not very successful in the 20th century. Knowledge of structural and functional genomics and genes, reproductive physiology, cytogenetics, and comparative genomics with rice, Arabidopsis and other model species has increased our understanding of Musa and its diversity enormously.\n\n\nCONCLUSIONS\nThere are major challenges to banana production from virulent diseases, abiotic stresses and new demands for sustainability, quality, transport and yield. Within the genepool of cultivars and wild species there are genetic resistances to many stresses. Genomic approaches are now rapidly advancing in Musa and have the prospect of helping enable banana to maintain and increase its importance as a staple food and cash crop through integration of genetical, evolutionary and structural data, allowing targeted breeding, transformation and efficient use of Musa biodiversity in the future.",
"title": ""
},
{
"docid": "0bb2798c21d9f7420ea47c717578e94d",
"text": "Blockchain has drawn attention as the next-generation financial technology due to its security that suits the informatization era. In particular, it provides security through the authentication of peers that share virtual cash, encryption, and the generation of hash value. According to the global financial industry, the market for security-based blockchain technology is expected to grow to about USD 20 billion by 2020. In addition, blockchain can be applied beyond the Internet of Things (IoT) environment; its applications are expected to expand. Cloud computing has been dramatically adopted in all IT environments for its efficiency and availability. In this paper, we discuss the concept of blockchain technology and its hot research trends. In addition, we will study how to adapt blockchain security to cloud computing and its secure solutions in detail.",
"title": ""
},
{
"docid": "0da299fb53db5980a10e0ae8699d2209",
"text": "Modern heuristics or metaheuristics are optimization algorithms that have been increasingly used during the last decades to support complex decision-making in a number of fields, such as logistics and transportation, telecommunication networks, bioinformatics, finance, and the like. The continuous increase in computing power, together with advancements in metaheuristics frameworks and parallelization strategies, are empowering these types of algorithms as one of the best alternatives to solve rich and real-life combinatorial optimization problems that arise in a number of financial and banking activities. This article reviews some of the works related to the use of metaheuristics in solving both classical and emergent problems in the finance arena. A non-exhaustive list of examples includes rich portfolio optimization, index tracking, enhanced indexation, credit risk, stock investments, financial project scheduling, option pricing, feature selection, bankruptcy and financial distress prediction, and credit risk assessment. This article also discusses some open opportunities for researchers in the field, and forecast the evolution of metaheuristics to include real-life uncertainty conditions into the optimization problems being considered.",
"title": ""
},
{
"docid": "5542f4693a4251edcf995e7608fbda56",
"text": "This paper investigates the antecedents and consequences of customer loyalty in an online business-to-consumer (B2C) context. We identify eight factors (the 8Cs—customization, contact interactivity, care, community, convenience, cultivation, choice, and character) that potentially impact e-loyalty and develop scales to measure these factors. Data collected from 1,211 online customers demonstrate that all these factors, except convenience, impact e-loyalty. The data also reveal that e-loyalty has an impact on two customer-related outcomes: word-ofmouth promotion and willingness to pay more. © 2002 by New York University. All rights reserved.",
"title": ""
},
{
"docid": "c772bc43f2b8c76aa3e096405cd1b824",
"text": "Application programmers increasingly prefer distributed storage systems with strong consistency and distributed transactions (e.g., Google's Spanner) for their strong guarantees and ease of use. Unfortunately, existing transactional storage systems are expensive to use -- in part because they require costly replication protocols, like Paxos, for fault tolerance. In this paper, we present a new approach that makes transactional storage systems more affordable: we eliminate consistency from the replication protocol while still providing distributed transactions with strong consistency to applications.\n We present TAPIR -- the Transactional Application Protocol for Inconsistent Replication -- the first transaction protocol to use a novel replication protocol, called inconsistent replication, that provides fault tolerance without consistency. By enforcing strong consistency only in the transaction protocol, TAPIR can commit transactions in a single round-trip and order distributed transactions without centralized coordination. We demonstrate the use of TAPIR in a transactional key-value store, TAPIR-KV. Compared to conventional systems, TAPIR-KV provides better latency and throughput.",
"title": ""
},
{
"docid": "d15072fd8776d17e8a3b8b89af5fed08",
"text": "PsV: psoriasis vulgaris INTRODUCTION Pityriasis amiantacea is a rare clinical condition characterized by masses of waxy and sticky scales that adhere to the scalp and tenaciously attach to hair bundles. Pityriasis amiantacea can be associated with psoriasis vulgaris (PsV).We examined a patient with pityriasis amiantacea caused by PsV who also had keratotic horns on the scalp, histopathologically fibrokeratomas. To the best of our knowledge, this is the first case of scalp fibrokeratoma stimulated by pityriasis amiantacea and PsV.",
"title": ""
},
{
"docid": "7fc35d2bb27fb35b5585aad8601a0cbd",
"text": "We introduce Anita: a flexible and intelligent Text Adaptation tool for web content that provides Text Simplification and Text Enhancement modules. Anita’s simplification module features a state-of-the-art system that adapts texts according to the needs of individual users, and its enhancement module allows the user to search for a word’s definitions, synonyms, translations, and visual cues through related images. These utilities are brought together in an easy-to-use interface of a freely available web browser extension.",
"title": ""
},
{
"docid": "fc164dc2d55cec2867a99436d37962a1",
"text": "We address the text-to-text generation problem of sentence-level paraphrasing — a phenomenon distinct from and more difficult than wordor phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems.",
"title": ""
}
] | scidocsrr |
4d14f8c90632d703b3564aee1ae15fcc | Disassembling gamification: the effects of points and meaning on user motivation and performance | [
{
"docid": "fd6b7a0e915a32fe172a757b5a08e5ef",
"text": "More Americans now play video games than go to the movies (NPD Group, 2009). The meteoric rise in popularity of video games highlights the need for research approaches that can deepen our scientific understanding of video game engagement. This article advances a theory-based motivational model for examining and evaluating the ways by which video game engagement shapes psychological processes and influences well-being. Rooted in self-determination theory (Deci & Ryan, 2000; Ryan & Deci, 2000a), our approach suggests that both the appeal and well-being effects of video games are based in their potential to satisfy basic psychological needs for competence, autonomy, and relatedness. We review recent empirical evidence applying this perspective to a number of topics including need satisfaction in games and short-term well-being, the motivational appeal of violent game content, motivational sources of postplay aggression, the antecedents and consequences of disordered patterns of game engagement, and the determinants and effects of immersion. Implications of this model for the future study of game motivation and the use of video games in interventions are discussed.",
"title": ""
},
{
"docid": "1a2afe6610c82c512a94e16ff42f6a27",
"text": "We conduct a natural field experiment that explores the relationship between the “meaningfulness” of a task and people’s willingness to work. Our study uses workers from Amazon’s Mechanical Turk (MTurk), an online marketplace for task-based work. All participants are given an identical task of labeling medical images. However, the task is presented differently depending on treatment. Subjects assigned to the meaningful treatment are told they would be helping researchers label tumor cells, whereas subjects in the zero-context treatment are not told the purpose of their task and only told that they would be labeling “objects of interest”. Our experimental design specifically hires US and Indian workers in order to test for heterogeneous effects. We find that US, but not Indian, workers are induced to work at a higher proportion when given cues that their task was meaningful. However, conditional on working, whether a task was framed as meaningful does not induce greater or higher quality output in either the US or in India.",
"title": ""
},
{
"docid": "f1c00253a57236ead67b013e7ce94a5e",
"text": "A meta-analysis of 128 studies examined the effects of extrinsic rewards on intrinsic motivation. As predicted, engagement-contingent, completion-contingent, and performance-contingent rewards significantly undermined free-choice intrinsic motivation (d = -0.40, -0.36, and -0.28, respectively), as did all rewards, all tangible rewards, and all expected rewards. Engagement-contingent and completion-contingent rewards also significantly undermined self-reported interest (d = -0.15, and -0.17), as did all tangible rewards and all expected rewards. Positive feedback enhanced both free-choice behavior (d = 0.33) and self-reported interest (d = 0.31). Tangible rewards tended to be more detrimental for children than college students, and verbal rewards tended to be less enhancing for children than college students. The authors review 4 previous meta-analyses of this literature and detail how this study's methods, analyses, and results differed from the previous ones.",
"title": ""
}
] | [
{
"docid": "95b48a41d796aec0a1f23b3fc0879ed9",
"text": "Action anticipation aims to detect an action before it happens. Many real world applications in robotics and surveillance are related to this predictive capability. Current methods address this problem by first anticipating visual representations of future frames and then categorizing the anticipated representations to actions. However, anticipation is based on a single past frame’s representation, which ignores the history trend. Besides, it can only anticipate a fixed future time. We propose a Reinforced Encoder-Decoder (RED) network for action anticipation. RED takes multiple history representations as input and learns to anticipate a sequence of future representations. One salient aspect of RED is that a reinforcement module is adopted to provide sequence-level supervision; the reward function is designed to encourage the system to make correct predictions as early as possible. We test RED on TVSeries, THUMOS-14 and TV-Human-Interaction datasets for action anticipation and achieve state-of-the-art performance on all datasets.",
"title": ""
},
{
"docid": "59bc11cd78549304225ab630ef0f5701",
"text": "This study presents and examines SamEx, a mobile learning system used by 305 students in formal and informal learning in a primary school in Singapore. Students use SamEx in situ to capture media such as pictures, video clips and audio recordings, comment on them, and share them with their peers. In this paper we report on the experiences of students in using the application throughout a one-year period with a focus on self-directedness, quality of contributions, and answers to contextual question prompts. We examine how the usage of tools such as SamEx predicts students' science examination results, discuss the role of badges as an extrinsic motivational tool, and explore how individual and collaborative learning emerge. Our research shows that the quantity and quality of contributions provided by the students in SamEx predict the end-year assessment score. With respect to specific system features, contextual answers given by the students and the overall likes received by students are also correlated with the end-year assessment score. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "335912dc59a4043dee983ec40434273f",
"text": "People increasingly use smartwatches in tandem with other devices such as smartphones, laptops or tablets. This allows for novel cross-device applications that use the watch as both input device and output display. However, despite the increasing availability of smartwatches, prototyping cross-device watch-centric applications remains a challenging task. Developers are limited in the applications they can explore as available toolkits provide only limited access to different types of input sensors for cross-device interactions. To address this problem, we introduce WatchConnect, a toolkit for rapidly prototyping cross-device applications and interaction techniques with smartwatches. The toolkit provides developers with (i) an extendable hardware platform that emulates a smartwatch, (ii) a UI framework that integrates with an existing UI builder, and (iii) a rich set of input and output events using a range of built-in sensor mappings. We demonstrate the versatility and design space of the toolkit with five interaction techniques and applications.",
"title": ""
},
{
"docid": "41c35407c55878910f5dfc2dfe083955",
"text": "This work deals with several aspects concerning the formal verification of SN P systems and the computing power of some variants. A methodology based on the information given by the transition diagram associated with an SN P system is presented. The analysis of the diagram cycles codifies invariants formulae which enable us to establish the soundness and completeness of the system with respect to the problem it tries to resolve. We also study the universality of asynchronous and sequential SN P systems and the capability these models have to generate certain classes of languages. Further, by making a slight modification to the standard SN P systems, we introduce a new variant of SN P systems with a special I/O mode, called SN P modules, and study their computing power. It is demonstrated that, as string language acceptors and transducers, SN P modules can simulate several types of computing devices such as finite automata, a-finite transducers, and systolic trellis automata.",
"title": ""
},
{
"docid": "57a974ecdbc1911f161e17f8fad7c173",
"text": "This paper reviews the technology trends of BCD (Bipolar-CMOS-DMOS) technology in terms of voltage capability, switching speed of power transistor, and high integration of logic CMOS for SoC (System-on-Chip) solution requiring high-voltage devices. Recent trends such like modularity of the process, power metal routing, and high-density NVM (Non-Volatile Memory) are also discussed.",
"title": ""
},
{
"docid": "32ed0f6d7dd3b5cc3c1613685eb76de7",
"text": "Images captured in low-light conditions usually suffer from very low contrast, which increases the difficulty of subsequent computer vision tasks in a great extent. In this paper, a low-light image enhancement model based on convolutional neural network and Retinex theory is proposed. Firstly, we show that multi-scale Retinex is equivalent to a feedforward convolutional neural network with different Gaussian convolution kernels. Motivated by this fact, we consider a Convolutional Neural Network(MSR-net) that directly learns an end-to-end mapping between dark and bright images. Different fundamentally from existing approaches, low-light image enhancement in this paper is regarded as a machine learning problem. In this model, most of the parameters are optimized by back-propagation, while the parameters of traditional models depend on the artificial setting. Experiments on a number of challenging images reveal the advantages of our method in comparison with other state-of-the-art methods from the qualitative and quantitative perspective.",
"title": ""
},
{
"docid": "0aab03fe46d4f04b2bb8d10fa32ce049",
"text": "Nowadays, World Wide Web (WWW) surfing is becoming a risky task with the Web becoming rich in all sorts of attack. Websites are the main source of many scams, phishing attacks, identity theft, SPAM commerce and malware. Nevertheless, browsers, blacklists, and popup blockers are not enough to protect users. According to this, fast and accurate systems still to be needed with the ability to detect new malicious content. By taking into consideration, researchers have developed various Malicious Website detection techniques in recent years. Analyzing those works available in the literature can provide good knowledge on this topic and also, it will lead to finding the recent problems in Malicious Website detection. Accordingly, I have planned to do a comprehensive study with the literature of Malicious Website detection techniques. To categorize the techniques, all articles that had the word “malicious detection” in its title or as its keyword published between January 2003 to august 2016, is first selected from the scientific journals: IEEE, Elsevier, Springer and international journals. After the collection of research articles, we discuss every research paper. In addition, this study gives an elaborate idea about malicious detection.",
"title": ""
},
{
"docid": "6eb7bb6f623475f7ca92025fd00dbc27",
"text": "Support vector machines (SVMs) have been recognized as one o f th most successful classification methods for many applications including text classific ation. Even though the learning ability and computational complexity of training in support vector machines may be independent of the dimension of the feature space, reducing computational com plexity is an essential issue to efficiently handle a large number of terms in practical applicat ions of text classification. In this paper, we adopt novel dimension reduction methods to reduce the dim nsion of the document vectors dramatically. We also introduce decision functions for the centroid-based classification algorithm and support vector classifiers to handle the classification p r blem where a document may belong to multiple classes. Our substantial experimental results sh ow t at with several dimension reduction methods that are designed particularly for clustered data, higher efficiency for both training and testing can be achieved without sacrificing prediction accu ra y of text classification even when the dimension of the input space is significantly reduced.",
"title": ""
},
{
"docid": "6cb46b57b657a90fb5b4b91504cdfd8f",
"text": "One of the themes of Emotion and Decision-Making Explained (Rolls, 2014c) is that there are multiple routes to emotionrelated responses, with some illustrated in Fig. 1. Brain systems involved in decoding stimuli in terms of whether they are instrumental reinforcers so that goal directed actions may be performed to obtain or avoid the stimuli are emphasized as being important for emotional states, for an intervening state may be needed to bridge the time gap between the decoding of a goal-directed stimulus, and the actions that may need to be set into train and directed to obtain or avoid the emotionrelated stimulus. In contrast, when unconditioned or classically conditioned responses such as autonomic responses, freezing, turning away etc. are required, there is no need for intervening states such as emotional states. These points are covered in Chapters 2e4 and 10 of the book. Ono and Nishijo (2014) raise the issue of the extent to which subcortical pathways are involved in the elicitation of some of these emotion-related responses. They describe interesting research that pulvinar neurons in macaques may respond to snakes, and may provide a route that does not require cortical processing for some probably innately specified visual stimuli to produce responses. With respect to Fig. 1, the pathway is that some of the inputs labeled as primary reinforcers may reach brain regions including the amygdala by a subcortical route. LeDoux (2012) provides evidence in the same direction, in his case involving a ‘low road’ for auditory stimuli such as tones (which do not required cortical processing) to reach, via a subcortical pathway, the amygdala, where classically conditioned e.g., freezing and autonomic responses may be learned. Consistently, there is evidence (Chapter 4) that humans with damage to the primary visual cortex who describe themselves as blind do nevertheless show some responses to stimuli such as a face expression (de Gelder, Vroomen, Pourtois, & Weiskrantz, 1999; Tamietto et al., 2009; Tamietto & de Gelder, 2010). I agree that the elicitation of unconditioned and conditioned responses to these particular types of stimuli (LeDoux, 2014) is of interest (Rolls, 2014a). However, in Emotion and Decision-Making Explained, I emphasize that there aremassive cortical inputs to structures involved in emotion such as the amygdala and orbitofrontal cortex, and that neurons in both structures can have viewinvariant responses to visual stimuli including faces which specify face identity, and can have responses that are selective for particular emotional expressions (Leonard, Rolls, Wilson, & Baylis, 1985; Rolls, 1984, 2007, 2011, 2012; Rolls, Critchley, Browning, & Inoue, 2006) which reflect the neuronal responses found in the temporal cortical and related visual areas, as we discovered (Perrett, Rolls, & Caan, 1982; Rolls, 2007, 2008a, 2011, 2012; Sanghera, Rolls, & Roper-Hall, 1979). View invariant representations are important for",
"title": ""
},
{
"docid": "7e6182248b3c3d7dedce16f8bfa58b28",
"text": "In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches.",
"title": ""
},
{
"docid": "e2fd61cef4ec32c79b059552e7820092",
"text": "This paper describes a general framework for learning Higher-Order Network Embeddings (HONE) from graph data based on network motifs. The HONE framework is highly expressive and flexible with many interchangeable components. The experimental results demonstrate the effectiveness of learning higher-order network representations. In all cases, HONE outperforms recent embedding methods that are unable to capture higher-order structures with a mean relative gain in AUC of 19% (and up to 75% gain) across a wide variety of networks and embedding methods.",
"title": ""
},
{
"docid": "dfa1269878b384b24c7ba6aea6a11373",
"text": "Transfer printing represents a set of techniques for deterministic assembly of micro-and nanomaterials into spatially organized, functional arrangements with two and three-dimensional layouts. Such processes provide versatile routes not only to test structures and vehicles for scientific studies but also to high-performance, heterogeneously integrated functional systems, including those in flexible electronics, three-dimensional and/or curvilinear optoelectronics, and bio-integrated sensing and therapeutic devices. This article summarizes recent advances in a variety of transfer printing techniques, ranging from the mechanics and materials aspects that govern their operation to engineering features of their use in systems with varying levels of complexity. A concluding section presents perspectives on opportunities for basic and applied research, and on emerging use of these methods in high throughput, industrial-scale manufacturing.",
"title": ""
},
{
"docid": "b838cd18098a4824e8ae16d55c297cfb",
"text": "While deep learning has had significant successes in computer vision thanks to the abundance of visual data, collecting sufficiently large real-world datasets for robot learning can be costly. To increase the practicality of these techniques on real robots, we propose a modular deep reinforcement learning method capable of transferring models trained in simulation to a real-world robotic task. We introduce a bottleneck between perception and control, enabling the networks to be trained independently, but then merged and fine-tuned in an end-to-end manner to further improve hand-eye coordination. On a canonical, planar visually-guided robot reaching task a fine-tuned accuracy of 1.6 pixels is achieved, a significant improvement over naive transfer (17.5 pixels), showing the potential for more complicated and broader applications. Our method provides a technique for more efficient learning and transfer of visuomotor policies for real robotic systems without relying entirely on large real-world robot datasets.",
"title": ""
},
{
"docid": "b79b3497ae4987e00129eab9745e1398",
"text": "The automata-theoretic approach to linear temporal logic uses the theory of automata as a unifying paradigm for program specification, verification, and synthesis. Both programs and specifications are in essence descriptions of computations. These computations can be viewed as words over some alphabet. Thus,programs and specificationscan be viewed as descriptions of languagesover some alphabet. The automata-theoretic perspective considers the relationships between programs and their specifications as relationships between languages.By translating programs and specifications to automata, questions about programs and their specifications can be reduced to questions about automata. More specifically, questions such as satisfiability of specifications and correctness of programs with respect to their specifications can be reduced to questions such as nonemptiness and containment of automata. Unlike classical automata theory, which focused on automata on finite words, the applications to program specification, verification, and synthesis, use automata on infinite words, since the computations in which we are interested are typically infinite. This paper provides an introduction to the theory of automata on infinite words and demonstrates its applications to program specification, verification, and synthesis.",
"title": ""
},
{
"docid": "51c5dbc32d37777614936a77a10e42bc",
"text": "During the last decade, the applications of signal processing have drastically improved with deep learning. However areas of affecting computing such as emotional speech synthesis or emotion recognition from spoken language remains challenging. In this paper, we investigate the use of a neural Automatic Speech Recognition (ASR) as a feature extractor for emotion recognition. We show that these features outperform the eGeMAPS feature set to predict the valence and arousal emotional dimensions, which means that the audio-to-text mapping learned by the ASR system contains information related to the emotional dimensions in spontaneous speech. We also examine the relationship between first layers (closer to speech) and last layers (closer to text) of the ASR and valence/arousal.",
"title": ""
},
{
"docid": "83728a9b746c7d3c3ea1e89ef01f9020",
"text": "This paper presents the design of the robot AILA, a mobile dual-arm robot system developed as a research platform for investigating aspects of the currently booming multidisciplinary area of mobile manipulation. The robot integrates and allows in a single platform to perform research in most of the areas involved in autonomous robotics: navigation, mobile and dual-arm manipulation planning, active compliance and force control strategies, object recognition, scene representation, and semantic perception. AILA has 32 degrees of freedom, including 7-DOF arms, 4-DOF torso, 2-DOF head, and a mobile base equipped with six wheels, each of them with two degrees of freedom. The primary design goal was to achieve a lightweight arm construction with a payload-to-weight ratio greater than one. Besides, an adjustable body should sustain the dual-arm system providing an extended workspace. In addition, mobility is provided by means of a wheel-based mobile base. As a result, AILA's arms can lift 8kg and weigh 5.5kg, thus achieving a payload-to-weight ratio of 1.45. The paper will provide an overview of the design, especially in the mechatronics area, as well as of its realization, the sensors incorporated in the system, and its control software.",
"title": ""
},
{
"docid": "9d2583618e9e00333d044ac53da65ceb",
"text": "The phosphor deposits of the β-sialon:Eu2+ mixed with various amounts (0-1 g) of the SnO₂ nanoparticles were fabricated by the electrophoretic deposition (EPD) process. The mixed SnO₂ nanoparticles was observed to cover onto the particle surfaces of the β-sialon:Eu2+ as well as fill in the voids among the phosphor particles. The external and internal quantum efficiencies (QEs) of the prepared deposits were found to be dependent on the mixing amount of the SnO₂: by comparing with the deposit without any mixing (48% internal and 38% external QEs), after mixing the SnO₂ nanoparticles, the both QEs were improved to 55% internal and 43% external QEs at small mixing amount (0.05 g); whereas, with increasing the mixing amount to 0.1 and 1 g, they were reduced to 36% and 29% for the 0.1 g addition and 15% and 12% l QEs for the 1 g addition. More interestingly, tunable color appearances of the deposits prepared by the EPD process were achieved, from yellow green to blue, by varying the addition amount of the SnO₂, enabling it as an alternative technique instead of altering the voltage and depositing time for the color appearance controllability.",
"title": ""
},
{
"docid": "a4dea5e491657e1ba042219401ebcf39",
"text": "Beam scanning arrays typically suffer from scan loss; an increasing degradation in gain as the beam is scanned from broadside toward the horizon in any given scan plane. Here, a metasurface is presented that reduces the effects of scan loss for a leaky-wave antenna (LWA). The metasurface is simple, being composed of an ultrathin sheet of subwavelength split-ring resonators. The leaky-wave structure is balanced, scanning from the forward region, through broadside, and into the backward region, and designed to scan in the magnetic plane. The metasurface is effectively invisible at broadside, where balanced LWAs are most sensitive to external loading. It is shown that the introduction of the metasurface results in increased directivity, and hence, gain, as the beam is scanned off broadside, having an increasing effect as the beam is scanned to the horizon. Simulations show that the metasurface improves the effective aperture distribution at higher scan angles, resulting in a more directive main beam, while having a negligible impact on cross-polarization gain. Experimental validation results show that the scan range of the antenna is increased from $-39 {^{\\circ }} \\leq \\theta \\leq +32 {^{\\circ }}$ to $-64 {^{\\circ }} \\leq \\theta \\leq +70 {^{\\circ }}$ , when loaded with the metasurface, demonstrating a flattened gain profile over a 135° range centered about broadside. Moreover, this scan range occurs over a frequency band spanning from 9 to 15.5 GHz, demonstrating a relative bandwidth of 53% for the metasurface.",
"title": ""
},
{
"docid": "8624bdce9b571418f88f4adb52984462",
"text": "Video-based traffic flow monitoring is a fast emerging field based on the continuous development of computer vision. A survey of the state-of-the-art video processing techniques in traffic flow monitoring is presented in this paper. Firstly, vehicle detection is the first step of video processing and detection methods are classified into background modeling based methods and non-background modeling based methods. In particular, nighttime detection is more challenging due to bad illumination and sensitivity to light. Then tracking techniques, including 3D model-based, region-based, active contour-based and feature-based tracking, are presented. A variety of algorithms including MeanShift algorithm, Kalman Filter and Particle Filter are applied in tracking process. In addition, shadow detection and vehicles occlusion bring much trouble into vehicle detection, tracking and so on. Based on the aforementioned video processing techniques, discussion on behavior understanding including traffic incident detection is carried out. Finally, key challenges in traffic flow monitoring are discussed.",
"title": ""
},
{
"docid": "42a6b6ac31383046cf11bcf16da3207e",
"text": "Epigenome-wide association studies represent one means of applying genome-wide assays to identify molecular events that could be associated with human phenotypes. The epigenome is especially intriguing as a target for study, as epigenetic regulatory processes are, by definition, heritable from parent to daughter cells and are found to have transcriptional regulatory properties. As such, the epigenome is an attractive candidate for mediating long-term responses to cellular stimuli, such as environmental effects modifying disease risk. Such epigenomic studies represent a broader category of disease -omics, which suffer from multiple problems in design and execution that severely limit their interpretability. Here we define many of the problems with current epigenomic studies and propose solutions that can be applied to allow this and other disease -omics studies to achieve their potential for generating valuable insights.",
"title": ""
}
] | scidocsrr |
ec7e6e749851018e569eb28bb6ac9dab | Adaptability of Neural Networks on Varying Granularity IR Tasks | [
{
"docid": "79ad27cffbbcbe3a49124abd82c6e477",
"text": "In this paper we address the following problem in web document and information retrieval (IR): How can we use long-term context information to gain better IR performance? Unlike common IR methods that use bag of words representation for queries and documents, we treat them as a sequence of words and use long short term memory (LSTM) to capture contextual dependencies. To the best of our knowledge, this is the first time that LSTM is applied to information retrieval tasks. Unlike training traditional LSTMs, the training strategy is different due to the special nature of information retrieval problem. Experimental evaluation on an IR task derived from the Bing web search demonstrates the ability of the proposed method in addressing both lexical mismatch and long-term context modelling issues, thereby, significantly outperforming existing state of the art methods for web document retrieval task.",
"title": ""
},
{
"docid": "f7a21cf633a5b0d76d7ae09e6d3e8822",
"text": "We apply a general deep learning framework to address the non-factoid question answering task. Our approach does not rely on any linguistic tools and can be applied to different languages or domains. Various architectures are presented and compared. We create and release a QA corpus and setup a new QA task in the insurance domain. Experimental results demonstrate superior performance compared to the baseline methods and various technologies give further improvements. For this highly challenging task, the top-1 accuracy can reach up to 65.3% on a test set, which indicates a great potential for practical use.",
"title": ""
},
{
"docid": "121daac04555fd294eef0af9d0fb2185",
"text": "In this paper, we apply a general deep learning (DL) framework for the answer selection task, which does not depend on manually defined features or linguistic tools. The basic framework is to build the embeddings of questions and answers based on bidirectional long short-term memory (biLSTM) models, and measure their closeness by cosine similarity. We further extend this basic model in two directions. One direction is to define a more composite representation for questions and answers by combining convolutional neural network with the basic framework. The other direction is to utilize a simple but efficient attention mechanism in order to generate the answer representation according to the question context. Several variations of models are provided. The models are examined by two datasets, including TREC-QA and InsuranceQA. Experimental results demonstrate that the proposed models substantially outperform several strong baselines.",
"title": ""
},
{
"docid": "1a6ece40fa87e787f218902eba9b89f7",
"text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.",
"title": ""
}
] | [
{
"docid": "e4ac08754a0c31881364f0603f24b0df",
"text": "Software Defined Network (SDN) facilitates network programmers with easier network monitoring, identification of anomalies, instant implementation of changes, central control to the whole network in a cost effective and efficient manner. These features could be beneficial for securing and maintaining entire network. Being a promising network paradigm, it draws a lot of attention from researchers in security domain. But it's logically centralized control tends to single point of failure, increasing the risk of attacks such as Distributed Denial of Service (DDoS) attack. In this paper, we have tried to identify various possibilities of DDoS attacks in SDN environment with the help of attack tree and an attack model. Further, an attempt to analyze the impact of various traditional DDoS attacks on SDN components is done. Such analysis helps in identifying the type of DDoS attacks that impose bigger threat on SDN architecture and also the features that could play important role in identification of these attacks are deduced.",
"title": ""
},
{
"docid": "486c00c39bee410113dc2caf7e98a6bc",
"text": "We investigated teacher versus student seat selection in the context of group and individual seating arrangements. Disruptive behavior during group seating occurred at twice the rate when students chose their seats than when the teacher chose. During individual seating, disruptive behavior occurred more than three times as often when the students chose their seats. The results are discussed in relation to choice and the matching law.",
"title": ""
},
{
"docid": "6bc942f7f78c8549d60cc4be5e0b467a",
"text": "In this study, we propose a novel, lightweight approach to real-time detection of vehicles using parts at intersections. Intersections feature oncoming, preceding, and cross traffic, which presents challenges for vision-based vehicle detection. Ubiquitous partial occlusions further complicate the vehicle detection task, and occur when vehicles enter and leave the camera's field of view. To confront these issues, we independently detect vehicle parts using strong classifiers trained with active learning. We match part responses using a learned matching classification. The learning process for part configurations leverages user input regarding full vehicle configurations. Part configurations are evaluated using Support Vector Machine classification. We present a comparison of detection results using geometric image features and appearance-based features. The full vehicle detection by parts has been evaluated on real-world data, runs in real time, and shows promise for future work in urban driver assistance.",
"title": ""
},
{
"docid": "e1c5199830d2de7c7f8f2ae28d84090b",
"text": "Once generated, neurons are thought to permanently exit the cell cycle and become irreversibly differentiated. However, neither the precise point at which this post-mitotic state is attained nor the extent of its irreversibility is clearly defined. Here we report that newly born neurons from the upper layers of the mouse cortex, despite initiating axon and dendrite elongation, continue to drive gene expression from the neural progenitor tubulin α1 promoter (Tα1p). These observations suggest an ambiguous post-mitotic neuronal state. Whole transcriptome analysis of sorted upper cortical neurons further revealed that neurons continue to express genes related to cell cycle progression long after mitotic exit until at least post-natal day 3 (P3). These genes are however down-regulated thereafter, associated with a concomitant up-regulation of tumor suppressors at P5. Interestingly, newly born neurons located in the cortical plate (CP) at embryonic day 18-19 (E18-E19) and P3 challenged with calcium influx are found in S/G2/M phases of the cell cycle, and still able to undergo division at E18-E19 but not at P3. At P5 however, calcium influx becomes neurotoxic and leads instead to neuronal loss. Our data delineate an unexpected flexibility of cell cycle control in early born neurons, and describe how neurons transit to a post-mitotic state.",
"title": ""
},
{
"docid": "4254ad134a2359d42dea2bcf64d6bdce",
"text": "Radio Frequency Identification (RFID) systems aim to identify objects in open environments with neither physical nor visual contact. They consist of transponders inserted into objects, of readers, and usually of a database which contains information about the objects. The key point is that authorised readers must be able to identify tags without an adversary being able to trace them. Traceability is often underestimated by advocates of the technology and sometimes exaggerated by its detractors. Whatever the true picture, this problem is a reality when it blocks the deployment of this technology and some companies, faced with being boycotted, have already abandoned its use. Using cryptographic primitives to thwart the traceability issues is an approach which has been explored for several years. However, the research carried out up to now has not provided satisfactory results as no universal formalism has been defined. In this paper, we propose an adversarial model suitable for RFID environments. We define the notions of existential and universal untraceability and we model the access to the communication channels from a set of oracles. We show that our formalisation fits the problem being considered and allows a formal analysis of the protocols in terms of traceability. We use our model on several well-known RFID protocols and we show that most of them have weaknesses and are vulnerable to traceability.",
"title": ""
},
{
"docid": "4f5272a35c9991227a6d098209de8d6c",
"text": "This is an investigation of \" Online Creativity. \" I will present a new account of the cognitive and social mechanisms underlying complex thinking of creative scientists as they work on significant problems in contemporary science. I will lay out an innovative methodology that I have developed for investigating creative and complex thinking in a real-world context. Using this method, I have discovered that there are a number of strategies that are used in contemporary science that increase the likelihood of scientists making discoveries. The findings reported in this chapter provide new insights into complex scientific thinking and will dispel many of the myths surrounding the generation of new concepts and scientific discoveries. InVivo cognition: A new way of investigating cognition There is a large background in cognitive research on thinking, reasoning and problem solving processes that form the foundation for creative cognition (see Dunbar, in press, Holyoak 1996 for recent reviews). However, to a large extent, research on reasoning has demonstrated that subjects in psychology experiments make vast numbers of thinking and reasoning errors even in the most simple problems. How is creative thought even possible if people make so many reasoning errors? One problem with research on reasoning is that the concepts and stimuli that the subjects are asked to use are often arbitrary and involve no background knowledge (cf. Dunbar, 1995; Klahr & Dunbar, 1988). I have proposed that one way of determining what reasoning errors are specific and which are general is to investigate cognition in the cognitive laboratory and the real world (Dunbar, 1995). Psychologists should conduct both InVitro and InVivo research to understand thinking. InVitro research is the standard psychological experiment where subjects are brought into the laboratory and controlled experiments are conducted. As can be seen from the research reported in this volume, this approach yields many insights into the psychological mechanisms underlying complex thinking. The use of an InVivo methodology in which online thinking and reasoning are investigated in a real-world context yields fundamental insights into the basic cognitive mechanisms underlying complex cognition and creativity. The results of InVivo cognitive research can then be used as a basis for further InVitro work in which controlled experiments are conducted. In this chapter, I will outline some of the results of my ongoing InVivo research on creative scientific thinking and relate this research back to the more common InVitro research and show that the …",
"title": ""
},
{
"docid": "4efa56d9c2c387608fe9ddfdafca0f9a",
"text": "Accurate cardinality estimates are essential for a successful query optimization. This is not only true for relational DBMSs but also for RDF stores. An RDF database consists of a set of triples and, hence, can be seen as a relational database with a single table with three attributes. This makes RDF rather special in that queries typically contain many self joins. We show that relational DBMSs are not well-prepared to perform cardinality estimation in this context. Further, there are hardly any special cardinality estimation methods for RDF databases. To overcome this lack of appropriate cardinality estimation methods, we introduce characteristic sets together with new cardinality estimation methods based upon them. We then show experimentally that the new methods are-in the RDF context-highly superior to the estimation methods employed by commercial DBMSs and by the open-source RDF store RDF-3X.",
"title": ""
},
{
"docid": "496175f20823fa42c852060cf41f5095",
"text": "Currently, the use of virtual reality (VR) is being widely applied in different fields, especially in computer science, engineering, and medicine. Concretely, the engineering applications based on VR cover approximately one half of the total number of VR resources (considering the research works published up to last year, 2016). In this paper, the capabilities of different computational software for designing VR applications in engineering education are discussed. As a result, a general flowchart is proposed as a guide for designing VR resources in any application. It is worth highlighting that, rather than this study being based on the applications used in the engineering field, the obtained results can be easily extrapolated to other knowledge areas without any loss of generality. This way, this paper can serve as a guide for creating a VR application.",
"title": ""
},
{
"docid": "32afde90b1bf577aa07135db66250b38",
"text": "We present a generic method for augmenting unsupervised query segmentation by incorporating Parts-of-Speech (POS) sequence information to detect meaningful but rare n-grams. Our initial experiments with an existing English POS tagger employing two different POS tagsets and an unsupervised POS induction technique specifically adapted for queries show that POS information can significantly improve query segmentation performance in all these cases.",
"title": ""
},
{
"docid": "d4acd79e2fdbc9b87b2dbc6ebfa2dd43",
"text": "Airbnb, an online marketplace for accommodations, has experienced a staggering growth accompanied by intense debates and scattered regulations around the world. Current discourses, however, are largely focused on opinions rather than empirical evidences. Here, we aim to bridge this gap by presenting the first large-scale measurement study on Airbnb, using a crawled data set containing 2.3 million listings, 1.3 million hosts, and 19.3 million reviews. We measure several key characteristics at the heart of the ongoing debate and the sharing economy. Among others, we find that Airbnb has reached a global yet heterogeneous coverage. The majority of its listings across many countries are entire homes, suggesting that Airbnb is actually more like a rental marketplace rather than a spare-room sharing platform. Analysis on star-ratings reveals that there is a bias toward positive ratings, amplified by a bias toward using positive words in reviews. The extent of such bias is greater than Yelp reviews, which were already shown to exhibit a positive bias. We investigate a key issue - commercial hosts who own multiple listings on Airbnb - repeatedly discussed in the current debate. We find that their existence is prevalent, they are early movers towards joining Airbnb, and their listings are disproportionately entire homes and located in the US. Our work advances the current understanding of how Airbnb is being used and may serve as an independent and empirical reference to inform the debate.",
"title": ""
},
{
"docid": "80ccc8b5f9e68b5130a24fe3519b9b62",
"text": "A MIMO antenna of size 40mm × 40mm × 1.6mm is proposed for WLAN applications. Antenna consists of four mushroom shaped Apollonian fractal planar monopoles having micro strip feed lines with edge feeding. It uses defective ground structure (DGS) to achieve good isolation. To achieve more isolation, the antenna elements are placed orthogonal to each other. Further, isolation can be increased using parasitic elements between the elements of antenna. Simulation is done to study reflection coefficient as well as coupling between input ports, directivity, peak gain, efficiency, impedance and VSWR. Results show that MIMO antenna has a bandwidth of 1.9GHZ ranging from 5 to 6.9 GHz, and mutual coupling of less than -20dB.",
"title": ""
},
{
"docid": "5a8729b6b08e79e7c27ddf779b0a5267",
"text": "Electric solid propellants are an attractive option for space propulsion because they are ignited by applied electric power only. In this work, the behavior of pulsed microthruster devices utilizing such a material is investigated. These devices are similar in function and operation to the pulsed plasma thruster, which typically uses Teflon as propellant. A Faraday probe, Langmuir triple probe, residual gas analyzer, pendulum thrust stand and high speed camera are utilized as diagnostic devices. These thrusters are made in batches, of which a few devices were tested experimentally in vacuum environments. Results indicate a plume electron temperature of about 1.7 eV, with an electron density between 10 and 10 cm. According to thermal equilibrium and adiabatic expansion calculations, these relatively hot electrons are mixed with ~2000 K neutral and ion species, forming a non-equilibrium gas. From time-of-flight analysis, this gas mixture plume has an effective velocity of 1500-1650 m/s on centerline. The ablated mass of this plume is 215 μg on average, of which an estimated 0.3% is ionized species while 45±11% is ablated at negligible relative speed. This late-time ablation occurs on a time scale three times that of the 0.5 ms pulse discharge, and does not contribute to the measured 0.21 mN-s impulse per pulse. Similar values have previously been measured in pulsed plasma thrusters. These observations indicate the electric solid propellant material in this configuration behaves similar to Teflon in an electrothermal pulsed plasma",
"title": ""
},
{
"docid": "eaf1fbcc93c2330e56335f9df14513e3",
"text": "Virtual machine placement (VMP) and energy efficiency are significant topics in cloud computing research. In this paper, evolutionary computing is applied to VMP to minimize the number of active physical servers, so as to schedule underutilized servers to save energy. Inspired by the promising performance of the ant colony system (ACS) algorithm for combinatorial problems, an ACS-based approach is developed to achieve the VMP goal. Coupled with order exchange and migration (OEM) local search techniques, the resultant algorithm is termed an OEMACS. It effectively minimizes the number of active servers used for the assignment of virtual machines (VMs) from a global optimization perspective through a novel strategy for pheromone deposition which guides the artificial ants toward promising solutions that group candidate VMs together. The OEMACS is applied to a variety of VMP problems with differing VM sizes in cloud environments of homogenous and heterogeneous servers. The results show that the OEMACS generally outperforms conventional heuristic and other evolutionary-based approaches, especially on VMP with bottleneck resource characteristics, and offers significant savings of energy and more efficient use of different resources.",
"title": ""
},
{
"docid": "3e18a760083cd3ed169ed8dae36156b9",
"text": "n engl j med 368;26 nejm.org june 27, 2013 2445 correct diagnoses as often as we think: the diagnostic failure rate is estimated to be 10 to 15%. The rate is highest among specialties in which patients are diagnostically undifferentiated, such as emergency medicine, family medicine, and internal medicine. Error in the visual specialties, such as radiology and pathology, is considerably lower, probably around 2%.1 Diagnostic error has multiple causes, but principal among them are cognitive errors. Usually, it’s not a lack of knowledge that leads to failure, but problems with the clinician’s thinking. Esoteric diagnoses are occasionally missed, but common illnesses are commonly misdiagnosed. For example, physicians know the pathophysiology of pulmonary embolus in excruciating detail, yet because its signs and symptoms are notoriously variable and overlap with those of numerous other diseases, this important diagnosis was missed a staggering 55% of the time in a series of fatal cases.2 Over the past 40 years, work by cognitive psychologists and others has pointed to the human mind’s vulnerability to cognitive biases, logical fallacies, false assumptions, and other reasoning failures. It seems that much of our everyday thinking is f lawed, and clinicians are not immune to the problem (see box). More than 100 biases affecting clinical decision making have been described, and many medical disciplines now acknowledge their pervasive influence on our thinking. Cognitive failures are best understood in the context of how our brains manage and process information. The two principal modes, automatic and controlled, are colloquially referred to as “intuitive” and “analytic”; psychologists know them as Type 1 and Type 2 processes. Various conceptualizations of the reasoning process have been proposed, but most can be incorporated into this dual-process system. This system is more than a model: it is accepted that the two processes involve different cortical mechanisms with associated neurophysiologic and neuroanatomical From Mindless to Mindful Practice — Cognitive Bias and Clinical Decision Making",
"title": ""
},
{
"docid": "51c1a1257b5223401e1465579d75bff2",
"text": "This work describes the statistical machine translation (SMT) systems of RWTH Aachen University developed for the evaluation campaign International Workshop on Spoken Language Translation (IWSLT) 2013. We participated in the English→French, English↔German, Arabic→English, Chinese→English and Slovenian↔English MT tracks and the English→French and English→German SLT tracks. We apply phrase-based and hierarchical SMT decoders, which are augmented by state-of-the-art extensions. The novel techniques we experimentally evaluate include discriminative phrase training, a continuous space language model, a hierarchical reordering model, a word class language model, domain adaptation via data selection and system combination of standard and reverse order models. By application of these methods we can show considerable improvements over the respective baseline systems.",
"title": ""
},
{
"docid": "35f439b86c07f426fd127823a45ffacf",
"text": "The paper concentrates on the fundamental coordination problem that requires a network of agents to achieve a specific but arbitrary formation shape. A new technique based on complex Laplacian is introduced to address the problems of which formation shapes specified by inter-agent relative positions can be formed and how they can be achieved with distributed control ensuring global stability. Concerning the first question, we show that all similar formations subject to only shape constraints are those that lie in the null space of a complex Laplacian satisfying certain rank condition and that a formation shape can be realized almost surely if and only if the graph modeling the inter-agent specification of the formation shape is 2-rooted. Concerning the second question, a distributed and linear control law is developed based on the complex Laplacian specifying the target formation shape, and provable existence conditions of stabilizing gains to assign the eigenvalues of the closed-loop system at desired locations are given. Moreover, we show how the formation shape control law is extended to achieve a rigid formation if a subset of knowledgable agents knowing the desired formation size scales the formation while the rest agents do not need to re-design and change their control laws.",
"title": ""
},
{
"docid": "caa10e745374970796bdd0039416a29d",
"text": "s: Feature selection methods try to find a subset of the available features to improve the application of a learning algorithm. Many methods are based on searching a feature set that optimizes some evaluation function. On the other side, feature set estimators evaluate features individually. Relief is a well known and good feature set estimator. While being usually faster feature estimators have some disadvantages. Based on Relief ideas, we propose a feature set measure that can be used to evaluate the feature sets in a search process. We show how the proposed measure can help guiding the search process, as well as selecting the most appropriate feature set. The new measure is compared with a consistency measure, and the highly reputed wrapper approach.",
"title": ""
},
{
"docid": "58d7e76a4b960e33fc7b541d04825dc9",
"text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy has emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of technologies and architecture used. This work focuses also in IoT intrinsic vulnerabilities as well as the security challenges of various layers based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published for the IoT at the time and relates it to the security conjuncture of the field and its projection to the future.",
"title": ""
},
{
"docid": "b9300a58c4b55bfb0f57b36e5054e5c6",
"text": "The problem of designing, coordinating, and managing complex systems has been central to the management and organizations literature. Recent writings have tended to offer modularity as, at least, a partial solution to this design problem. However, little attention has been paid to the problem of identifying what constitutes an appropriate modularization of a complex system. We develop a formal simulation model that allows us to carefully examine the dynamics of innovation and performance in complex systems. The model points to the trade-off between the destabilizing effects of overly refined modularization and the modest levels of search and a premature fixation on inferior designs that can result from excessive levels of integration. The analysis highlights an asymmetry in this trade-off, with excessively refined modules leading to cycling behavior and a lack of performance improvement. We discuss the implications of these arguments for product and organization design.",
"title": ""
},
{
"docid": "0a8c2b600b7392b94677d4ae9d7eae74",
"text": "We construct targeted audio adversarial examples on automatic speech recognition. Given any audio waveform, we can produce another that is over 99.9% similar, but transcribes as any phrase we choose (recognizing up to 50 characters per second of audio). We apply our white-box iterative optimization-based attack to Mozilla's implementation DeepSpeech end-to-end, and show it has a 100% success rate. The feasibility of this attack introduce a new domain to study adversarial examples.",
"title": ""
}
] | scidocsrr |
c0224b859e856875fef59a0c77f04b2f | Map-Reduce for Machine Learning on Multicore | [
{
"docid": "6b038c702a3636664a2f7d4e3dcde4ff",
"text": "This article is reprinted from the Internaional Electron Devices Meeting (1975). It discusses the complexity of integrated circuits, identifies their manufacture, production, and deployment, and addresses trends to their future deployment.",
"title": ""
}
] | [
{
"docid": "b9e4a201050b379500e5e8a2bca81025",
"text": "On the basis of a longitudinal field study of domestic communication, we report some essential constituents of the user experience of awareness of others who are distant in space or time, i.e. presence-in-absence. We discuss presence-in-absence in terms of its social (Contact) and informational (Content) facets, and the circumstances of the experience (Context). The field evaluation of a prototype, 'The Cube', designed to support presence-in-absence, threw up issues in the interrelationships between contact, content and context; issues that the designers of similar social artifacts will need to address.",
"title": ""
},
{
"docid": "bc5a3cd619be11132ea39907f732bf4c",
"text": "A burgeoning interest in the intersection of neuroscience and architecture promises to offer biologically inspired insights into the design of spaces. The goal of such interdisciplinary approaches to architecture is to motivate construction of environments that would contribute to peoples' flourishing in behavior, health, and well-being. We suggest that this nascent field of neuroarchitecture is at a pivotal point in which neuroscience and architecture are poised to extend to a neuroscience of architecture. In such a research program, architectural experiences themselves are the target of neuroscientific inquiry. Here, we draw lessons from recent developments in neuroaesthetics to suggest how neuroarchitecture might mature into an experimental science. We review the extant literature and offer an initial framework from which to contextualize such research. Finally, we outline theoretical and technical challenges that lie ahead.",
"title": ""
},
{
"docid": "2a43e164e536600ee6ceaf6a9c1af1be",
"text": "Unsupervised paraphrase acquisition has been an active research field in recent years, but its effective coverage and performance have rarely been evaluated. We propose a generic paraphrase-based approach for Relation Extraction (RE), aiming at a dual goal: obtaining an applicative evaluation scheme for paraphrase acquisition and obtaining a generic and largely unsupervised configuration for RE. We analyze the potential of our approach and evaluate an implemented prototype of it using an RE dataset. Our findings reveal a high potential for unsupervised paraphrase acquisition. We also identify the need for novel robust models for matching paraphrases in texts, which should address syntactic complexity and variability.",
"title": ""
},
{
"docid": "611b985ae194f562e459dc78f7aafdc3",
"text": "In order to understand the formation and subsequent evolution of galaxies one must first distinguish between the two main morphological classes of massive systems: spirals and early-type systems. This paper introduces a project, Galaxy Zoo, which provides visual morphological classifications for nearly one million galaxies, extracted from the Sloan Digital Sky Survey (SDSS). This achievement was made possible by inviting the general public to visually inspect and classify these galaxies via the internet. The project has obtained more than 4 × 107 individual classifications made by ∼105 participants. We discuss the motivation and strategy for this project, and detail how the classifications were performed and processed. We find that Galaxy Zoo results are consistent with those for subsets of SDSS galaxies classified by professional astronomers, thus demonstrating that our data provide a robust morphological catalogue. Obtaining morphologies by direct visual inspection avoids introducing biases associated with proxies for morphology such as colour, concentration or structural parameters. In addition, this catalogue can be used to directly compare SDSS morphologies with older data sets. The colour–magnitude diagrams for each morphological class are shown, and we illustrate how these distributions differ from those inferred using colour alone as a proxy for",
"title": ""
},
{
"docid": "8d07f52f154f81ce9dedd7c5d7e3182d",
"text": "We present a 3D face reconstruction system that takes as input either one single view or several different views. Given a facial image, we first classify the facial pose into one of five predefined poses, then detect two anchor points that are then used to detect a set of predefined facial landmarks. Based on these initial steps, for a single view we apply a warping process using a generic 3D face model to build a 3D face. For multiple views, we apply sparse bundle adjustment to reconstruct 3D landmarks which are used to deform the generic 3D face model. Experimental results on the Color FERET and CMU multi-PIE databases confirm our framework is effective in creating realistic 3D face models that can be used in many computer vision applications, such as 3D face recognition at a distance.",
"title": ""
},
{
"docid": "ac96a4c1644dfbabc1dd02878c43c966",
"text": "A labeled text corpus made up of Turkish papers' titles, abstracts and keywords is collected. The corpus includes 35 number of different disciplines, and 200 documents per subject. This study presents the text corpus' collection and content. The classification performance of Term Frequcney - Inverse Document Frequency (TF-IDF) and topic probabilities of Latent Dirichlet Allocation (LDA) features are compared for the text corpus. The text corpus is shared as open source so that it could be used for natural language processing applications with academic purposes.",
"title": ""
},
{
"docid": "242e78ed606d13502ace6d5eae00b315",
"text": "Use of information technology management framework plays a major influence on organizational success. This article focuses on the field of Internet of Things (IoT) management. In this study, a number of risks in the field of IoT is investigated, then with review of a number of COBIT5 risk management schemes, some associated strategies, objectives and roles are provided. According to the in-depth studies of this area it is expected that using the best practices of COBIT5 can be very effective, while the use of this standard considerably improve some criteria such as performance, cost and time. Finally, the paper proposes a framework which reflects the best practices and achievements in the field of IoT risk management.",
"title": ""
},
{
"docid": "e6c7d1db1e1cfaab5fdba7dd1146bcd2",
"text": "We define the object detection from imagery problem as estimating a very large but extremely sparse bounding box dependent probability distribution. Subsequently we identify a sparse distribution estimation scheme, Directed Sparse Sampling, and employ it in a single end-to-end CNN based detection model. This methodology extends and formalizes previous state-of-the-art detection models with an additional emphasis on high evaluation rates and reduced manual engineering. We introduce two novelties, a corner based region-of-interest estimator and a deconvolution based CNN model. The resulting model is scene adaptive, does not require manually defined reference bounding boxes and produces highly competitive results on MSCOCO, Pascal VOC 2007 and Pascal VOC 2012 with real-time evaluation rates. Further analysis suggests our model performs particularly well when finegrained object localization is desirable. We argue that this advantage stems from the significantly larger set of available regions-of-interest relative to other methods. Source-code is available from: https://github.com/lachlants/denet",
"title": ""
},
{
"docid": "77d80da2b0cd3e8598f9c677fc8827a9",
"text": "In this report, our approach to tackling the task of ActivityNet 2018 Kinetics-600 challenge is described in detail. Though spatial-temporal modelling methods, which adopt either such end-to-end framework as I3D [1] or two-stage frameworks (i.e., CNN+RNN), have been proposed in existing state-of-the-arts for this task, video modelling is far from being well solved. In this challenge, we propose spatial-temporal network (StNet) for better joint spatial-temporal modelling and comprehensively video understanding. Besides, given that multimodal information is contained in video source, we manage to integrate both early-fusion and later-fusion strategy of multi-modal information via our proposed improved temporal Xception network (iTXN) for video understanding. Our StNet RGB single model achieves 78.99% top-1 precision in the Kinetics-600 validation set and that of our improved temporal Xception network which integrates RGB, flow and audio modalities is up to 82.35%. After model ensemble, we achieve top-1 precision as high as 85.0% on the validation set and rank No.1 among all submissions.",
"title": ""
},
{
"docid": "e61a0ba24db737d42a730d5738583ffa",
"text": "We present a logical formalism for expressing properties of continuous time Markov chains. The semantics for such properties arise as a natural extension of previous work on discrete time Markov chains to continuous time. The major result is that the veriication problem is decidable; this is shown using results in algebraic and transcendental number theory.",
"title": ""
},
{
"docid": "c227cae0ec847a227945f1dec0b224d2",
"text": "We present a highly flexible and efficient software pipeline for programmable triangle voxelization. The pipeline, entirely written in CUDA, supports both fully conservative and thin voxelizations, multiple boolean, floating point, vector-typed render targets, user-defined vertex and fragment shaders, and a bucketing mode which can be used to generate 3D A-buffers containing the entire list of fragments belonging to each voxel. For maximum efficiency, voxelization is implemented as a sort-middle tile-based rasterizer, while the A-buffer mode, essentially performing 3D binning of triangles over uniform grids, uses a sort-last pipeline. Despite its major flexibility, the performance of our tile-based rasterizer is always competitive with and sometimes more than an order of magnitude superior to that of state-of-the-art binary voxelizers, whereas our bucketing system is up to 4 times faster than previous implementations. In both cases the results have been achieved through the use of careful load-balancing and high performance sorting primitives.",
"title": ""
},
{
"docid": "cf45599aeb22470b7922fc64394f114c",
"text": "This paper addresses the task of assigning multiple labels of fine-grained named entity (NE) types to Wikipedia articles. To address the sparseness of the input feature space, which is salient particularly in fine-grained type classification, we propose to learn article vectors (i.e. entity embeddings) from hypertext structure of Wikipedia using a Skip-gram model and incorporate them into the input feature set. To conduct large-scale practical experiments, we created a new dataset containing over 22,000 manually labeled instances. The results of our experiments show that our idea gained statistically significant improvements in classification results.",
"title": ""
},
{
"docid": "9d19d15b070faf62ecfa99d90e37b908",
"text": "Title of Thesis: SYMBOL-BASED CONTROL OF A BALL-ON-PLATE MECHANICAL SYSTEM Degree candidate: Phillip Yip Degree and year: Master of Science, 2004 Thesis directed by: Assistant Professor Dimitrios Hristu-Varsakelis Department of Mechanical Engineering Modern control systems often consist of networks of components that must share a common communication channel. Not all components of the networked control system can communicate with one another simultaneously at any given time. The “attention” that each component receives is an important factor that affects the system’s overall performance. An effective controller should ensure that sensors and actuators receive sufficient attention. This thesis describes a “ball-on-plate” dynamical system that includes a digital controller, which communicates with a pair of language-driven actuators, and an overhead camera. A control algorithm was developed to restrict the ball to a small region on the plate using a quantized set of language-based commands. The size of this containment region was analytically determined as a function of the communication constraints and other control system parameters. The effectiveness of the proposed control law was evaluated in experiments and mathematical simulations. SYMBOL-BASED CONTROL OF A BALL-ON-PLATE MECHANICAL SYSTEM by Phillip Yip Thesis submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Master of Science 2004 Advisory Commmittee: Assistant Professor Dimitrios Hristu-Varsakelis, Chair/Advisor Professor Balakumar Balachandran Professor Amr Baz c ©Copyright by Phillip T. Yip 2004 DEDICATION: To my family",
"title": ""
},
{
"docid": "3f40b9d1dfff00d8310f08df12096d63",
"text": "This paper explores a monetary policy model with habit formation for consumers, in which consumers’ utility depends in part on current consumption relative to past consumption. The empirical tests developed in the paper show that one can reject the hypothesis of no habit formation with tremendous confidence, largely because the habit formation model captures the gradual hump-shaped response of real spending to various shocks. The paper then embeds the habit consumption specification in a monetary policy model and finds that the responses of both spending and inflation to monetary policy actions are significantly improved by this modification. (JEL D12, E52, E43) Forthcoming, American Economic Review, June 2000. With the resurgence of interest in the effects of monetary policy on the macroeconomy, led by the work of the Christina D. and David H. Romer (1989), Ben S. Bernanke and Alan S. Blinder (1992), Lawrence J. Christiano, Martin S. Eichenbaum, and Charles L. Evans (1996), and others, the need for a structural model that could plausibly be used for monetary policy analysis has become evident. Of course, many extant models have been used for monetary policy analysis, but many of these are perceived as having critical shortcomings. First, some models do not incorporate explicit expectations behavior, so that changes in policy (or private) behavior could cause shifts in reduced-form parameters (i.e., the critique of Robert E. Lucas 1976). Others incorporate expectations, but derive key relationships from ad hoc behavioral assumptions, rather than from explicit optimizing problems for consumers and firms (Fuhrer and George R. Moore 1995b is an example). Explicit expectations and optimizing behavior are both desirable, other things equal, for a model of monetary analysis. First, analyzing potential improvements to monetary policy relative to historical policies requires a model that is stable across alternative policy regimes. This underlines the importance of explicit expectations formation. Second, the “optimal” in optimal monetary policy must ultimately refer to social welfare. Many have approximated social welfare with weighted averages of output and inflation variances, but one cannot know how good these approximations are without more explicit modeling of welfare. This implies that the model be closely tied to the underlying objectives of consumers and firms, hence the emphasis on optimization-based models. A critical test for whether a model reflects underlying objectives is its ability to accurately reflect the dominant dynamic interactions in the data. A number of recent papers (see, for example, Robert G. King and Alexander L. Wolman (1996), Bennett T. McCallum and Edward Nelson (1999a, 1999b); Julio R. Rotemberg and Michael Woodford (1997)) have developed models that incorporate explicit expectations, optimizing behavior, and frictions that allow monetary policy to have real effects. This paper continues in that line of research by documenting the empirical importance of a key feature of aggregate data: the “hump-shaped,” gradual response of spending and inflation to shocks. It then develops a monetary policy model that can capture this feature, as well as all of the features (e.g. the real effects of monetary policy, the persistence of inflation and output) embodied in earlier models. The key to the model’s success on the spending side is the inclusion of habit formation in the consumer’s utility function. This modification",
"title": ""
},
{
"docid": "f709802a6da7db7c71dfa67930111b04",
"text": "Generative adversarial networks (GANs) are a class of unsupervised machine learning algorithms that can produce realistic images from randomly-sampled vectors in a multi-dimensional space. Until recently, it was not possible to generate realistic high-resolution images using GANs, which has limited their applicability to medical images that contain biomarkers only detectable at native resolution. Progressive growing of GANs is an approach wherein an image generator is trained to initially synthesize low resolution synthetic images (8x8 pixels), which are then fed to a discriminator that distinguishes these synthetic images from real downsampled images. Additional convolutional layers are then iteratively introduced to produce images at twice the previous resolution until the desired resolution is reached. In this work, we demonstrate that this approach can produce realistic medical images in two different domains; fundus photographs exhibiting vascular pathology associated with retinopathy of prematurity (ROP), and multi-modal magnetic resonance images of glioma. We also show that fine-grained details associated with pathology, such as retinal vessels or tumor heterogeneity, can be preserved and enhanced by including segmentation maps as additional channels. We envisage several applications of the approach, including image augmentation and unsupervised classification of pathology.",
"title": ""
},
{
"docid": "81243e721527e74f0997d6aeb250cc23",
"text": "This paper compares the attributes of 36 slot, 33 slot and 12 slot brushless interior permanent magnet motor designs, each with an identical 10 pole interior magnet rotor. The aim of the paper is to quantify the trade-offs between alternative distributed and concentrated winding configurations taking into account aspects such as thermal performance, field weakening behaviour, acoustic noise, and efficiency. It is found that the concentrated 12 slot design gives the highest theoretical performance however significant rotor losses are found during testing and a large amount of acoustic noise and vibration is generated. The 33 slot design is found to have marginally better performance than the 36 slot but it also generates some unbalanced magnetic pull on the rotor which may lead to mechanical issues at higher speeds.",
"title": ""
},
{
"docid": "22c6ae71c708d5e2d1bc7e5e085c4842",
"text": "Head pose estimation is a fundamental task for face and social related research. Although 3D morphable model (3DMM) based methods relying on depth information usually achieve accurate results, they usually require frontal or mid-profile poses which preclude a large set of applications where such conditions can not be garanteed, like monitoring natural interactions from fixed sensors placed in the environment. A major reason is that 3DMM models usually only cover the face region. In this paper, we present a framework which combines the strengths of a 3DMM model fitted online with a prior-free reconstruction of a 3D full head model providing support for pose estimation from any viewpoint. In addition, we also proposes a symmetry regularizer for accurate 3DMM fitting under partial observations, and exploit visual tracking to address natural head dynamics with fast accelerations. Extensive experiments show that our method achieves state-of-the-art performance on the public BIWI dataset, as well as accurate and robust results on UbiPose, an annotated dataset of natural interactions that we make public and where adverse poses, occlusions or fast motions regularly occur.",
"title": ""
},
{
"docid": "31e8d60af8a1f9576d28c4c1e0a3db86",
"text": "Management of bulk sensor data is one of the challenging problems in the development of Internet of Things (IoT) applications. High volume of sensor data induces for optimal implementation of appropriate sensor data compression technique to deal with the problem of energy-efficient transmission, storage space optimization for tiny sensor devices, and cost-effective sensor analytics. The compression performance to realize significant gain in processing high volume sensor data cannot be attained by conventional lossy compression methods, which are less likely to exploit the intrinsic unique contextual characteristics of sensor data. In this paper, we propose SensCompr, a dynamic lossy compression method specific for sensor datasets and it is easily realizable with standard compression methods. Senscompr leverages robust statistical and information theoretic techniques and does not require specific physical modeling. It is an information-centric approach that exhaustively analyzes the inherent properties of sensor data for extracting the embedded useful information content and accordingly adapts the parameters of compression scheme to maximize compression gain while optimizing information loss. Senscompr is successfully applied to compress large sets of heterogeneous real sensor datasets like ECG, EEG, smart meter, accelerometer. To the best of our knowledge, for the first time 'sensor information content'-centric dynamic compression technique is proposed and implemented particularly for IoT-applications and this method is independent to sensor data types.",
"title": ""
},
{
"docid": "fbebf8aaeadbd4816a669bd0b23e0e2b",
"text": "In traditional cloud storage systems, attribute-based encryption (ABE) is regarded as an important technology for solving the problem of data privacy and fine-grained access control. However, in all ABE schemes, the private key generator has the ability to decrypt all data stored in the cloud server, which may bring serious problems such as key abuse and privacy data leakage. Meanwhile, the traditional cloud storage model runs in a centralized storage manner, so single point of failure may leads to the collapse of system. With the development of blockchain technology, decentralized storage mode has entered the public view. The decentralized storage approach can solve the problem of single point of failure in traditional cloud storage systems and enjoy a number of advantages over centralized storage, such as low price and high throughput. In this paper, we study the data storage and sharing scheme for decentralized storage systems and propose a framework that combines the decentralized storage system interplanetary file system, the Ethereum blockchain, and ABE technology. In this framework, the data owner has the ability to distribute secret key for data users and encrypt shared data by specifying access policy, and the scheme achieves fine-grained access control over data. At the same time, based on smart contract on the Ethereum blockchain, the keyword search function on the cipher text of the decentralized storage systems is implemented, which solves the problem that the cloud server may not return all of the results searched or return wrong results in the traditional cloud storage systems. Finally, we simulated the scheme in the Linux system and the Ethereum official test network Rinkeby, and the experimental results show that our scheme is feasible.",
"title": ""
},
{
"docid": "0342f89c44e0b86026953196de34b608",
"text": "In this paper, we introduce an approach for recognizing the absence of opposing arguments in persuasive essays. We model this task as a binary document classification and show that adversative transitions in combination with unigrams and syntactic production rules significantly outperform a challenging heuristic baseline. Our approach yields an accuracy of 75.6% and 84% of human performance in a persuasive essay corpus with various topics.",
"title": ""
}
] | scidocsrr |
670e509f17f1f032a90f88c1dcfc2d9b | A Warning System for Obstacle Detection at Vehicle Lateral Blind Spot Area | [
{
"docid": "2d0cc4c7ca6272200bb1ed1c9bba45f0",
"text": "Advanced Driver Assistance Systems (ADAS) based on video camera tends to be generalized in today's automotive. However, if most of these systems perform nicely in good weather conditions, they perform very poorly under adverse weather particularly under rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing technics to highlight raindrops. Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.",
"title": ""
}
] | [
{
"docid": "b3edfd5b56831080a663faeb0e159627",
"text": "Because wireless sensor networks (WSNs) are becoming increasingly integrated into daily life, solving the energy efficiency problem of such networks is an urgent problem. Many energy-efficient algorithms have been proposed to reduce energy consumption in traditional WSNs. The emergence of software-defined networks (SDNs) enables the transformation of WSNs. Some SDN-based WSNs architectures have been proposed and energy-efficient algorithms in SDN-based WSNs architectures have been studied. In this paper, we integrate an SDN into WSNs and an improved software-defined WSNs (SD-WSNs) architecture is presented. Based on the improved SD-WSNs architecture, we propose an energy-efficient algorithm. This energy-efficient algorithm is designed to match the SD-WSNs architecture, and is based on the residual energy and the transmission power, and the game theory is introduced to extend the network lifetime. Based on the SD-WSNs architecture and the energy-efficient algorithm, we provide a detailed introduction to the operating mechanism of the algorithm in the SD-WSNs. The simulation results show that our proposed algorithm performs better in terms of balancing energy consumption and extending the network lifetime compared with the typical energy-efficient algorithms in traditional WSNs.",
"title": ""
},
{
"docid": "338dcbb45ff0c1752eeb34ec1be1babe",
"text": "I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural",
"title": ""
},
{
"docid": "fae65e55a1a670738d39a3d2db279ceb",
"text": "This paper presents a method to extract tone relevant features based on pitch flux from continuous speech signal. The autocorrelations of two adjacent frames are calculated and the covariance between them is estimated to extract multi-dimensional pitch flux features. These features, together with MFCCs, are modeled in a 2-stream GMM models, and are tested in a 3-dialect identification task for Chinese. The pitch flux features have shown to be very effective in identifying tonal languages with short speech segments. For the test speech segments of 3 seconds, 2-stream model achieves more than 30% error reduction over MFCC-based model",
"title": ""
},
{
"docid": "cf3ee200705e8bb564303bd758e8e235",
"text": "The current state of the art in playing many important perfect information games, including Chess and Go, combines planning and deep reinforcement learning with self-play. We extend this approach to imperfect information games and present ExIt-OOS, a novel approach to playing imperfect information games within the Expert Iteration framework and inspired by AlphaZero. We use Online Outcome Sampling, an online search algorithm for imperfect information games in place of MCTS. While training online, our neural strategy is used to improve the accuracy of playouts in OOS, allowing a learning and planning feedback loop for imperfect information games.",
"title": ""
},
{
"docid": "dd545adf1fba52e794af4ee8de34fc60",
"text": "We propose solving continuous parametric simulation optimizations using a deterministic nonlinear optimization algorithm and sample-path simulations. The optimization problem is written in a modeling language with a simulation module accessed with an external function call. Since we allow no changes to the simulation code at all, we propose using a quadratic approximation of the simulation function to obtain derivatives. Results on three different queueing models are presented that show our method to be effective on a variety of practical problems.",
"title": ""
},
{
"docid": "2e40682bca56659428d2919191e1cbf3",
"text": "Single-cell RNA-Seq (scRNA-Seq) has attracted much attention recently because it allows unprecedented resolution into cellular activity; the technology, therefore, has been widely applied in studying cell heterogeneity such as the heterogeneity among embryonic cells at varied developmental stages or cells of different cancer types or subtypes. A pertinent question in such analyses is to identify cell subpopulations as well as their associated genetic drivers. Consequently, a multitude of approaches have been developed for clustering or biclustering analysis of scRNA-Seq data. In this article, we present a fast and simple iterative biclustering approach called \"BiSNN-Walk\" based on the existing SNN-Cliq algorithm. One of BiSNN-Walk's differentiating features is that it returns a ranked list of clusters, which may serve as an indicator of a cluster's reliability. Another important feature is that BiSNN-Walk ranks genes in a gene cluster according to their level of affiliation to the associated cell cluster, making the result more biologically interpretable. We also introduce an entropy-based measure for choosing a highly clusterable similarity matrix as our starting point among a wide selection to facilitate the efficient operation of our algorithm. We applied BiSNN-Walk to three large scRNA-Seq studies, where we demonstrated that BiSNN-Walk was able to retain and sometimes improve the cell clustering ability of SNN-Cliq. We were able to obtain biologically sensible gene clusters in terms of GO term enrichment. In addition, we saw that there was significant overlap in top characteristic genes for clusters corresponding to similar cell states, further demonstrating the fidelity of our gene clusters.",
"title": ""
},
{
"docid": "0b1e0145affcdf2ff46580d9e5615211",
"text": "Traditional topic models do not account for semantic regularities in language. Recent distributional representations of words exhibit semantic consistency over directional metrics such as cosine similarity. However, neither categorical nor Gaussian observational distributions used in existing topic models are appropriate to leverage such correlations. In this paper, we propose to use the von Mises-Fisher distribution to model the density of words over a unit sphere. Such a representation is well-suited for directional data. We use a Hierarchical Dirichlet Process for our base topic model and propose an efficient inference algorithm based on Stochastic Variational Inference. This model enables us to naturally exploit the semantic structures of word embeddings while flexibly discovering the number of topics. Experiments demonstrate that our method outperforms competitive approaches in terms of topic coherence on two different text corpora while offering efficient inference.",
"title": ""
},
{
"docid": "b26882cddec1690e3099757e835275d2",
"text": "Accumulating evidence suggests that, independent of physical activity levels, sedentary behaviours are associated with increased risk of cardio-metabolic disease, all-cause mortality, and a variety of physiological and psychological problems. Therefore, the purpose of this systematic review is to determine the relationship between sedentary behaviour and health indicators in school-aged children and youth aged 5-17 years. Online databases (MEDLINE, EMBASE and PsycINFO), personal libraries and government documents were searched for relevant studies examining time spent engaging in sedentary behaviours and six specific health indicators (body composition, fitness, metabolic syndrome and cardiovascular disease, self-esteem, pro-social behaviour and academic achievement). 232 studies including 983,840 participants met inclusion criteria and were included in the review. Television (TV) watching was the most common measure of sedentary behaviour and body composition was the most common outcome measure. Qualitative analysis of all studies revealed a dose-response relation between increased sedentary behaviour and unfavourable health outcomes. Watching TV for more than 2 hours per day was associated with unfavourable body composition, decreased fitness, lowered scores for self-esteem and pro-social behaviour and decreased academic achievement. Meta-analysis was completed for randomized controlled studies that aimed to reduce sedentary time and reported change in body mass index (BMI) as their primary outcome. In this regard, a meta-analysis revealed an overall significant effect of -0.81 (95% CI of -1.44 to -0.17, p = 0.01) indicating an overall decrease in mean BMI associated with the interventions. There is a large body of evidence from all study designs which suggests that decreasing any type of sedentary time is associated with lower health risk in youth aged 5-17 years. In particular, the evidence suggests that daily TV viewing in excess of 2 hours is associated with reduced physical and psychosocial health, and that lowering sedentary time leads to reductions in BMI.",
"title": ""
},
{
"docid": "4dd2fc66b1a2f758192b02971476b4cc",
"text": "Although efforts have been directed toward the advancement of women in science, technology, engineering, and mathematics (STEM) positions, little research has directly examined women's perspectives and bottom-up strategies for advancing in male-stereotyped disciplines. The present study utilized Photovoice, a Participatory Action Research method, to identify themes that underlie women's experiences in traditionally male-dominated fields. Photovoice enables participants to convey unique aspects of their experiences via photographs and their in-depth knowledge of a community through personal narrative. Forty-six STEM women graduate students and postdoctoral fellows completed a Photovoice activity in small groups. They presented photographs that described their experiences pursuing leadership positions in STEM fields. Three types of narratives were discovered and classified: career strategies, barriers to achievement, and buffering strategies or methods for managing barriers. Participants described three common types of career strategies and motivational factors, including professional development, collaboration, and social impact. Moreover, the lack of rewards for these workplace activities was seen as limiting professional effectiveness. In terms of barriers to achievement, women indicated they were not recognized as authority figures and often worked to build legitimacy by fostering positive relationships. Women were vigilant to other people's perspectives, which was costly in terms of time and energy. To manage role expectations, including those related to gender, participants engaged in numerous role transitions throughout their day to accommodate workplace demands. To buffer barriers to achievement, participants found resiliency in feelings of accomplishment and recognition. Social support, particularly from mentors, helped participants cope with negative experiences and to envision their future within the field. Work-life balance also helped participants find meaning in their work and have a sense of control over their lives. Overall, common workplace challenges included a lack of social capital and limited degrees of freedom. Implications for organizational policy and future research are discussed.",
"title": ""
},
{
"docid": "0ae071bc719fdaac34a59991e66ab2b8",
"text": "It has recently been shown in a brain-computer interface experiment that motor cortical neurons change their tuning properties selectively to compensate for errors induced by displaced decoding parameters. In particular, it was shown that the three-dimensional tuning curves of neurons whose decoding parameters were reassigned changed more than those of neurons whose decoding parameters had not been reassigned. In this article, we propose a simple learning rule that can reproduce this effect. Our learning rule uses Hebbian weight updates driven by a global reward signal and neuronal noise. In contrast to most previously proposed learning rules, this approach does not require extrinsic information to separate noise from signal. The learning rule is able to optimize the performance of a model system within biologically realistic periods of time under high noise levels. Furthermore, when the model parameters are matched to data recorded during the brain-computer interface learning experiments described above, the model produces learning effects strikingly similar to those found in the experiments.",
"title": ""
},
{
"docid": "4f87b93eb66b7126c53ee8126151f77f",
"text": "We propose a convolutional neural network architecture with k-max pooling layer for semantic modeling of music. The aim of a music model is to analyze and represent the semantic content of music for purposes of classification, discovery, or clustering. The k-max pooling layer is used in the network to make it possible to pool the k most active features, capturing the semantic-rich and time-varying information about music. Our network takes an input music as a sequence of audio words, where each audio word is associated with a distributed feature vector that can be fine-tuned by backpropagating errors during the training. The architecture allows us to take advantage of the better trained audio word embeddings and the deep structures to produce more robust music representations. Experiment results with two different music collections show that our neural networks achieved the best accuracy in music genre classification comparing with three state-of-art systems.",
"title": ""
},
{
"docid": "e711f9f57e1c3c22c762bf17cb6afd2b",
"text": "Qualitative research methodology has become an established part of the medical education research field. A very popular data-collection technique used in qualitative research is the \"focus group\". Focus groups in this Guide are defined as \"… group discussions organized to explore a specific set of issues … The group is focused in the sense that it involves some kind of collective activity … crucially, focus groups are distinguished from the broader category of group interview by the explicit use of the group interaction as research data\" (Kitzinger 1994, p. 103). This Guide has been designed to provide people who are interested in using focus groups with the information and tools to organize, conduct, analyze and publish sound focus group research within a broader understanding of the background and theoretical grounding of the focus group method. The Guide is organized as follows: Firstly, to describe the evolution of the focus group in the social sciences research domain. Secondly, to describe the paradigmatic fit of focus groups within qualitative research approaches in the field of medical education. After defining, the nature of focus groups and when, and when not, to use them, the Guide takes on a more practical approach, taking the reader through the various steps that need to be taken in conducting effective focus group research. Finally, the Guide finishes with practical hints towards writing up a focus group study for publication.",
"title": ""
},
{
"docid": "f7bc42beb169e42496b674c918541865",
"text": "Brain endothelial cells are unique among endothelial cells in that they express apical junctional complexes, including tight junctions, which quite resemble epithelial tight junctions both structurally and functionally. They form the blood-brain-barrier (BBB) which strictly controls the exchanges between the blood and the brain compartments by limiting passive diffusion of blood-borne solutes while actively transporting nutrients to the brain. Accumulating experimental and clinical evidence indicate that BBB dysfunctions are associated with a number of serious CNS diseases with important social impacts, such as multiple sclerosis, stroke, brain tumors, epilepsy or Alzheimer's disease. This review will focus on the implication of brain endothelial tight junctions in BBB architecture and physiology, will discuss the consequences of BBB dysfunction in these CNS diseases and will present some therapeutic strategies for drug delivery to the brain across the BBB.",
"title": ""
},
{
"docid": "8ea6c4957443916c2102f8a173f9d3dc",
"text": "INTRODUCTION\nOpioid overdose fatality has increased threefold since 1999. As a result, prescription drug overdose surpassed motor vehicle collision as the leading cause of unintentional injury-related death in the USA. Naloxone , an opioid antagonist that has been available for decades, can safely reverse opioid overdose if used promptly and correctly. However, clinicians often overestimate the dose of naloxone needed to achieve the desired clinical outcome, precipitating acute opioid withdrawal syndrome (OWS).\n\n\nAREAS COVERED\nThis article provides a comprehensive review of naloxone's pharmacologic properties and its clinical application to promote the safe use of naloxone in acute management of opioid intoxication and to mitigate the risk of precipitated OWS. Available clinical data on opioid-receptor kinetics that influence the reversal of opioid agonism by naloxone are discussed. Additionally, the legal and social barriers to take home naloxone programs are addressed.\n\n\nEXPERT OPINION\nNaloxone is an intrinsically safe drug, and may be administered in large doses with minimal clinical effect in non-opioid-dependent patients. However, when administered to opioid-dependent patients, naloxone can result in acute opioid withdrawal. Therefore, it is prudent to use low-dose naloxone (0.04 mg) with appropriate titration to reverse ventilatory depression in this population.",
"title": ""
},
{
"docid": "1fb0344be6a5da582e0563dceca70d44",
"text": "Self-mutilating behaviors could be minor and benign, but more severe cases are usually associated with psychiatric disorders or with acquired nervous system lesions and could be life-threatening. The patient was a 66-year-old man who had been mutilating his fingers for 6 years. This behavior started as serious nail biting and continued as severe finger mutilation (by biting), resulting in loss of the terminal phalanges of all fingers in both hands. On admission, he complained only about insomnia. The electromyography showed severe peripheral nerve damage in both hands and feet caused by severe diabetic neuropathy. Cognitive decline was not established (Mini Mental State Examination score, 28), although the computed tomographic scan revealed serious brain atrophy. He was given a diagnosis of impulse control disorder not otherwise specified. His impulsive biting improved markedly when low doses of haloperidol (1.5 mg/day) were added to fluoxetine (80 mg/day). In our patient's case, self-mutilating behavior was associated with severe diabetic neuropathy, impulsivity, and social isolation. The administration of a combination of an antipsychotic and an antidepressant proved to be beneficial.",
"title": ""
},
{
"docid": "03eabf03f8ac967c728ff35b77f3dd84",
"text": "In this paper, we tackle the problem of associating combinations of colors to abstract categories (e.g. capricious, classic, cool, delicate, etc.). It is evident that such concepts would be difficult to distinguish using single colors, therefore we consider combinations of colors or color palettes. We leverage two novel databases for color palettes and we learn categorization models using low and high level descriptors. Preliminary results show that Fisher representation based on GMMs is the most rewarding strategy in terms of classification performance over a baseline model. We also suggest a process for cleaning weakly annotated data, whilst preserving the visual coherence of categories. Finally, we demonstrate how learning abstract categories on color palettes can be used in the application of color transfer, personalization and image re-ranking.",
"title": ""
},
{
"docid": "e5c625ceaf78c66c2bfb9562970c09ec",
"text": "A continuing question in neural net research is the size of network needed to solve a particular problem. If training is started with too small a network for the problem no learning can occur. The researcher must then go through a slow process of deciding that no learning is taking place, increasing the size of the network and training again. If a network that is larger than required is used, then processing is slowed, particularly on a conventional von Neumann computer. An approach to this problem is discussed that is based on learning with a net which is larger than the minimum size network required to solve the problem and then pruning the solution network. The result is a small, efficient network that performs as well or better than the original which does not give a complete answer to the question, since the size of the initial network is still largely based on guesswork but it gives a very useful partial answer and sheds some light on the workings of a neural network in the process.<<ETX>>",
"title": ""
},
{
"docid": "d272cf01340c8dcc3c24651eaf876926",
"text": "We propose a new method for learning from a single demonstration to solve hard exploration tasks like the Atari game Montezuma’s Revenge. Instead of imitating human demonstrations, as proposed in other recent works, our approach is to maximize rewards directly. Our agent is trained using off-the-shelf reinforcement learning, but starts every episode by resetting to a state from a demonstration. By starting from such demonstration states, the agent requires much less exploration to learn a game compared to when it starts from the beginning of the game at every episode. We analyze reinforcement learning for tasks with sparse rewards in a simple toy environment, where we show that the run-time of standard RL methods scales exponentially in the number of states between rewards. Our method reduces this to quadratic scaling, opening up many tasks that were previously infeasible. We then apply our method to Montezuma’s Revenge, for which we present a trained agent achieving a high-score of 74,500, better than any previously published result.",
"title": ""
},
{
"docid": "2b569d086698cffc0cba2dc3fe0ab8a6",
"text": "Home security should be a top concern for everyone who owns or rents a home. Moreover, safe and secure residential space is the necessity of every individual as most of the family members are working. The home is left unattended for most of the day-time and home invasion crimes are at its peak as constantly monitoring of the home is difficult. Another reason for the need of home safety is specifically when the elderly person is alone or the kids are with baby-sitter and servant. Home security system i.e. HomeOS is thus applicable and desirable for resident’s safety and convenience. This will be achieved by turning your home into a smart home by intelligent remote monitoring. Smart home comes into picture for the purpose of controlling and monitoring the home. It will give you peace of mind, as you can have a close watch and stay connected anytime, anywhere. But, is common man really concerned about home security? An investigative study was done by conducting a survey to get the inputs from different people from diverse backgrounds. The main motivation behind this survey was to make people aware of advanced HomeOS and analyze their need for security. This paper also studied the necessity of HomeOS investigative study in current situation where the home burglaries are rising at an exponential rate. In order to arrive at findings and conclusions, data were analyzed. The graphical method was employed to identify the relative significance of home security. From this analysis, we can infer that the cases of having kids and aged person at home or location of home contribute significantly to the need of advanced home security system. At the end, the proposed system model with its flow and the challenges faced while implementing home security systems are also discussed.",
"title": ""
},
{
"docid": "da088acea8b1d2dc68b238e671649f4f",
"text": "Water is a naturally circulating resource that is constantly recharged. Therefore, even though the stocks of water in natural and artificial reservoirs are helpful to increase the available water resources for human society, the flow of water should be the main focus in water resources assessments. The climate system puts an upper limit on the circulation rate of available renewable freshwater resources (RFWR). Although current global withdrawals are well below the upper limit, more than two billion people live in highly water-stressed areas because of the uneven distribution of RFWR in time and space. Climate change is expected to accelerate water cycles and thereby increase the available RFWR. This would slow down the increase of people living under water stress; however, changes in seasonal patterns and increasing probability of extreme events may offset this effect. Reducing current vulnerability will be the first step to prepare for such anticipated changes.",
"title": ""
}
] | scidocsrr |
4b6a80b9010fe9aec4ba329c8d7f4be5 | Bioinformatics - an introduction for computer scientists | [
{
"docid": "d6abc85e62c28755ed6118257d9c25c3",
"text": "MOTIVATION\nIn a previous paper, we presented a polynomial time dynamic programming algorithm for predicting optimal RNA secondary structure including pseudoknots. However, a formal grammatical representation for RNA secondary structure with pseudoknots was still lacking.\n\n\nRESULTS\nHere we show a one-to-one correspondence between that algorithm and a formal transformational grammar. This grammar class encompasses the context-free grammars and goes beyond to generate pseudoknotted structures. The pseudoknot grammar avoids the use of general context-sensitive rules by introducing a small number of auxiliary symbols used to reorder the strings generated by an otherwise context-free grammar. This formal representation of the residue correlations in RNA structure is important because it means we can build full probabilistic models of RNA secondary structure, including pseudoknots, and use them to optimally parse sequences in polynomial time.",
"title": ""
}
] | [
{
"docid": "eb6572344dbaf8e209388f888fba1c10",
"text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.",
"title": ""
},
{
"docid": "c9a4aff9871fa2f10c61bfb05b820141",
"text": "With single computer's computation power not sufficing, need for sharing resources to manipulate and manage data through clouds is increasing rapidly. Hence, it is favorable to delegate computations or store data with a third party, the cloud provider. However, delegating data to third party poses the risks of data disclosure during computation. The problem can be addressed by carrying out computation without decrypting the encrypted data. The results are also obtained encrypted and can be decrypted at the user side. This requires modifying functions in such a way that they are still executable while privacy is ensured or to search an encrypted database. Homomorphic encryption provides security to cloud consumer data while preserving system usability. We propose a symmetric key homomorphic encryption scheme based on matrix operations with primitives that make it easily adaptable for different needs in various cloud computing scenarios.",
"title": ""
},
{
"docid": "117c66505964344d9c350a4e57a4a936",
"text": "Sorting is a key kernel in numerous big data application including database operations, graphs and text analytics. Due to low control overhead, parallel bitonic sorting networks are usually employed for hardware implementations to accelerate sorting. Although a typical implementation of merge sort network can lead to low latency and small memory usage, it suffers from low throughput due to the lack of parallelism in the final stage. We analyze a pipelined merge sort network, showing its theoretical limits in terms of latency, memory and, throughput. To increase the throughput, we propose a merge sort based hybrid design where the final few stages in the merge sort network are replaced with “folded” bitonic merge networks. In these “folded” networks, all the interconnection patterns are realized by streaming permutation networks (SPN). We present a theoretical analysis to quantify latency, memory and throughput of our proposed design. Performance evaluations are performed by experiments on Xilinx Virtex-7 FPGA with post place-androute results. We demonstrate that our implementation achieves a throughput close to 10 GBps, outperforming state-of-the-art implementation of sorting on the same hardware by 1.2x, while preserving lower latency and higher memory efficiency.",
"title": ""
},
{
"docid": "2f471c24ccb38e70627eba6383c003e0",
"text": "We present an algorithm that enables casual 3D photography. Given a set of input photos captured with a hand-held cell phone or DSLR camera, our algorithm reconstructs a 3D photo, a central panoramic, textured, normal mapped, multi-layered geometric mesh representation. 3D photos can be stored compactly and are optimized for being rendered from viewpoints that are near the capture viewpoints. They can be rendered using a standard rasterization pipeline to produce perspective views with motion parallax. When viewed in VR, 3D photos provide geometrically consistent views for both eyes. Our geometric representation also allows interacting with the scene using 3D geometry-aware effects, such as adding new objects to the scene and artistic lighting effects.\n Our 3D photo reconstruction algorithm starts with a standard structure from motion and multi-view stereo reconstruction of the scene. The dense stereo reconstruction is made robust to the imperfect capture conditions using a novel near envelope cost volume prior that discards erroneous near depth hypotheses. We propose a novel parallax-tolerant stitching algorithm that warps the depth maps into the central panorama and stitches two color-and-depth panoramas for the front and back scene surfaces. The two panoramas are fused into a single non-redundant, well-connected geometric mesh. We provide videos demonstrating users interactively viewing and manipulating our 3D photos.",
"title": ""
},
{
"docid": "fb0ccc3d3ce018c413b20db0bb55fef0",
"text": "In many applications, the training data, from which one need s to learn a classifier, is corrupted with label noise. Many st andard algorithms such as SVM perform poorly in presence of label no ise. In this paper we investigate the robustness of risk mini ization to label noise. We prove a sufficient condition on a loss funct io for the risk minimization under that loss to be tolerant t o uniform label noise. We show that the 0 − 1 loss, sigmoid loss, ramp loss and probit loss satisfy this c ondition though none of the standard convex loss functions satisfy it. We also prove that, by choo sing a sufficiently large value of a parameter in the loss func tio , the sigmoid loss, ramp loss and probit loss can be made tolerant t o non-uniform label noise also if we can assume the classes to be separable under noise-free data distribution. Through ext ensive empirical studies, we show that risk minimization un der the 0− 1 loss, the sigmoid loss and the ramp loss has much better robus tness to label noise when compared to the SVM algorithm.",
"title": ""
},
{
"docid": "99e89314a069a059e1f7214148b150e4",
"text": "Wegener’s granulomatosis (WG) is an autoimmune disease, which particularly affects the upper respiratory pathways, lungs and kidney. Oral mucosal involvement presents in around 5%--10% of cases and may be the first disease symptom. Predominant manifestation is granulomatous gingivitis erythematous papules; mucosal necrosis and non-specific ulcers with or without impact on adjacent structures. Clinically speaking, the most characteristic lesion presents as a gingival hyperplasia of the gum, with hyperaemia and petechias on its surface which bleed when touched. Due to its appearance, it has been called ‘‘Strawberry gingiva’’. The following is a clinical case in which the granulomatous strawberry gingivitis was the first sign of WG.",
"title": ""
},
{
"docid": "4050f76539d79edff962963625298ae2",
"text": "An economic evaluation of a hybrid wind/photovoltaic/fuel cell generation system for a typical home in the Pacific Northwest is performed. In this configuration the combination of a fuel cell stack, an electrolyzer, and a hydrogen storage tank is used for the energy storage system. This system is compared to a traditional hybrid energy system with battery storage. A computer program has been developed to size system components in order to match the load of the site in the most cost effective way. A cost of electricity and an overall system cost are also calculated for each configuration. The study was performed using a graphical user interface programmed in MATLAB.",
"title": ""
},
{
"docid": "ba314edceb1b8ac00f94ad0037bd5b8e",
"text": "AMS subject classifications: primary 62G10 secondary 62H20 Keywords: dCor dCov Multivariate independence Distance covariance Distance correlation High dimension a b s t r a c t Distance correlation is extended to the problem of testing the independence of random vectors in high dimension. Distance correlation characterizes independence and determines a test of multivariate independence for random vectors in arbitrary dimension. In this work, a modified distance correlation statistic is proposed, such that under independence the distribution of a transformation of the statistic converges to Student t, as dimension tends to infinity. Thus we obtain a distance correlation t-test for independence of random vectors in arbitrarily high dimension, applicable under standard conditions on the coordinates that ensure the validity of certain limit theorems. This new test is based on an unbiased es-timator of distance covariance, and the resulting t-test is unbiased for every sample size greater than three and all significance levels. The transformed statistic is approximately normal under independence for sample size greater than nine, providing an informative sample coefficient that is easily interpretable for high dimensional data. 1. Introduction Many applications in genomics, medicine, engineering, etc. require analysis of high dimensional data. Time series data can also be viewed as high dimensional data. Objects can be represented by their characteristics or features as vectors p. In this work, we consider the extension of distance correlation to the problem of testing independence of random vectors in arbitrarily high, not necessarily equal dimensions, so the dimension p of the feature space of a random vector is typically large. measure all types of dependence between random vectors in arbitrary, not necessarily equal dimensions. (See Section 2 for definitions.) Distance correlation takes values in [0, 1] and is equal to zero if and only if independence holds. It is more general than the classical Pearson product moment correlation, providing a scalar measure of multivariate independence that characterizes independence of random vectors. The distance covariance test of independence is consistent against all dependent alternatives with finite second moments. In practice, however, researchers are often interested in interpreting the numerical value of distance correlation, without a formal test. For example, given an array of distance correlation statistics, what can one learn about the strength of dependence relations from the dCor statistics without a formal test? This is in fact, a difficult question, but a solution is finally available for a large class of problems. The …",
"title": ""
},
{
"docid": "24dda2b2334810b375f7771685669177",
"text": "This paper presents a 64-times interleaved 2.6 GS/s 10b successive-approximation-register (SAR) ADC in 65 nm CMOS. The ADC combines interleaving hierarchy with an open-loop buffer array operated in feedforward-sampling and feedback-SAR mode. The sampling front-end consists of four interleaved T/Hs at 650 MS/s that are optimized for timing accuracy and sampling linearity, while the back-end consists of four ADC arrays, each consisting of 16 10b current-mode non-binary SAR ADCs. The interleaving hierarchy allows for many ADCs to be used per T/H and eliminates distortion stemming from open loop buffers interfacing between the front-end and back-end. Startup on-chip calibration deals with offset and gain mismatches as well as DAC linearity. Measurements show that the prototype ADC achieves an SNDR of 48.5 dB and a THD of less than 58 dB at Nyquist with an input signal of 1.4 . An estimated sampling clock skew spread of 400 fs is achieved by careful design and layout. Up to 4 GHz an SNR of more than 49 dB has been measured, enabled by the less than 110 fs rms clock jitter. The ADC consumes 480 mW from 1.2/1.3/1.6 V supplies and occupies an area of 5.1 mm.",
"title": ""
},
{
"docid": "1ee679d237c54dd8aaaeb2383d6b49fa",
"text": "Bike sharing systems (BSSs) have become common in many cities worldwide, providing a new transportation mode for residents' commutes. However, the management of these systems gives rise to many problems. As the bike pick-up demands at different places are unbalanced at times, the systems have to be rebalanced frequently. Rebalancing the bike availability effectively, however, is very challenging as it demands accurate prediction for inventory target level determination. In this work, we propose two types of regression models using multi-source data to predict the hourly bike pick-up demand at cluster level: Similarity Weighted K-Nearest-Neighbor (SWK) based regression and Artificial Neural Network (ANN). SWK-based regression models learn the weights of several meteorological factors and/or taxi usage and use the correlation between consecutive time slots to predict the bike pick-up demand. The ANN is trained by using historical trip records of BSS, meteorological data, and taxi trip records. Our proposed methods are tested with real data from a New York City BSS: Citi Bike NYC. Performance comparison between SWK-based and ANN-based methods is provided. Experimental results indicate the high accuracy of ANN-based prediction for bike pick-up demand using multisource data.",
"title": ""
},
{
"docid": "af56806a30f708cb0909998266b4d8c1",
"text": "There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. Dlib-m l is an open source library, targeted at both engineers and research scientists, which aims to pro vide a similarly rich environment for developing machine learning software in the C++ language. T owards this end, dlib-ml contains an extensible linear algebra toolkit with built in BLAS supp ort. It also houses implementations of algorithms for performing inference in Bayesian networks a nd kernel-based methods for classification, regression, clustering, anomaly detection, and fe atur ranking. To enable easy use of these tools, the entire library has been developed with contract p rogramming, which provides complete and precise documentation as well as powerful debugging too ls.",
"title": ""
},
{
"docid": "5b97d597534e65bf5d00f89d8df97767",
"text": "Research into online gaming has steadily increased over the last decade, although relatively little research has examined the relationship between online gaming addiction and personality factors. This study examined the relationship between a number of personality traits (sensation seeking, self-control, aggression, neuroticism, state anxiety, and trait anxiety) and online gaming addiction. Data were collected over a 1-month period using an opportunity sample of 123 university students at an East Midlands university in the United Kingdom. Gamers completed all the online questionnaires. Results of a multiple linear regression indicated that five traits (neuroticism, sensation seeking, trait anxiety, state anxiety, and aggression) displayed significant associations with online gaming addiction. The study suggests that certain personality traits may be important in the acquisition, development, and maintenance of online gaming addiction, although further research is needed to replicate the findings of the present study.",
"title": ""
},
{
"docid": "907883af0e81f4157e81facd4ff4344c",
"text": "This work presents a low-power low-cost CDR design for RapidIO SerDes. The design is based on phase interpolator, which is controlled by a synthesized standard cell digital block. Half-rate architecture is adopted to lessen the problems in routing high speed clocks and reduce power. An improved half-rate bang-bang phase detector is presented to assure the stability of the system. Moreover, the paper proposes a simplified control scheme for the phase interpolator to further reduce power and cost. The CDR takes an area of less than 0.05mm2, and post simulation shows that the CDR has a RMS jitter of UIpp/32 ([email protected]) and consumes 9.5mW at 3.125GBaud.",
"title": ""
},
{
"docid": "00509d9e0ab2d8dad6bfd20cd264f555",
"text": "A prototype campus bus tacking system is designed and implemented for helping UiTM Student to pinpoint the location and estimate arrival time of their respective desired bus via their smartphone application. This project comprises integration between hardware and software. An Arduino UNO is used to control the GPS module to get the geographic coordinates. An android smartphone application using App Inventor is also developed for the user not only to determine the time for the campus bus to arrive and also will be able to get the bus information. This friendly user system is named as \"UiTM Bus Checker\" application. The user also will be able to view position of the bus on a digital mapping from Google Maps using their smartphone application and webpage. In order to show the effectiveness of this UiTM campus bus tracking system, the practical implementations have been presented and recorded.",
"title": ""
},
{
"docid": "7e91dd40445de51570a8c77cf50f7211",
"text": "Based on phasor measurement units (PMUs), a synchronphasor system is widely recognized as a promising smart grid measurement system. It is able to provide high-frequency, high-accuracy phasor measurements sampling for Wide Area Monitoring and Control (WAMC) applications.However,the high sampling frequency of measurement data under strict latency constraints introduces new challenges for real time communication. It would be very helpful if the collected data can be prioritized according to its importance such that the existing quality of service (QoS) mechanisms in the communication networks can be leveraged. To achieve this goal, certain anomaly detection functions should be conducted by the PMUs. Inspired by the recent emerging edge-fog-cloud computing hierarchical architecture, which allows computing tasks to be conducted at the network edge, a novel PMU fog is proposed in this paper. Two anomaly detection approaches, Singular Spectrum Analysis (SSA) and K-Nearest Neighbors (KNN), are evaluated in the PMU fog using the IEEE 16-machine 68-bus system. The simulation experiments based on Riverbed Modeler demonstrate that the proposed PMU fog can effectively reduce the data flow end-to-end (ETE) delay without sacrificing data completeness.",
"title": ""
},
{
"docid": "fc167904e713a2b4c48fd50b7efa5332",
"text": "Correlated topic modeling has been limited to small model and problem sizes due to their high computational cost and poor scaling. In this paper, we propose a new model which learns compact topic embeddings and captures topic correlations through the closeness between the topic vectors. Our method enables efficient inference in the low-dimensional embedding space, reducing previous cubic or quadratic time complexity to linear w.r.t the topic size. We further speedup variational inference with a fast sampler to exploit sparsity of topic occurrence. Extensive experiments show that our approach is capable of handling model and data scales which are several orders of magnitude larger than existing correlation results, without sacrificing modeling quality by providing competitive or superior performance in document classification and retrieval.",
"title": ""
},
{
"docid": "dc54b73eb740bc1bbdf1b834a7c40127",
"text": "This paper discusses the design and evaluation of an online social network used within twenty-two established after school programs across three major urban areas in the Northeastern United States. The overall goal of this initiative is to empower students in grades K-8 to prevent obesity through healthy eating and exercise. The online social network was designed to support communication between program participants. Results from the related evaluation indicate that the online social network has potential for advancing awareness and community action around health related issues; however, greater attention is needed to professional development programs for program facilitators, and design features could better support critical thinking, social presence, and social activity.",
"title": ""
},
{
"docid": "fa888e57652804e86c900c8e1041d399",
"text": "BACKGROUND\nJehovah's Witness patients (Witnesses) who undergo cardiac surgery provide a unique natural experiment in severe blood conservation because anemia, transfusion, erythropoietin, and antifibrinolytics have attendant risks. Our objective was to compare morbidity and long-term survival of Witnesses undergoing cardiac surgery with a similarly matched group of patients who received transfusions.\n\n\nMETHODS\nA total of 322 Witnesses and 87 453 non-Witnesses underwent cardiac surgery at our center from January 1, 1983, to January 1, 2011. All Witnesses prospectively refused blood transfusions. Among non-Witnesses, 38 467 did not receive blood transfusions and 48 986 did. We used propensity methods to match patient groups and parametric multiphase hazard methods to assess long-term survival. Our main outcome measures were postoperative morbidity complications, in-hospital mortality, and long-term survival.\n\n\nRESULTS\nWitnesses had fewer acute complications and shorter length of stay than matched patients who received transfusions: myocardial infarction, 0.31% vs 2.8% (P = . 01); additional operation for bleeding, 3.7% vs 7.1% (P = . 03); prolonged ventilation, 6% vs 16% (P < . 001); intensive care unit length of stay (15th, 50th, and 85th percentiles), 24, 25, and 72 vs 24, 48, and 162 hours (P < . 001); and hospital length of stay (15th, 50th, and 85th percentiles), 5, 7, and 11 vs 6, 8, and 16 days (P < . 001). Witnesses had better 1-year survival (95%; 95% CI, 93%-96%; vs 89%; 95% CI, 87%-90%; P = . 007) but similar 20-year survival (34%; 95% CI, 31%-38%; vs 32% 95% CI, 28%-35%; P = . 90).\n\n\nCONCLUSIONS\nWitnesses do not appear to be at increased risk for surgical complications or long-term mortality when comparisons are properly made by transfusion status. Thus, current extreme blood management strategies do not appear to place patients at heightened risk for reduced long-term survival.",
"title": ""
},
{
"docid": "5e31d7ff393d69faa25cb6dea5917a0e",
"text": "In this paper we aim to formally explain the phenomenon of fast convergence of Stochastic Gradient Descent (SGD) observed in modern machine learning. The key observation is that most modern learning architectures are over-parametrized and are trained to interpolate the data by driving the empirical loss (classification and regression) close to zero. While it is still unclear why these interpolated solutions perform well on test data, we show that these regimes allow for fast convergence of SGD, comparable in number of iterations to full gradient descent. For convex loss functions we obtain an exponential convergence bound for mini-batch SGD parallel to that for full gradient descent. We show that there is a critical batch size m∗ such that: (a) SGD iteration with mini-batch sizem ≤ m∗ is nearly equivalent to m iterations of mini-batch size 1 (linear scaling regime). (b) SGD iteration with mini-batch m > m∗ is nearly equivalent to a full gradient descent iteration (saturation regime). Moreover, for the quadratic loss, we derive explicit expressions for the optimal mini-batch and step size and explicitly characterize the two regimes above. The critical mini-batch size can be viewed as the limit for effective mini-batch parallelization. It is also nearly independent of the data size, implying O(n) acceleration over GD per unit of computation. We give experimental evidence on real data which closely follows our theoretical analyses. Finally, we show how our results fit in the recent developments in training deep neural networks and discuss connections to adaptive rates for SGD and variance reduction. † See full version of this paper at arxiv.org/abs/1712.06559. Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA. Correspondence to: Siyuan Ma <[email protected]>, Raef Bassily <[email protected]>, Mikhail Belkin <[email protected]>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).",
"title": ""
},
{
"docid": "74f02207a48019fa5bc4736ce66e4a0c",
"text": "In this paper we present an effective method for developing realistic numerical three-dimensional (3-D) microwave breast models of different shape, size, and tissue density. These models are especially convenient for microwave breast cancer imaging applications and numerical analysis of human breast-microwave interactions. As in the recent studies on this area, anatomical information of the breast tissue is collected from T1-weighted 3-D MRI data of different patients' in prone position. The method presented in this paper offers significant improvements including efficient noise reduction and tissue segmentation, nonlinear mapping of electromagnetic properties, realistically asymmetric phantom shape, and a realistic classification of breast phantoms. Our method contains a five-step approach where each MRI voxel is classified and mapped to the appropriate dielectric properties. In the first step, the MRI data are denoised by estimating and removing the bias field from each slice, after which the voxels are segmented into two main tissues as fibro-glandular and adipose. Using the distribution of the voxel intensities in MRI histogram, two nonlinear mapping functions are generated for dielectric permittivity and conductivity profiles, which allow each MRI voxel to map to its proper dielectric properties. Obtained dielectric profiles are then converted into 3-D numerical breast phantoms using several image processing techniques, including morphologic operations, filtering. Resultant phantoms are classified according to their adipose content, which is a critical parameter that affects penetration depth during microwave breast imaging.",
"title": ""
}
] | scidocsrr |
c74df8599fc83009b02a67d9863e0984 | A subject identification method based on term frequency technique | [
{
"docid": "1b2d7b2895ae4b996797ea64ddbae14e",
"text": "For the past decade, query processing on relational data has been studied extensively, and many theoretical and practical solutions to query processing have been proposed under various scenarios. With the recent popularity of cloud computing, users now have the opportunity to outsource their data as well as the data management tasks to the cloud. However, due to the rise of various privacy issues, sensitive data (e.g., medical records) need to be encrypted before outsourcing to the cloud. In addition, query processing tasks should be handled by the cloud; otherwise, there would be no point to outsource the data at the first place. To process queries over encrypted data without the cloud ever decrypting the data is a very challenging task. In this paper, we focus on solving the k-nearest neighbor (kNN) query problem over encrypted database outsourced to a cloud: a user issues an encrypted query record to the cloud, and the cloud returns the k closest records to the user. We first present a basic scheme and demonstrate that such a naive solution is not secure. To provide better security, we propose a secure kNN protocol that protects the confidentiality of the data, user's input query, and data access patterns. Also, we empirically analyze the efficiency of our protocols through various experiments. These results indicate that our secure protocol is very efficient on the user end, and this lightweight scheme allows a user to use any mobile device to perform the kNN query.",
"title": ""
},
{
"docid": "e659f976983c28631062bb5c8b1c35ab",
"text": "This paper presents the outcomes of research into using lingual parts of music in an automatic mood classification system. Using a collection of lyrics and corresponding user-tagged moods, we build classifiers that classify lyrics of songs into moods. By comparing the performance of different mood frameworks (or dimensions), we examine to what extent the linguistic part of music reveals adequate information for assigning a mood category and which aspects of mood can be classified best. Our results show that word oriented metrics provide a valuable source of information for automatic mood classification of music, based on lyrics only. Metrics such as term frequencies and tf*idf values are used to measure relevance of words to the different mood classes. These metrics are incorporated in a machine learning classifier setup. Different partitions of the mood plane are investigated and we show that there is no large difference in mood prediction based on the mood division. Predictions on the valence, tension and combinations of aspects lead to similar performance.",
"title": ""
}
] | [
{
"docid": "5109892c554f7fed68136f43b8c05bb8",
"text": "Obese white adipose tissue (AT) is characterized by large-scale infiltration of proinflammatory macrophages, in parallel with systemic insulin resistance; however, the cellular stimulus that initiates this signaling cascade and chemokine release is still unknown. The objective of this study was to determine the role of the phosphoinositide 3-kinase (PI3K) regulatory subunits on AT macrophage (ATM) infiltration in obesity. Here, we find that the Pik3r1 regulatory subunits (i.e., p85a/p55a/p50a) are highly induced in AT from high-fat diet–fed obese mice, concurrent with insulin resistance. Global heterozygous deletion of the Pik3r1 regulatory subunits (aHZ), but not knockout of Pik3r2 (p85b), preserves whole-body, AT, and skeletal muscle insulin sensitivity, despite severe obesity. Moreover, ATM accumulation, proinflammatory gene expression, and ex vivo chemokine secretion in obese aHZ mice are markedly reduced despite endoplasmic reticulum (ER) stress, hypoxia, adipocyte hypertrophy, and Jun NH2-terminal kinase activation. Furthermore, bone marrow transplant studies reveal that these improvements in obese aHZ mice are independent of reduced Pik3r1 expression in the hematopoietic compartment. Taken together, these studies demonstrate that Pik3r1 expression plays a critical role in mediating AT insulin sensitivity and, more so, suggest that reduced PI3K activity is a key step in the initiation and propagation of the inflammatory response in obese AT.",
"title": ""
},
{
"docid": "be75b351098bfda2829967a13b89c5fd",
"text": "Human activities such as international trade and travel promote biological invasions by accidentally or deliberately dispersing species outside their native biogeographical ranges (Lockwood, 2005; Alpert, 2006). Invasive species are now viewed as a significant component of global change and have become a serious threat to natural communities (Mack et al., 2000; Pyšek & Richardson, 2010). The ecological impact of invasive species has been observed in all types of ecosystems. Typically, invaders can change the niches of co-occurring species, alter the structure and function of ecosystems by degrading native communities and disrupt evolutionary processes through anthropogenic movement of species across physical and geographical barriers (D’Antonio & Vitousek, 1992; Mack et al., 2000; Richardson et al., 2000; Levine et al., 2003; Vitousek et al., 2011). Concerns for the implications and consequences of successful invasions have stimulated a considerable amount of research. Recent invasion research ranges from the developing testable hypotheses aimed at understanding the mechanisms of invasion to providing guidelines for control and management of invasive species. Several recent studies have used hyperspectral remote sensing (Underwood et al., 2003; Lass et al., 2005; Underwood Department of Biological Sciences, Murray State University, Murray, KY 42071, USA, Fondazione Edmund Mach, Research and Innovation Centre, Department of Biodiversity and Molecular Ecology, GIS and Remote Sensing Unit, Via E. Mach 1, 38010 S. Michele all’Adige, TN, Italy, Center for the Study of Institutions, Population, and Environmental Change, Indiana University, 408 N. Indiana Avenue, Bloomington, IN 47408, USA, Ashoka Trust for Research in Ecology and the Environment (ATREE), Royal Enclave, Srirampura, Jakkur Post, Bangalore 560064, India",
"title": ""
},
{
"docid": "906b6d1ddac67f9303ce86117b88edf2",
"text": "Over the years, we have harnessed the power of computing to improve the speed of operations and increase in productivity. Also, we have witnessed the merging of computing and telecommunications. This excellent combination of two important fields has propelled our capability even further, allowing us to communicate anytime and anywhere, improving our work flow and increasing our quality of life tremendously. The next wave of evolution we foresee is the convergence of telecommunication, computing, wireless, and transportation technologies. Once this happens, our roads and highways will be both our communications and transportation platforms, which will completely revolutionize when and how we access services and entertainment, how we communicate, commute, navigate, etc., in the coming future. This paper presents an overview of the current state-of-the-art, discusses current projects, their goals, and finally highlights how emergency services and road safety will evolve with the blending of vehicular communication networks with road transportation.",
"title": ""
},
{
"docid": "4f89160f87b862fdc471815b026511d1",
"text": "A procedure is described whereby a computer can determine whether two fingerpring impressions were made by the same finger. The procedure used the X and Y coordinates and the individual directions of the minutiae (ridge endings and bifurcations). The identity of two impressions is established by computing the density of clusters of points in AX and AY space where AX and AY are the differences in coordinates that are found in going from one of the fingerpring impressions to the other. Single fingerpring classification is discussed and experimental results using machine-read minutiae data are given. References: J. H. Wegstein, NBS Technical Notes 538 and 730. ~7 Information Processing for Radar Target Detection andClassification. A. KSIENSKI and L. WHITE, Ohio State-Previous research has demonstrated the feasibility of using multiple low-frequency radar returns for target classification. Simple object shapes have been successfully classified by such techniques, but aircraft data poses greater difficulty, as in general such data are not linearly separable. A misclassification error analysis is provided for aircraft data using k-nearest neighbor algorithms. Another recognition scheme involves the use of a bilinear fit of aircraft data; a misclassification error analysis is being prepared for this technique and will be reported. ~ A Parallel Machine for Silhouette Pre-Processing. PAUL NAHIN, Harvey Mudd-The concept of slope density is introduced as a descriptor of silhouettes. The mechanism of a parallel machine that extracts an approximation to the slope denisty is presented. The machine has been built by Aero-Jet, but because of its complexity, a digital simulation program has been developed I. The effect of sample and hold filtering on the machine output has been investigated, both theoretically, and via simulation. The design of a medical cell analyzer (i.e., marrow granulocyte precursor counter) incorporating the slope density machine is given. of Pittsburgh-In studying pictures of impossible objects, D. A. Huffman I described a labeling technique for interpreting a two dimensional line drawing as a picture of a polyhedron (a solid three dimensional object bounded by plane surfaces). Our work extends this method to interpret a set of planes in three dimensions as a \"picture\" of a four dimensional polyhedron. Huffman labeled each line in two dimensions as either i) concave, 2) convex with one visible plane, or 3) convex with two visible planes. A labeled line drawing is a valid interpretation iff the labeled lines intersect in one of twelve legal ways. Our method is …",
"title": ""
},
{
"docid": "fb4630a6b558ac9b8d8444275e1978e3",
"text": "Relational graphs are widely used in modeling large scale networks such as biological networks and social networks. In this kind of graph, connectivity becomes critical in identifying highly associated groups and clusters. In this paper, we investigate the issues of mining closed frequent graphs with connectivity constraints in massive relational graphs where each graph has around 10K nodes and 1M edges. We adopt the concept of edge connectivity and apply the results from graph theory, to speed up the mining process. Two approaches are developed to handle different mining requests: CloseCut, a pattern-growth approach, and splat, a pattern-reduction approach. We have applied these methods in biological datasets and found the discovered patterns interesting.",
"title": ""
},
{
"docid": "405182cedabc0c75c1b79052bd6db5b3",
"text": "Human resource management systems (HRMS) integrate human resource processes and an organization's information systems. An HRMS frequently represents one of the modules of an enterprise resource planning system (ERP). ERPs are information systems that manage the business and consist of integrated software applications such customer relations and supply chain management, manufacturing, finance and human resources. ERP implementation projects frequently have high failure rates; although research has investigated a number of factors for success and failure rates, limited attention has been directed toward the implementation teams, and how to make these more effective. In this paper we argue that shared leadership represents an appropriate approach to improving the functioning of ERP implementation teams. Shared leadership represents a form of team leadership where the team members, rather than only a single team leader, engage in leadership behaviors. While shared leadership has received increased research attention during the past decade, it has not been applied to ERP implementation teams and therefore that is the purpose of this article. Toward this end, we describe issues related to ERP and HRMS implementation, teams, and the concept of shared leadership, review theoretical and empirical literature, present an integrative framework, and describe the application of shared leadership to ERP and HRMS implementation. Published by Elsevier Inc.",
"title": ""
},
{
"docid": "d5c159a759aeace5085a7305609793e5",
"text": "In this paper, a new method is proposed to eliminate electrolytic capacitors in a two-stage ac-dc light-emitting diode (LED) driver. DC-biased sinusoidal or square-wave LED driving-current can help to reduce the power imbalance between ac input and dc output. In doing so, film capacitors can be adopted to improve LED driver's lifetime. The relationship between the peak-to-average ratio of the pulsating current in LEDs and the storage capacitance according to given storage capacitance is derived. Using the proposed “zero-low-level square-wave driving current” scheme, the storage capacitance in the LED driver can be reduced to 52.7% comparing with that in the driver using constant dc driving current. The input power factor is almost unity, which complies with lighting equipment standards such as IEC-1000-3-2 for Class C equipments. The voltage across the storage capacitors is analyzed and verified during the whole pulse width modulation dimming range. For the ease of dimming and implementation, a 50 W LED driver with zero-low-level square-wave driving current is built and the experimental results are presented to verify the proposed methods.",
"title": ""
},
{
"docid": "da7a2d40d2740e52ac7388fa23f1c797",
"text": "The use of business intelligence tools and other means to generate queries has led to great variety in the size of join queries. While most queries are reasonably small, join queries with up to a hundred relations are not that exotic anymore, and the distribution of query sizes has an incredible long tail. The largest real-world query that we are aware of accesses more than 4,000 relations. This large spread makes query optimization very challenging. Join ordering is known to be NP-hard, which means that we cannot hope to solve such large problems exactly. On the other hand most queries are much smaller, and there is no reason to sacrifice optimality there. This paper introduces an adaptive optimization framework that is able to solve most common join queries exactly, while simultaneously scaling to queries with thousands of joins. A key component there is a novel search space linearization technique that leads to near-optimal execution plans for large classes of queries. In addition, we describe implementation techniques that are necessary to scale join ordering algorithms to these extremely large queries. Extensive experiments with over 10 different approaches show that the new adaptive approach proposed here performs excellent over a huge spectrum of query sizes, and produces optimal or near-optimal solutions for most common queries.",
"title": ""
},
{
"docid": "afa0e5c40ed180b797c0e2e3ec7c62cb",
"text": "We present Science Assistments, an interactive environment, which assesses students’ inquiry skills as they engage in inquiry using science microworlds. We frame our variables, tasks, assessments, and methods of analyzing data in terms of evidence-centered design. Specifically, we focus on the student model, the task model, and the evidence model in the conceptual assessment framework. In order to support both assessment and the provision of scaffolding, the environment makes inferences about student inquiry skills using models developed through a combination of text replay tagging [cf. Sao Pedro et al. 2011], a method for rapid manual coding of student log files, and educational data mining. Models were developed for multiple inquiry skills, with particular focus on detecting if students are testing their articulated hypotheses, and if they are designing controlled experiments. Student-level cross-validation was applied to validate that this approach can automatically and accurately identify these inquiry skills for new students. The resulting detectors also can be applied at run-time to drive scaffolding intervention.",
"title": ""
},
{
"docid": "4f2112175c5d8175c5c0f8cb4d9185a2",
"text": "It is difficult to fully assess the quality of software inhouse, outside the actual time and context in which it will execute after deployment. As a result, it is common for software to manifest field failures, failures that occur on user machines due to untested behavior. Field failures are typically difficult to recreate and investigate on developer platforms, and existing techniques based on crash reporting provide only limited support for this task. In this paper, we present a technique for recording, reproducing, and minimizing failing executions that enables and supports inhouse debugging of field failures. We also present a tool that implements our technique and an empirical study that evaluates the technique on a widely used e-mail client.",
"title": ""
},
{
"docid": "cec75ff485e6575fbf58cb5553e1f8e9",
"text": "Preparation for the role of therapist can occur on both professional and personal levels. Research has found that therapists are at risk for occupationally related psychological problems. It follows that self-care may be a useful complement to the professional training of future therapists. The present study examined the effects of one approach to self-care, Mindfulness-Based Stress Reduction (MBSR), for therapists in training. Using a prospective, cohort-controlled design, the study found participants in the MBSR program reported significant declines in stress, negative affect, rumination, state and trait anxiety, and significant increases in positive affect and self-compassion. Further, MBSR participation was associated with increases in mindfulness, and this enhancement was related to several of the beneficial effects of MBSR participation. Discussion highlights the potential for future research addressing the mental health needs of therapists and therapist trainees.",
"title": ""
},
{
"docid": "c32b7f497450d92634ea097bbb062178",
"text": "This work addresses fine-grained image classification. Our work is based on the hypothesis that when dealing with subtle differences among object classes it is critical to identify and only account for a few informative image parts, as the remaining image context may not only be uninformative but may also hurt recognition. This motivates us to formulate our problem as a sequential search for informative parts over a deep feature map produced by a deep Convolutional Neural Network (CNN). A state of this search is a set of proposal bounding boxes in the image, whose informativeness is evaluated by the heuristic function (H), and used for generating new candidate states by the successor function (S). The two functions are unified via a Long Short-Term Memory network (LSTM) into a new deep recurrent architecture, called HSnet. Thus, HSnet (i) generates proposals of informative image parts and (ii) fuses all proposals toward final fine-grained recognition. We specify both supervised and weakly supervised training of HSnet depending on the availability of object part annotations. Evaluation on the benchmark Caltech-UCSD Birds 200-2011 and Cars-196 datasets demonstrate our competitive performance relative to the state of the art.",
"title": ""
},
{
"docid": "4d857311f86baca70700bb78c8771f22",
"text": "Randomization is a key element in sequential and distributed computing. Reasoning about randomized algorithms is highly non-trivial. In the 1980s, this initiated first proof methods, logics, and model-checking algorithms. The field of probabilistic verification has developed considerably since then. This paper surveys the algorithmic verification of probabilistic models, in particular probabilistic model checking. We provide an informal account of the main models, the underlying algorithms, applications from reliability and dependability analysis---and beyond---and describe recent developments towards automated parameter synthesis.",
"title": ""
},
{
"docid": "626470bd5182dd2a6d4e8a09b31731df",
"text": "In this paper, we present a semi-supervised method for automatic speech act recognition in email and forums. The major challenge of this task is due to lack of labeled data in these two genres. Our method leverages labeled data in the SwitchboardDAMSL and the Meeting Recorder Dialog Act database and applies simple domain adaptation techniques over a large amount of unlabeled email and forum data to address this problem. Our method uses automatically extracted features such as phrases and dependency trees, called subtree features, for semi-supervised learning. Empirical results demonstrate that our model is effective in email and forum speech act recognition.",
"title": ""
},
{
"docid": "1d606f39d429c5f344d5d3bc6810f2f9",
"text": "Cryptography is increasingly applied to the E-commerce world, especially to the untraceable payment system and the electronic voting system. Protocols for these systems strongly require the anonymous digital signature property, and thus a blind signature strategy is the answer to it. Chaum stated that every blind signature protocol should hold two fundamental properties, blindness and intractableness. All blind signature schemes proposed previously almost are based on the integer factorization problems, discrete logarithm problems, or the quadratic residues, which are shown by Lee et al. that none of the schemes is able to meet the two fundamental properties above. Therefore, an ECC-based blind signature scheme that possesses both the above properties is proposed in this paper.",
"title": ""
},
{
"docid": "a79f9ad24c4f047d8ace297b681ccf0a",
"text": "BACKGROUND\nLe Fort III distraction advances the Apert midface but leaves the central concavity and vertical compression untreated. The authors propose that Le Fort II distraction and simultaneous zygomatic repositioning as a combined procedure can move the central midface and lateral orbits in independent vectors in order to improve the facial deformity. The purpose of this study was to determine whether this segmental movement results in more normal facial proportions than Le Fort III distraction.\n\n\nMETHODS\nComputed tomographic scan analyses were performed before and after distraction in patients undergoing Le Fort III distraction (n = 5) and Le Fort II distraction with simultaneous zygomatic repositioning (n = 4). The calculated axial facial ratios and vertical facial ratios relative to the skull base were compared to those of unoperated Crouzon (n = 5) and normal (n = 6) controls.\n\n\nRESULTS\nWith Le Fort III distraction, facial ratios did not change with surgery and remained lower (p < 0.01; paired t test comparison) than normal and Crouzon controls. Although the face was advanced, its shape remained abnormal. With the Le Fort II segmental movement procedure, the central face advanced and lengthened more than the lateral orbit. This differential movement changed the abnormal facial ratios that were present before surgery into ratios that were not significantly different from normal controls (p > 0.05).\n\n\nCONCLUSION\nCompared with Le Fort III distraction, Le Fort II distraction with simultaneous zygomatic repositioning normalizes the position and the shape of the Apert face.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, III.",
"title": ""
},
{
"docid": "baa3d41ba1970125301b0fdd9380a966",
"text": "This article provides an alternative perspective for measuring author impact by applying PageRank algorithm to a coauthorship network. A weighted PageRank algorithm considering citation and coauthorship network topology is proposed. We test this algorithm under different damping factors by evaluating author impact in the informetrics research community. In addition, we also compare this weighted PageRank with the h-index, citation, and program committee (PC) membership of the International Society for Scientometrics and Informetrics (ISSI) conferences. Findings show that this weighted PageRank algorithm provides reliable results in measuring author impact.",
"title": ""
},
{
"docid": "e2807120a8a04a9c5f5f221e413aec4d",
"text": "Background A military aircraft in a hostile environment may need to use radar jamming in order to avoid being detected or engaged by the enemy. Effective jamming can require knowledge of the number and type of enemy radars; however, the radar receiver on the aircraft will observe a single stream of pulses from all radar emitters combined. It is advantageous to separate this collection of pulses into individual streams each corresponding to a particular emitter in the environment; this process is known as pulse deinterleaving. Pulse deinterleaving is critical for effective electronic warfare (EW) signal processing such as electronic attack (EA) and electronic protection (EP) because it not only aids in the identification of enemy radars but also permits the intelligent allocation of processing resources.",
"title": ""
},
{
"docid": "c6c9643816533237a29dd93fd420018f",
"text": "We present an algorithm for finding a meaningful vertex-to-vertex correspondence between two 3D shapes given as triangle meshes. Our algorithm operates on embeddings of the two shapes in the spectral domain so as to normalize them with respect to uniform scaling and rigid-body transformation. Invariance to shape bending is achieved by relying on geodesic point proximities on a mesh to capture its shape. To deal with stretching, we propose to use non-rigid alignment via thin-plate splines in the spectral domain. This is combined with a refinement step based on the geodesic proximities to improve dense correspondence. We show empirically that our algorithm outperforms previous spectral methods, as well as schemes that compute correspondence in the spatial domain via non-rigid iterative closest points or the use of local shape descriptors, e.g., 3D shape context",
"title": ""
},
{
"docid": "b866fc215dbae6538e998b249563e78d",
"text": "The term `heavy metal' is, in this context, imprecise. It should probably be reserved for those elements with an atomic mass of 200 or greater [e.g., mercury (200), thallium (204), lead (207), bismuth (209) and the thorium series]. In practice, the term has come to embrace any metal, exposure to which is clinically undesirable and which constitutes a potential hazard. Our intention in this review is to provide an overview of some general concepts of metal toxicology and to discuss in detail metals of particular importance, namely, cadmium, lead, mercury, thallium, bismuth, arsenic, antimony and tin. Poisoning from individual metals is rare in the UK, even when there is a known risk of exposure. Table 1 shows that during 1991±92 only 1 ́1% of male lead workers in the UK and 5 ́5% of female workers exceeded the legal limits for blood lead concentration. Collectively, however, poisoning with metals forms an important aspect of toxicology because of their widespread use and availability. Furthermore, hitherto unrecognized hazards and accidents continue to be described. The investigation of metal poisoning forms a distinct specialist area, since most metals are usually measured using atomic absorption techniques. Analyses require considerable expertise and meticulous attention to detail to ensure valid results. Different analytical performance standards may be required of assays used for environmental and occupational monitoring, or for solely toxicological purposes. Because of the high capital cost of good quality instruments, the relatively small numbers of tests required and the variety of metals, it is more cost-effective if such testing is carried out in regional, national or other centres having the necessary experience. Nevertheless, patients are frequently cared for locally, and clinical biochemists play a crucial role in maintaining a high index of suspicion and liaising with clinical colleagues to ensure the provision of correct samples for analysis and timely advice.",
"title": ""
}
] | scidocsrr |
c35e7e52def503d263f4bb3cd50ff96a | Online Collaborative Learning for Open-Vocabulary Visual Classifiers | [
{
"docid": "df163d94fbf0414af1dde4a9e7fe7624",
"text": "This paper introduces a web image dataset created by NUS's Lab for Media Search. The dataset includes: (1) 269,648 images and the associated tags from Flickr, with a total of 5,018 unique tags; (2) six types of low-level features extracted from these images, including 64-D color histogram, 144-D color correlogram, 73-D edge direction histogram, 128-D wavelet texture, 225-D block-wise color moments extracted over 5x5 fixed grid partitions, and 500-D bag of words based on SIFT descriptions; and (3) ground-truth for 81 concepts that can be used for evaluation. Based on this dataset, we highlight characteristics of Web image collections and identify four research issues on web image annotation and retrieval. We also provide the baseline results for web image annotation by learning from the tags using the traditional k-NN algorithm. The benchmark results indicate that it is possible to learn effective models from sufficiently large image dataset to facilitate general image retrieval.",
"title": ""
}
] | [
{
"docid": "71b09fba5c4054af268da7c0037253e6",
"text": "Recurrent neural networks are now the state-of-the-art in natural language processing because they can build rich contextual representations and process texts of arbitrary length. However, recent developments on attention mechanisms have equipped feedforward networks with similar capabilities, hence enabling faster computations due to the increase in the number of operations that can be parallelized. We explore this new type of architecture in the domain of question-answering and propose a novel approach that we call Fully Attention Based Information Retriever (FABIR). We show that FABIR achieves competitive results in the Stanford Question Answering Dataset (SQuAD) while having fewer parameters and being faster at both learning and inference than rival methods.",
"title": ""
},
{
"docid": "6da632d61dbda324da5f74b38f25b1b9",
"text": "Deep neural networks have shown good data modelling capabilities when dealing with challenging and large datasets from a wide range of application areas. Convolutional Neural Networks (CNNs) offer advantages in selecting good features and Long Short-Term Memory (LSTM) networks have proven good abilities of learning sequential data. Both approaches have been reported to provide improved results in areas such image processing, voice recognition, language translation and other Natural Language Processing (NLP) tasks. Sentiment classification for short text messages from Twitter is a challenging task, and the complexity increases for Arabic language sentiment classification tasks because Arabic is a rich language in morphology. In addition, the availability of accurate pre-processing tools for Arabic is another current limitation, along with limited research available in this area. In this paper, we investigate the benefits of integrating CNNs and LSTMs and report obtained improved accuracy for Arabic sentiment analysis on different datasets. Additionally, we seek to consider the morphological diversity of particular Arabic words by using different sentiment classification levels.",
"title": ""
},
{
"docid": "3cc97542631d734d8014abfbef652c79",
"text": "Internet exchange points (IXPs) are an important ingredient of the Internet AS-level ecosystem - a logical fabric of the Internet made up of about 30,000 ASes and their mutual business relationships whose primary purpose is to control and manage the flow of traffic. Despite the IXPs' critical role in this fabric, little is known about them in terms of their peering matrices (i.e., who peers with whom at which IXP) and corresponding traffic matrices (i.e., how much traffic do the different ASes that peer at an IXP exchange with one another). In this paper, we report on an Internet-wide traceroute study that was specifically designed to shed light on the unknown IXP-specific peering matrices and involves targeted traceroutes from publicly available and geographically dispersed vantage points. Based on our method, we were able to discover and validate the existence of about 44K IXP-specific peering links - nearly 18K more links than were previously known. In the process, we also classified all known IXPs depending on the type of information required to detect them. Moreover, in view of the currently used inferred AS-level maps of the Internet that are known to miss a significant portion of the actual AS relationships of the peer-to-peer type, our study provides a new method for augmenting these maps with IXP-related peering links in a systematic and informed manner.",
"title": ""
},
{
"docid": "a1367b21acfebfe35edf541cdc6e3f48",
"text": "Mobile phone sensing is an emerging area of interest for researchers as smart phones are becoming the core communication device in people's everyday lives. Sensor enabled mobile phones or smart phones are hovering to be at the center of a next revolution in social networks, green applications, global environmental monitoring, personal and community healthcare, sensor augmented gaming, virtual reality and smart transportation systems. More and more organizations and people are discovering how mobile phones can be used for social impact, including how to use mobile technology for environmental protection, sensing, and to leverage just-in-time information to make our movements and actions more environmentally friendly. In this paper we have described comprehensively all those systems which are using smart phones and mobile phone sensors for humans good will and better human phone interaction.",
"title": ""
},
{
"docid": "2ad2c5fe41133827fa0fdcbf62b3c1e6",
"text": "We describe sensing techniques motivated by unique aspects of human-computer interaction with handheld devices in mobile settings. Special features of mobile interaction include changing orientation and position, changing venues, the use of computing as auxiliary to ongoing, real-world activities like talking to a colleague, and the general intimacy of use for such devices. We introduce and integrate a set of sensors into a handheld device, and demonstrate several new functionalities engendered by the sensors, such as recording memos when the device is held like a cell phone, switching between portrait and landscape display modes by holding the device in the desired orientation, automatically powering up the device when the user picks it up the device to start using it, and scrolling the display using tilt. We present an informal experiment, initial usability testing results, and user reactions to these techniques.",
"title": ""
},
{
"docid": "3380497ab11a7f0e34e8095d35a83f71",
"text": "The reparameterization gradient has become a widely used method to obtain Monte Carlo gradients to optimize the variational objective. However, this technique does not easily apply to commonly used distributions such as beta or gamma without further approximations, and most practical applications of the reparameterization gradient fit Gaussian distributions. In this paper, we introduce the generalized reparameterization gradient, a method that extends the reparameterization gradient to a wider class of variational distributions. Generalized reparameterizations use invertible transformations of the latent variables which lead to transformed distributions that weakly depend on the variational parameters. This results in new Monte Carlo gradients that combine reparameterization gradients and score function gradients. We demonstrate our approach on variational inference for two complex probabilistic models. The generalized reparameterization is e ective: even a single sample from the variational distribution is enough to obtain a low-variance gradient.",
"title": ""
},
{
"docid": "d2bf22468506f1f8b9119796da465f0a",
"text": "We define a language G for querying data represented as a labeled graph G. By considering G as a relation, this graphical query language can be viewed as a relational query language, and its expressive power can be compared to that of other relational query languages. We do not propose G as an alternative to general purpose relational query languages, but rather as a complementary language in which recursive queries are simple to formulate. The user is aided in this formulation by means of a graphical interface. The provision of regular expressions in G allows recursive queries more general than transitive closure to be posed, although the language is not as powerful as those based on function-free Horn clauses. However, we hope to be able to exploit well-known graph algorithms in evaluating recursive queries efficiently, a topic which has received widespread attention recently.",
"title": ""
},
{
"docid": "81349ac7f7a4011ccad32e5c2b392533",
"text": "In this literature a new design of printed antipodal UWB vivaldi antenna is proposed. The design is further modified for acquiring notch characteristics in the WLAN band and high front to backlobe ratio (F/B). The modifications are done on the ground plane of the antenna. Previous literatures have shown that the incorporation of planar meta-material structures on the CPW plane along the feed can produce notch characteristics. Here, a novel concept is introduced regarding antipodal vivaldi antenna. In the ground plane of the antenna, square ring resonator (SRR) structure slot and circular ring resonator (CRR) structure slot are cut to produce the notch characteristic on the WLAN band. The designed antenna covers a bandwidth of 6.8 GHz (2.7 GHz–9.5 GHz) and it can be useful for a large range of wireless applications like satellite communication applications and biomedical applications where directional radiation characteristic is needed. The designed antenna shows better impedance matching in the above said band. A parametric study is also performed on the antenna design to optimize the performance of the antenna. The size of the antenna is 40×44×1.57 mm3. It is designed and simulated using HFSS. The presented prototype offers well directive radiation characteristics, good gain and efficiency.",
"title": ""
},
{
"docid": "9eaf4895f0bf86f8403de61d4a82d39a",
"text": "OBJECTIVE\nTo describe a new surgical technique to treat pectus excavatum utilizing low hardness solid silicone block that can be carved during the intraoperative period promoting a better aesthetic result.\n\n\nMETHODS\nBetween May 1994 and February 2013, 34 male patients presenting pectus excavatum were submitted to surgical repair with the use of low hardness solid silicone block, 10 to 30 Shore A. A block-shaped parallelepiped was used with height and base size coinciding with those of the bone defect. The block was carved intraoperatively according to the shape of the dissected space. The patients were followed for a minimum of 120 days postoperatively. The results and the complications were recorded.\n\n\nRESULTS\nFrom the 34 patients operated on, 28 were primary surgeries and 6 were secondary treatment, using other surgical techniques, bone or implant procedures. Postoperative complications included two case of hematomas and eight of seromas. It was necessary to remove the implant in one patient due to pain, and review surgery was performed in another to check prothesis dimensions. Two patients were submitted to fat grafting to improve the chest wall contour. The result was considered satisfactory in 33 patients.\n\n\nCONCLUSION\nThe procedure proved to be fast and effective. The results of carved silicone block were more effective for allowing a more refined contour as compared to custom made implants.",
"title": ""
},
{
"docid": "ad49595bd04c3285be2939e4ced77551",
"text": "Embedded systems have found a very strong foothold in global Information Technology (IT) market since they can provide very specialized and intricate functionality to a wide range of products. On the other hand, the migration of IT functionality to a plethora of new smart devices (like mobile phones, cars, aviation, game or households machines) has enabled the collection of a considerable number of data that can be characterized sensitive. Therefore, there is a need for protecting that data through IT security means. However, eare usually dployed in hostile environments where they can be easily subject of physical attacks. In this paper, we provide an overview from ES hardware perspective of methods and mechanisms for providing strong security and trust. The various categories of physical attacks on security related embedded systems are presented along with countermeasures to thwart them and the importance of reconfigurable logic flexibility, adaptability and scalability along with trust protection mechanisms is highlighted. We adopt those mechanisms in order to propose a FPGA based embedded system hardware architecture capable of providing security and trust along with physical attack protection using trust zone separation. The benefits of such approach are discussed and a subsystem of the proposed architecture is implemented in FPGA technology as a proof of concept case study. From the performed analysis and implementation, it is concluded that flexibility, security and trust are fully realistic options for embedded system security enhancement. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6f96ac41b772d7b0134dcf613a726e87",
"text": "OBJECTIVE\nThe objective of this research was to explore the effects of risperidone on cognitive processes in children with autism and irritable behavior.\n\n\nMETHOD\nThirty-eight children, ages 5-17 years with autism and severe behavioral disturbance, were randomly assigned to risperidone (0.5 to 3.5 mg/day) or placebo for 8 weeks. This sample of 38 was a subset of 101 subjects who participated in the clinical trial; 63 were unable to perform the cognitive tasks. A double-blind placebo-controlled parallel groups design was used. Dependent measures included tests of sustained attention, verbal learning, hand-eye coordination, and spatial memory assessed before, during, and after the 8-week treatment. Changes in performance were compared by repeated measures ANOVA.\n\n\nRESULTS\nTwenty-nine boys and 9 girls with autism and severe behavioral disturbance and a mental age >or=18 months completed the cognitive part of the study. No decline in performance occurred with risperidone. Performance on a cancellation task (number of correct detections) and a verbal learning task (word recognition) was better on risperidone than on placebo (without correction for multiplicity). Equivocal improvement also occurred on a spatial memory task. There were no significant differences between treatment conditions on the Purdue Pegboard (hand-eye coordination) task or the Analog Classroom Task (timed math test).\n\n\nCONCLUSION\nRisperidone given to children with autism at doses up to 3.5 mg for up to 8 weeks appears to have no detrimental effect on cognitive performance.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "68f797b34880bf08a8825332165a955b",
"text": "The immune system responds to pathogens by a variety of pattern recognition molecules such as the Toll-like receptors (TLRs), which promote recognition of dangerous foreign pathogens. However, recent evidence indicates that normal intestinal microbiota might also positively influence immune responses, and protect against the development of inflammatory diseases. One of these elements may be short-chain fatty acids (SCFAs), which are produced by fermentation of dietary fibre by intestinal microbiota. A feature of human ulcerative colitis and other colitic diseases is a change in ‘healthy’ microbiota such as Bifidobacterium and Bacteriodes, and a concurrent reduction in SCFAs. Moreover, increased intake of fermentable dietary fibre, or SCFAs, seems to be clinically beneficial in the treatment of colitis. SCFAs bind the G-protein-coupled receptor 43 (GPR43, also known as FFAR2), and here we show that SCFA–GPR43 interactions profoundly affect inflammatory responses. Stimulation of GPR43 by SCFAs was necessary for the normal resolution of certain inflammatory responses, because GPR43-deficient (Gpr43-/-) mice showed exacerbated or unresolving inflammation in models of colitis, arthritis and asthma. This seemed to relate to increased production of inflammatory mediators by Gpr43-/- immune cells, and increased immune cell recruitment. Germ-free mice, which are devoid of bacteria and express little or no SCFAs, showed a similar dysregulation of certain inflammatory responses. GPR43 binding of SCFAs potentially provides a molecular link between diet, gastrointestinal bacterial metabolism, and immune and inflammatory responses.",
"title": ""
},
{
"docid": "01ff7e55830977622482ab018acd2cfe",
"text": "Dictionary learning has been widely used in many image processing tasks. In most of these methods, the number of basis vectors is either set by experience or coarsely evaluated empirically. In this paper, we propose a new scale adaptive dictionary learning framework, which jointly estimates suitable scales and corresponding atoms in an adaptive fashion according to the training data, without the need of prior information. We design an atom counting function and develop a reliable numerical scheme to solve the challenging optimization problem. Extensive experiments on texture and video data sets demonstrate quantitatively and visually that our method can estimate the scale, without damaging the sparse reconstruction ability.",
"title": ""
},
{
"docid": "73545ef815fb22fa048fed3e0bc2cc8b",
"text": "Redox-based resistive switching devices (ReRAM) are an emerging class of nonvolatile storage elements suited for nanoscale memory applications. In terms of logic operations, ReRAM devices were suggested to be used as programmable interconnects, large-scale look-up tables or for sequential logic operations. However, without additional selector devices these approaches are not suited for use in large scale nanocrossbar memory arrays, which is the preferred architecture for ReRAM devices due to the minimum area consumption. To overcome this issue for the sequential logic approach, we recently introduced a novel concept, which is suited for passive crossbar arrays using complementary resistive switches (CRSs). CRS cells offer two high resistive storage states, and thus, parasitic “sneak” currents are efficiently avoided. However, until now the CRS-based logic-in-memory approach was only shown to be able to perform basic Boolean logic operations using a single CRS cell. In this paper, we introduce two multi-bit adder schemes using the CRS-based logic-in-memory approach. We proof the concepts by means of SPICE simulations using a dynamical memristive device model of a ReRAM cell. Finally, we show the advantages of our novel adder concept in terms of step count and number of devices in comparison to a recently published adder approach, which applies the conventional ReRAM-based sequential logic concept introduced by Borghetti et al.",
"title": ""
},
{
"docid": "ddd8c2c44ecb82f7892bed163610f4aa",
"text": "Our aim is to make shape memory alloys (SMAs) accessible and visible as creative crafting materials by combining them with paper. In this paper, we begin by presenting mechanisms for actuating paper with SMAs along with a set of design guidelines for achieving dramatic movement. We then describe how we tested the usability and educational potential of one of these mechanisms in a workshop where participants, age 9 to 15, made actuated electronic origami cranes. We found that participants were able to successfully build constructions integrating SMAs and paper, that they enjoyed doing so, and were able to learn skills like circuitry design and soldering over the course of the workshop.",
"title": ""
},
{
"docid": "1f247e127866e62029310218c380bc31",
"text": "Human Resource is the most important asset for any organization and it is the resource of achieving competitive advantage. Managing human resources is very challenging as compared to managing technology or capital and for its effective management, organization requires effective HRM system. HRM system should be backed up by strong HRM practices. HRM practices refer to organizational activities directed at managing the group of human resources and ensuring that the resources are employed towards the fulfillment of organizational goals. The purpose of this study is to explore contribution of Human Resource Management (HRM) practices including selection, training, career planning, compensation, performance appraisal, job definition and employee participation on perceived employee performance. This research describe why human resource management (HRM) decisions are likely to have an important and unique influence on organizational performance. This research forum will help advance research on the link between HRM and organizational performance. Unresolved questions is trying to identify in need of future study and make several suggestions intended to help researchers studying these questions build a more cumulative body of knowledge that will have key implications for body theory and practice. This study comprehensively evaluated the links between systems of High Performance Work Practices and firm performance. Results based on a national sample of firms indicate that these practices have an economically and statistically significant impact on employee performance. Support for predictions that the impact of High Performance Work Practices on firm performance is in part contingent on their interrelationships and links with competitive strategy was limited.",
"title": ""
},
{
"docid": "4df6bbfaa8842d88df0b916946c59ea3",
"text": "Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. \n We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings.",
"title": ""
},
{
"docid": "05a7c2820178ea33f79ace6c5bb1a4fa",
"text": "Researchers have explored the design of ambient information systems across a wide range of physical and screen-based media. This work has yielded rich examples of design approaches to the problem of presenting information about a user's world in a way that is not distracting, but is aesthetically pleasing, and tangible to varying degrees. Despite these successes, accumulating theoretical and craft knowledge has been stymied by the lack of a unified vocabulary to describe these systems and a consequent lack of a framework for understanding their design attributes. We argue that this area would significantly benefit from consensus about the design space of ambient information systems and the design attributes that define and distinguish existing approaches. We present a definition of ambient information systems and a taxonomy across four design dimensions: Information Capacity, Notification Level, Representational Fidelity, and Aesthetic Emphasis. Our analysis has uncovered four patterns of system design and points to unexplored regions of the design space, which may motivate future work in the field.",
"title": ""
},
{
"docid": "05bb807afbfa8397c76039afe8c50274",
"text": "In autonomous drone racing, a drone is required to fly through the gates quickly without any collision. Therefore, it is important to detect the gates reliably using computer vision. However, due to the complications such as varying lighting conditions and gates seen overlapped, traditional image processing algorithms based on color and geometry of the gates tend to fail during the actual racing. In this letter, we introduce a convolutional neural network to estimate the center of a gate robustly. Using the detection results, we apply a line-of-sight guidance algorithm. The proposed algorithm is implemented using low cost, off-the-shelf hardware for validation. All vision processing is performed in real time on the onboard NVIDIA Jetson TX2 embedded computer. In a number of tests our proposed framework successfully exhibited fast and reliable detection and navigation performance in indoor environment.",
"title": ""
}
] | scidocsrr |
6eba4926c68232b11cea5f89f5dbf693 | Towards Bayesian Deep Learning: A Survey | [
{
"docid": "46cd71806e85374c36bc77ea28293ecb",
"text": "In this paper we introduce a novel collapsed Gibbs sampling method for the widely used latent Dirichlet allocation (LDA) model. Our new method results in significant speedups on real world text corpora. Conventional Gibbs sampling schemes for LDA require O(K) operations per sample where K is the number of topics in the model. Our proposed method draws equivalent samples but requires on average significantly less then K operations per sample. On real-word corpora FastLDA can be as much as 8 times faster than the standard collapsed Gibbs sampler for LDA. No approximations are necessary, and we show that our fast sampling scheme produces exactly the same results as the standard (but slower) sampling scheme. Experiments on four real world data sets demonstrate speedups for a wide range of collection sizes. For the PubMed collection of over 8 million documents with a required computation time of 6 CPU months for LDA, our speedup of 5.7 can save 5 CPU months of computation.",
"title": ""
},
{
"docid": "9e45bc3ac789fd1343e4e400b7f0218e",
"text": "Due to its successful application in recommender systems, collaborative filtering (CF) has become a hot research topic in data mining and information retrieval. In traditional CF methods, only the feedback matrix, which contains either explicit feedback (also called ratings) or implicit feedback on the items given by users, is used for training and prediction. Typically, the feedback matrix is sparse, which means that most users interact with few items. Due to this sparsity problem, traditional CF with only feedback information will suffer from unsatisfactory performance. Recently, many researchers have proposed to utilize auxiliary information, such as item content (attributes), to alleviate the data sparsity problem in CF. Collaborative topic regression (CTR) is one of these methods which has achieved promising performance by successfully integrating both feedback information and item content information. In many real applications, besides the feedback and item content information, there may exist relations (also known as networks) among the items which can be helpful for recommendation. In this paper, we develop a novel hierarchical Bayesian model called Relational Collaborative Topic Regression (RCTR), which extends CTR by seamlessly integrating the user-item feedback information, item content information, and network structure among items into the same model. Experiments on real-world datasets show that our model can achieve better prediction accuracy than the state-of-the-art methods with lower empirical training time. Moreover, RCTR can learn good interpretable latent structures which are useful for recommendation.",
"title": ""
}
] | [
{
"docid": "eeba7960e52f351405b4be37a0c9174a",
"text": "While vehicle license plate recognition (VLPR) is usually done with a sliding window approach, it can have limited performance on datasets with characters that are of variable width. This can be solved by hand-crafting algorithms to prescale the characters. While this approach can work fairly well, the recognizer is only aware of the pixels within each detector window, and fails to account for other contextual information that might be present in other parts of the image. A sliding window approach also requires training data in the form of presegmented characters, which can be more difficult to obtain. In this paper, we propose a unified ConvNet-RNN model to recognize real-world captured license plate photographs. By using a Convolutional Neural Network (ConvNet) to perform feature extraction and using a Recurrent Neural Network (RNN) for sequencing, we address the problem of sliding window approaches being unable to access the context of the entire image by feeding the entire image as input to the ConvNet. This has the added benefit of being able to perform end-to-end training of the entire model on labelled, full license plate images. Experimental results comparing the ConvNet-RNN architecture to a sliding window-based approach shows that the ConvNet-RNN architecture performs significantly better. Keywords—Vehicle license plate recognition, end-to-end recognition, ConvNet-RNN, segmentation-free recognition",
"title": ""
},
{
"docid": "c88f5359fc6dc0cac2c0bd53cea989ee",
"text": "Automatic detection and monitoring of oil spills and illegal oil discharges is of fundamental importance in ensuring compliance with marine legislation and protection of the coastal environments, which are under considerable threat from intentional or accidental oil spills, uncontrolled sewage and wastewater discharged. In this paper the level set based image segmentation was evaluated for the real-time detection and tracking of oil spills from SAR imagery. The developed processing scheme consists of a preprocessing step, in which an advanced image simplification is taking place, followed by a geometric level set segmentation for the detection of the possible oil spills. Finally a classification was performed, for the separation of lookalikes, leading to oil spill extraction. Experimental results demonstrate that the level set segmentation is a robust tool for the detection of possible oil spills, copes well with abrupt shape deformations and splits and outperforms earlier efforts which were based on different types of threshold or edge detection techniques. The developed algorithm’s efficiency for real-time oil spill detection and monitoring was also tested.",
"title": ""
},
{
"docid": "8dd540b33035904f63c67b57d4c97aa3",
"text": "Wireless local area networks (WLANs) based on the IEEE 802.11 standards are one of today’s fastest growing technologies in businesses, schools, and homes, for good reasons. As WLAN deployments increase, so does the challenge to provide these networks with security. Security risks can originate either due to technical lapse in the security mechanisms or due to defects in software implementations. Standard Bodies and researchers have mainly used UML state machines to address the implementation issues. In this paper we propose the use of GSE methodology to analyse the incompleteness and uncertainties in specifications. The IEEE 802.11i security protocol is used as an example to compare the effectiveness of the GSE and UML models. The GSE methodology was found to be more effective in identifying ambiguities in specifications and inconsistencies between the specification and the state machines. Resolving all issues, we represent the robust security network (RSN) proposed in the IEEE 802.11i standard using different GSE models.",
"title": ""
},
{
"docid": "338dcbb45ff0c1752eeb34ec1be1babe",
"text": "I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural",
"title": ""
},
{
"docid": "53a1d344a6e38dd790e58c6952e51cdb",
"text": "The thermal conductivities of individual single crystalline intrinsic Si nanowires with diameters of 22, 37, 56, and 115 nm were measured using a microfabricated suspended device over a temperature range of 20–320 K. Although the nanowires had well-defined crystalline order, the thermal conductivity observed was more than two orders of magnitude lower than the bulk value. The strong diameter dependence of thermal conductivity in nanowires was ascribed to the increased phonon-boundary scattering and possible phonon spectrum modification. © 2003 American Institute of Physics.@DOI: 10.1063/1.1616981 #",
"title": ""
},
{
"docid": "826e54e8e46dcea0451b53645e679d55",
"text": "Microtia is a congenital disease with various degrees of severity, ranging from the presence of rudimentary and malformed vestigial structures to the total absence of the ear (anotia). The complex anatomy of the external ear and the necessity to provide good projection and symmetry make this reconstruction particularly difficult. The aim of this work is to report our surgical technique of microtic ear correction and to analyse the short and long term results. From 2000 to 2013, 210 patients affected by microtia were treated at the Maxillo-Facial Surgery Division, Head and Neck Department, University Hospital of Parma. The patient population consisted of 95 women and 115 men, aged from 7 to 49 years. A total of 225 reconstructions have been performed in two surgical stages basing of Firmin's technique with some modifications and refinements. The first stage consists in fabrication and grafting of a three-dimensional costal cartilage framework. The second stage is performed 5-6 months later: the reconstructed ear is raised up and an additional cartilaginous graft is used to increase its projection. A mastoid fascial flap together with a skin graft are then used to protect the cartilage graft. All reconstructions were performed without any major complication. The results have been considered satisfactory by all patients starting from the first surgical step. Low morbidity, the good results obtained and a high rate of patient satisfaction make our protocol an optimal choice for treatment of microtia. The surgeon's experience and postoperative patient care must be considered as essential aspects of treatment.",
"title": ""
},
{
"docid": "ccedb6cff054254f3427ab0d45017d2a",
"text": "Traffic and power generation are the main sources of urban air pollution. The idea that outdoor air pollution can cause exacerbations of pre-existing asthma is supported by an evidence base that has been accumulating for several decades, with several studies suggesting a contribution to new-onset asthma as well. In this Series paper, we discuss the effects of particulate matter (PM), gaseous pollutants (ozone, nitrogen dioxide, and sulphur dioxide), and mixed traffic-related air pollution. We focus on clinical studies, both epidemiological and experimental, published in the previous 5 years. From a mechanistic perspective, air pollutants probably cause oxidative injury to the airways, leading to inflammation, remodelling, and increased risk of sensitisation. Although several pollutants have been linked to new-onset asthma, the strength of the evidence is variable. We also discuss clinical implications, policy issues, and research gaps relevant to air pollution and asthma.",
"title": ""
},
{
"docid": "a461592a276b13a6a25c25ab64c23d61",
"text": "To maintain the integrity of an organism constantly challenged by pathogens, the immune system is endowed with a variety of cell types. B lymphocytes were initially thought to only play a role in the adaptive branch of immunity. However, a number of converging observations revealed that two B-cell subsets, marginal zone (MZ) and B1 cells, exhibit unique developmental and functional characteristics, and can contribute to innate immune responses. In addition to their capacity to mount a local antibody response against type-2 T-cell-independent (TI-2) antigens, MZ B-cells can participate to T-cell-dependent (TD) immune responses through the capture and import of blood-borne antigens to follicular areas of the spleen. Here, we discuss the multiple roles of MZ B-cells in humans, non-human primates, and rodents. We also summarize studies - performed in transgenic mice expressing fully human antibodies on their B-cells and in macaques whose infection with Simian immunodeficiency virus (SIV) represents a suitable model for HIV-1 infection in humans - showing that infectious agents have developed strategies to subvert MZ B-cell functions. In these two experimental models, we observed that two microbial superantigens for B-cells (protein A from Staphylococcus aureus and protein L from Peptostreptococcus magnus) as well as inactivated AT-2 virions of HIV-1 and infectious SIV preferentially deplete innate-like B-cells - MZ B-cells and/or B1 B-cells - with different consequences on TI and TD antibody responses. These data revealed that viruses and bacteria have developed strategies to deplete innate-like B-cells during the acute phase of infection and to impair the antibody response. Unraveling the intimate mechanisms responsible for targeting MZ B-cells in humans will be important for understanding disease pathogenesis and for designing novel vaccine strategies.",
"title": ""
},
{
"docid": "9563b47a73e41292599c368e1dfcd40a",
"text": "Non-functional requirements are an important, and often critical, aspect of any software system. However, determining the degree to which any particular software system meets such requirements and incorporating such considerations into the software design process is a difficult challenge. This paper presents a modification of the NFR framework that allows for the discovery of a set of system functionalities that optimally satisfice a given set of non-functional requirements. This new technique introduces an adaptation of softgoal interdependency graphs, denoted softgoal interdependency ruleset graphs, in which label propagation can be done consistently. This facilitates the use of optimisation algorithms to determine the best set of bottom-level operationalizing softgoals that optimally satisfice the highest-level NFR softgoals. The proposed method also introduces the capacity to incorporate both qualitative and quantitative information.",
"title": ""
},
{
"docid": "69b0c5a4a3d5fceda5e902ec8e0479bb",
"text": "Mobile-edge computing (MEC) is an emerging paradigm that provides a capillary distribution of cloud computing capabilities to the edge of the wireless access network, enabling rich services and applications in close proximity to the end users. In this paper, an MEC enabled multi-cell wireless network is considered where each base station (BS) is equipped with a MEC server that assists mobile users in executing computation-intensive tasks via task offloading. The problem of joint task offloading and resource allocation is studied in order to maximize the users’ task offloading gains, which is measured by a weighted sum of reductions in task completion time and energy consumption. The considered problem is formulated as a mixed integer nonlinear program (MINLP) that involves jointly optimizing the task offloading decision, uplink transmission power of mobile users, and computing resource allocation at the MEC servers. Due to the combinatorial nature of this problem, solving for optimal solution is difficult and impractical for a large-scale network. To overcome this drawback, we propose to decompose the original problem into a resource allocation (RA) problem with fixed task offloading decision and a task offloading (TO) problem that optimizes the optimal-value function corresponding to the RA problem. We address the RA problem using convex and quasi-convex optimization techniques, and propose a novel heuristic algorithm to the TO problem that achieves a suboptimal solution in polynomial time. Simulation results show that our algorithm performs closely to the optimal solution and that it significantly improves the users’ offloading utility over traditional approaches.",
"title": ""
},
{
"docid": "696fd5b7e7bff90432f8c219230ebc7c",
"text": "This paper proposes a simple, cost-effective, and efficient brushless dc (BLDC) motor drive for solar photovoltaic (SPV) array-fed water pumping system. A zeta converter is utilized to extract the maximum available power from the SPV array. The proposed control algorithm eliminates phase current sensors and adapts a fundamental frequency switching of the voltage source inverter (VSI), thus avoiding the power losses due to high frequency switching. No additional control or circuitry is used for speed control of the BLDC motor. The speed is controlled through a variable dc link voltage of VSI. An appropriate control of zeta converter through the incremental conductance maximum power point tracking (INC-MPPT) algorithm offers soft starting of the BLDC motor. The proposed water pumping system is designed and modeled such that the performance is not affected under dynamic conditions. The suitability of proposed system at practical operating conditions is demonstrated through simulation results using MATLAB/Simulink followed by an experimental validation.",
"title": ""
},
{
"docid": "9c5711c68c7a9c7a4a8fc4d9dbcf145d",
"text": "Approximate set membership data structures (ASMDSs) are ubiquitous in computing. They trade a tunable, often small, error rate ( ) for large space savings. The canonical ASMDS is the Bloom filter, which supports lookups and insertions but not deletions in its simplest form. Cuckoo filters (CFs), a recently proposed class of ASMDSs, add deletion support and often use fewer bits per item for equal . This work introduces the Morton filter (MF), a novel ASMDS that introduces several key improvements to CFs. Like CFs, MFs support lookups, insertions, and deletions, but improve their respective throughputs by 1.3× to 2.5×, 0.9× to 15.5×, and 1.3× to 1.6×. MFs achieve these improvements by (1) introducing a compressed format that permits a logically sparse filter to be stored compactly in memory, (2) leveraging succinct embedded metadata to prune unnecessary memory accesses, and (3) heavily biasing insertions to use a single hash function. With these optimizations, lookups, insertions, and deletions often only require accessing a single hardware cache line from the filter. These improvements are not at a loss in space efficiency, as MFs typically use comparable to slightly less space than CFs for the same . PVLDB Reference Format: Alex D. Breslow and Nuwan S. Jayasena. Morton Filters: Faster, Space-Efficient Cuckoo Filters via Biasing, Compression, and Decoupled Logical Sparsity. PVLDB, 11(9): 1041-1055, 2018. DOI: https://doi.org/10.14778/3213880.3213884",
"title": ""
},
{
"docid": "76156cea2ef1d49179d35fd8f333b011",
"text": "Climate change, pollution, and energy insecurity are among the greatest problems of our time. Addressing them requires major changes in our energy infrastructure. Here, we analyze the feasibility of providing worldwide energy for all purposes (electric power, transportation, heating/cooling, etc.) from wind, water, and sunlight (WWS). In Part I, we discuss WWS energy system characteristics, current and future energy demand, availability of WWS resources, numbers of WWS devices, and area and material requirements. In Part II, we address variability, economics, and policy of WWS energy. We estimate that !3,800,000 5 MW wind turbines, !49,000 300 MW concentrated solar plants, !40,000 300 MW solar PV power plants, !1.7 billion 3 kW rooftop PV systems, !5350 100 MWgeothermal power plants, !270 new 1300 MW hydroelectric power plants, !720,000 0.75 MWwave devices, and !490,000 1 MW tidal turbines can power a 2030 WWS world that uses electricity and electrolytic hydrogen for all purposes. Such a WWS infrastructure reduces world power demand by 30% and requires only !0.41% and !0.59% more of the world’s land for footprint and spacing, respectively. We suggest producing all new energy withWWSby 2030 and replacing the pre-existing energy by 2050. Barriers to the plan are primarily social and political, not technological or economic. The energy cost in a WWS world should be similar to",
"title": ""
},
{
"docid": "7c5ce3005c4529e0c34220c538412a26",
"text": "Six studies investigate whether and how distant future time perspective facilitates abstract thinking and impedes concrete thinking by altering the level at which mental representations are construed. In Experiments 1-3, participants who envisioned their lives and imagined themselves engaging in a task 1 year later as opposed to the next day subsequently performed better on a series of insight tasks. In Experiments 4 and 5 a distal perspective was found to improve creative generation of abstract solutions. Moreover, Experiment 5 demonstrated a similar effect with temporal distance manipulated indirectly, by making participants imagine their lives in general a year from now versus tomorrow prior to performance. In Experiment 6, distant time perspective undermined rather than enhanced analytical problem solving.",
"title": ""
},
{
"docid": "3fec27391057a4c14f2df5933c4847d8",
"text": "This article explains how entrepreneurship can help resolve the environmental problems of global socio-economic systems. Environmental economics concludes that environmental degradation results from the failure of markets, whereas the entrepreneurship literature argues that opportunities are inherent in market failure. A synthesis of these literatures suggests that environmentally relevant market failures represent opportunities for achieving profitability while simultaneously reducing environmentally degrading economic behaviors. It also implies conceptualizations of sustainable and environmental entrepreneurship which detail how entrepreneurs seize the opportunities that are inherent in environmentally relevant market failures. Finally, the article examines the ability of the proposed theoretical framework to transcend its environmental context and provide insight into expanding the domain of the study of entrepreneurship. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "e107c2f396299e79f9b3db29ae43c943",
"text": "To achieve the concept of smart roads, intelligent sensors are being placed on the roadways to collect real-time traffic streams. Traditional method is not a real-time response, and incurs high communication and storage costs. Existing distributed stream mining algorithms do not consider the resource limitation on the lightweight devices such as sensors. In this paper, we propose a distributed traffic stream mining system. The central server performs various data mining tasks only in the training and updating stage and sends the interesting patterns to the sensors. The sensors monitor and predict the coming traffic or raise alarms independently by comparing with the patterns observed in the historical streams. The sensors provide real-time response with less wireless communication and small resource requirement, and the computation burden on the central server is reduced. We evaluate our system on the real highway traffic streams in the GCM Transportation Corridor in Chicagoland.",
"title": ""
},
{
"docid": "69f95ac2ca7b32677151de88b9d95d4c",
"text": "Gunaratna, Kalpa. PhD, Department of Computer Science and Engineering, Wright State University, 2017. Semantics-based Summarization of Entities in Knowledge Graphs. The processing of structured and semi-structured content on the Web has been gaining attention with the rapid progress in the Linking Open Data project and the development of commercial knowledge graphs. Knowledge graphs capture domain-specific or encyclopedic knowledge in the form of a data layer and add rich and explicit semantics on top of the data layer to infer additional knowledge. The data layer of a knowledge graph represents entities and their descriptions. The semantic layer on top of the data layer is called the schema (ontology), where relationships of the entity descriptions, their classes, and the hierarchy of the relationships and classes are defined. Today, there exist large knowledge graphs in the research community (e.g., encyclopedic datasets like DBpedia and Yago) and corporate world (e.g., Google knowledge graph) that encapsulate a large amount of knowledge for human and machine consumption. Typically, they consist of millions of entities and billions of facts describing these entities. While it is good to have this much knowledge available on the Web for consumption, it leads to information overload, and hence proper summarization (and presentation) techniques need to be explored. In this dissertation, we focus on creating both comprehensive and concise entity summaries at: (i) the single entity level and (ii) the multiple entity level. To summarize a single entity, we propose a novel approach called FACeted Entity Summarization (FACES) that considers importance, which is computed by combining popularity and uniqueness, and diversity of facts getting selected for the summary. We first conceptually group facts using semantic expansion and hierarchical incremental clustering techniques and form facets (i.e., groupings) that go beyond syntactic similarity. Then we rank both the facts and facets using Information Retrieval (IR) ranking techniques to pick the",
"title": ""
},
{
"docid": "b769f7b96b9613132790a73752c2a08f",
"text": "ITIL is the most widely used IT framework in majority of organizations in the world now. However, implementing such best practice experiences in an organization comes with some implementation challenges such as staff resistance, task conflicts and ambiguous orders. It means that implementing such framework is not easy and it can be caused of the organization destruction. This paper tries to describe overall view of ITIL framework and address major reasons on the failure of this framework’s implementation in the organizations",
"title": ""
},
{
"docid": "bab949abe2d00567853504e38c84a1c9",
"text": "7SK RNA is a key player in the regulation of polymerase II transcription. 7SK RNA was considered as a highly conserved vertebrate innovation. The discovery of poorly conserved homologs in several insects and lophotrochozoans, however, implies a much earlier evolutionary origin. The mechanism of 7SK function requires interaction with the proteins HEXIM and La-related protein 7. Here, we present a comprehensive computational analysis of these two proteins in metazoa, and we extend the collection of 7SK RNAs by several additional candidates. In particular, we describe 7SK homologs in Caenorhabditis species. Furthermore, we derive an improved secondary structure model of 7SK RNA, which shows that the structure is quite well-conserved across animal phyla despite the extreme divergence at sequence level.",
"title": ""
}
] | scidocsrr |
e12810a39baa7c96646907aceec16c72 | An effective solution for a real cutting stock problem in manufacturing plastic rolls | [
{
"docid": "74381f9602374af5ad0775a69163d1b9",
"text": "This paper discusses some of the basic formulation issues and solution procedures for solving oneand twodimensional cutting stock problems. Linear programming, sequential heuristic and hybrid solution procedures are described. For two-dimensional cutting stock problems with rectangular shapes, we also propose an approach for solving large problems with limits on the number of times an ordered size may appear in a pattern.",
"title": ""
}
] | [
{
"docid": "a4605974c90bc17edf715eb9edb10b8a",
"text": "Natural language processing has been in existence for more than fifty years. During this time, it has significantly contributed to the field of human-computer interaction in terms of theoretical results and practical applications. As computers continue to become more affordable and accessible, the importance of user interfaces that are effective, robust, unobtrusive, and user-friendly – regardless of user expertise or impediments – becomes more pronounced. Since natural language usually provides for effortless and effective communication in human-human interaction, its significance and potential in human-computer interaction should not be overlooked – either spoken or typewritten, it may effectively complement other available modalities, such as windows, icons, and menus, and pointing; in some cases, such as in users with disabilities, natural language may even be the only applicable modality. This chapter examines the field of natural language processing as it relates to humancomputer interaction by focusing on its history, interactive application areas, theoretical approaches to linguistic modeling, and relevant computational and philosophical issues. It also presents a taxonomy for interactive natural language systems based on their linguistic knowledge and processing requirements, and reviews related applications. Finally, it discusses linguistic coverage issues, and explores the development of natural language widgets and their integration into multimodal user interfaces.",
"title": ""
},
{
"docid": "e3f4add37a083f61feda8805478d0729",
"text": "The evaluation of the effects of different media ionic strengths and pH on the release of hydrochlorothiazide, a poorly soluble drug, and diltiazem hydrochloride, a cationic and soluble drug, from a gel forming hydrophilic polymeric matrix was the objective of this study. The drug to polymer ratio of formulated tablets was 4:1. Hydrochlorothiazide or diltiazem HCl extended release (ER) matrices containing hypromellose (hydroxypropyl methylcellulose (HPMC)) were evaluated in media with a pH range of 1.2-7.5, using an automated USP type III, Bio-Dis dissolution apparatus. The ionic strength of the media was varied over a range of 0-0.4M to simulate the gastrointestinal fed and fasted states and various physiological pH conditions. Sodium chloride was used for ionic regulation due to its ability to salt out polymers in the midrange of the lyotropic series. The results showed that the ionic strength had a profound effect on the drug release from the diltiazem HCl K100LV matrices. The K4M, K15M and K100M tablets however withstood the effects of media ionic strength and showed a decrease in drug release to occur with an increase in ionic strength. For example, drug release after the 1h mark for the K100M matrices in water was 36%. Drug release in pH 1.2 after 1h was 30%. An increase of the pH 1.2 ionic strength to 0.4M saw a reduction of drug release to 26%. This was the general trend for the K4M and K15M matrices as well. The similarity factor f2 was calculated using drug release in water as a reference. Despite similarity occurring for all the diltiazem HCl matrices in the pH 1.2 media (f2=64-72), increases of ionic strength at 0.2M and 0.4M brought about dissimilarity. The hydrochlorothiazide tablet matrices showed similarity at all the ionic strength tested for all polymers (f2=56-81). The values of f2 however reduced with increasing ionic strengths. DSC hydration results explained the hydrochlorothiazide release from their HPMC matrices. There was an increase in bound water as ionic strengths increased. Texture analysis was employed to determine the gel strength and also to explain the drug release for the diltiazem hydrochloride. This methodology can be used as a valuable tool for predicting potential ionic effects related to in vivo fed and fasted states on drug release from hydrophilic ER matrices.",
"title": ""
},
{
"docid": "72cf634b61876d3ad9c265e61f1148ae",
"text": "Many functionals have been proposed for validation of partitions of object data produced by the fuzzy c-means (FCM) clustering algorithm. We examine the role a subtle but important parameter-the weighting exponent m of the FCM model-plays in determining the validity of FCM partitions. The functionals considered are the partition coefficient and entropy indexes of Bezdek, the Xie-Beni, and extended Xie-Beni indexes, and the FukuyamaSugeno index. Limit analysis indicates, and numerical experiments confirm, that the FukuyamaSugeno index is sensitive to both high and low values of m and may be unreliable because of this. Of the indexes tested, the Xie-Beni index provided the best response over a wide range of choices for the number of clusters, (%lo), and for m from 1.01-7. Finally, our calculations suggest that the best choice for m is probably in the interval [U, 2.51, whose mean and midpoint, m = 2, have often been the preferred choice for many users of FCM.",
"title": ""
},
{
"docid": "18848101a74a23d6740f08f86992a4a4",
"text": "Post-traumatic stress disorder (PTSD) is accompanied by disturbed sleep and an impaired ability to learn and remember extinction of conditioned fear. Following a traumatic event, the full spectrum of PTSD symptoms typically requires several months to develop. During this time, sleep disturbances such as insomnia, nightmares, and fragmented rapid eye movement sleep predict later development of PTSD symptoms. Only a minority of individuals exposed to trauma go on to develop PTSD. We hypothesize that sleep disturbance resulting from an acute trauma, or predating the traumatic experience, may contribute to the etiology of PTSD. Because symptoms can worsen over time, we suggest that continued sleep disturbances can also maintain and exacerbate PTSD. Sleep disturbance may result in failure of extinction memory to persist and generalize, and we suggest that this constitutes one, non-exclusive mechanism by which poor sleep contributes to the development and perpetuation of PTSD. Also reviewed are neuroendocrine systems that show abnormalities in PTSD, and in which stress responses and sleep disturbance potentially produce synergistic effects that interfere with extinction learning and memory. Preliminary evidence that insomnia alone can disrupt sleep-dependent emotional processes including consolidation of extinction memory is also discussed. We suggest that optimizing sleep quality following trauma, and even strategically timing sleep to strengthen extinction memories therapeutically instantiated during exposure therapy, may allow sleep itself to be recruited in the treatment of PTSD and other trauma and stress-related disorders.",
"title": ""
},
{
"docid": "51ba2c02aa4ad9b7cfb381ddae0f3dfe",
"text": "The dynamics of spontaneous fluctuations in neural activity are shaped by underlying patterns of anatomical connectivity. While numerous studies have demonstrated edge-wise correspondence between structural and functional connections, much less is known about how large-scale coherent functional network patterns emerge from the topology of structural networks. In the present study, we deploy a multivariate statistical technique, partial least squares, to investigate the association between spatially extended structural networks and functional networks. We find multiple statistically robust patterns, reflecting reliable combinations of structural and functional subnetworks that are optimally associated with one another. Importantly, these patterns generally do not show a one-to-one correspondence between structural and functional edges, but are instead distributed and heterogeneous, with many functional relationships arising from nonoverlapping sets of anatomical connections. We also find that structural connections between high-degree hubs are disproportionately represented, suggesting that these connections are particularly important in establishing coherent functional networks. Altogether, these results demonstrate that the network organization of the cerebral cortex supports the emergence of diverse functional network configurations that often diverge from the underlying anatomical substrate.",
"title": ""
},
{
"docid": "4c2f9f9681a1d3bc6d9a27a59c2a01d6",
"text": "BACKGROUND\nStatin therapy reduces low-density lipoprotein (LDL) cholesterol levels and the risk of cardiovascular events, but whether the addition of ezetimibe, a nonstatin drug that reduces intestinal cholesterol absorption, can reduce the rate of cardiovascular events further is not known.\n\n\nMETHODS\nWe conducted a double-blind, randomized trial involving 18,144 patients who had been hospitalized for an acute coronary syndrome within the preceding 10 days and had LDL cholesterol levels of 50 to 100 mg per deciliter (1.3 to 2.6 mmol per liter) if they were receiving lipid-lowering therapy or 50 to 125 mg per deciliter (1.3 to 3.2 mmol per liter) if they were not receiving lipid-lowering therapy. The combination of simvastatin (40 mg) and ezetimibe (10 mg) (simvastatin-ezetimibe) was compared with simvastatin (40 mg) and placebo (simvastatin monotherapy). The primary end point was a composite of cardiovascular death, nonfatal myocardial infarction, unstable angina requiring rehospitalization, coronary revascularization (≥30 days after randomization), or nonfatal stroke. The median follow-up was 6 years.\n\n\nRESULTS\nThe median time-weighted average LDL cholesterol level during the study was 53.7 mg per deciliter (1.4 mmol per liter) in the simvastatin-ezetimibe group, as compared with 69.5 mg per deciliter (1.8 mmol per liter) in the simvastatin-monotherapy group (P<0.001). The Kaplan-Meier event rate for the primary end point at 7 years was 32.7% in the simvastatin-ezetimibe group, as compared with 34.7% in the simvastatin-monotherapy group (absolute risk difference, 2.0 percentage points; hazard ratio, 0.936; 95% confidence interval, 0.89 to 0.99; P=0.016). Rates of prespecified muscle, gallbladder, and hepatic adverse effects and cancer were similar in the two groups.\n\n\nCONCLUSIONS\nWhen added to statin therapy, ezetimibe resulted in incremental lowering of LDL cholesterol levels and improved cardiovascular outcomes. Moreover, lowering LDL cholesterol to levels below previous targets provided additional benefit. (Funded by Merck; IMPROVE-IT ClinicalTrials.gov number, NCT00202878.).",
"title": ""
},
{
"docid": "6b55931c9945a71de6b28789323f191b",
"text": "Resistant hypertension-uncontrolled hypertension with 3 or more antihypertensive agents-is increasingly common in clinical practice. Clinicians should exclude pseudoresistant hypertension, which results from nonadherence to medications or from elevated blood pressure related to the white coat syndrome. In patients with truly resistant hypertension, thiazide diuretics, particularly chlorthalidone, should be considered as one of the initial agents. The other 2 agents should include calcium channel blockers and angiotensin-converting enzyme inhibitors for cardiovascular protection. An increasing body of evidence has suggested benefits of mineralocorticoid receptor antagonists, such as eplerenone and spironolactone, in improving blood pressure control in patients with resistant hypertension, regardless of circulating aldosterone levels. Thus, this class of drugs should be considered for patients whose blood pressure remains elevated after treatment with a 3-drug regimen to maximal or near maximal doses. Resistant hypertension may be associated with secondary causes of hypertension including obstructive sleep apnea or primary aldosteronism. Treating these disorders can significantly improve blood pressure beyond medical therapy alone. The role of device therapy for treating the typical patient with resistant hypertension remains unclear.",
"title": ""
},
{
"docid": "a0fc4982c5d63191ab1b15deff4e65d6",
"text": "Sentiment classification is an important subject in text mining research, which concerns the application of automatic methods for predicting the orientation of sentiment present on text documents, with many applications on a number of areas including recommender and advertising systems, customer intelligence and information retrieval. In this paper, we provide a survey and comparative study of existing techniques for opinion mining including machine learning and lexicon-based approaches, together with evaluation metrics. Also cross-domain and cross-lingual approaches are explored. Experimental results show that supervised machine learning methods, such as SVM and naive Bayes, have higher precision, while lexicon-based methods are also very competitive because they require few effort in human-labeled document and isn't sensitive to the quantity and quality of the training dataset.",
"title": ""
},
{
"docid": "be45e9231cc468c8f9551868c1d13938",
"text": "We present a user-centric approach for stream surface generation. Given a set of densely traced streamlines over the flow field, we design a sketch-based interface that allows users to draw simple strokes directly on top of the streamline visualization result. Based on the 2D stroke, we identify a 3D seeding curve and generate a stream surface that captures the flow pattern of streamlines at the outermost layer. Then, we remove the streamlines whose patterns are covered by the stream surface. Repeating this process, users can peel the flow by replacing the streamlines with customized surfaces layer by layer. Our sketch-based interface leverages an intuitive painting metaphor which most users are familiar with. We present results using multiple data sets to show the effectiveness of our approach, and discuss the limitations and future directions.",
"title": ""
},
{
"docid": "59786d8ea951639b8b9a4e60c9d43a06",
"text": "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. Preprint submitted to Elsevier 28 January 2009 • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.",
"title": ""
},
{
"docid": "ee3c2f50a7ea955d33305a3e02310109",
"text": "This research strives for natural language moment retrieval in long, untrimmed video streams. The problem nevertheless is not trivial especially when a video contains multiple moments of interests and the language describes complex temporal dependencies, which often happens in real scenarios. We identify two crucial challenges: semantic misalignment and structural misalignment. However, existing approaches treat different moments separately and do not explicitly model complex moment-wise temporal relations. In this paper, we present Moment Alignment Network (MAN), a novel framework that unifies the candidate moment encoding and temporal structural reasoning in a single-shot feed-forward network. MAN naturally assigns candidate moment representations aligned with language semantics over different temporal locations and scales. Most importantly, we propose to explicitly model momentwise temporal relations as a structured graph and devise an iterative graph adjustment network to jointly learn the best structure in an end-to-end manner. We evaluate the proposed approach on two challenging public benchmarks Charades-STA and DiDeMo, where our MAN significantly outperforms the state-of-the-art by a large margin.",
"title": ""
},
{
"docid": "8f9e3bb85b4a2fcff3374fd700ac3261",
"text": "Vehicle theft has become a pervasive problem in metropolitan cities. The aim of our work is to reduce the vehicle and fuel theft with an alert given by commonly used smart phones. The modern vehicles are interconnected with computer systems so that the information can be obtained from vehicular sources and Internet services. This provides space for tracking the vehicle through smart phones. In our work, an Advanced Encryption Standard (AES) algorithm is implemented which integrates a smart phone with classical embedded systems to avoid vehicle theft.",
"title": ""
},
{
"docid": "caa35f58e9e217fd45daa2e49c4a4cde",
"text": "Despite its linguistic complexity, the Horn of Africa region includes several major languages with more than 5 million speakers, some crossing the borders of multiple countries. All of these languages have official status in regions or nations and are crucial for development; yet computational resources for the languages remain limited or non-existent. Since these languages are complex morphologically, software for morphological analysis and generation is a necessary first step toward nearly all other applications. This paper describes a resource for morphological analysis and generation for three of the most important languages in the Horn of Africa, Amharic, Tigrinya, and Oromo. 1 Language in the Horn of Africa The Horn of Africa consists politically of four modern nations, Ethiopia, Somalia, Eritrea, and Djibouti. As in most of sub-Saharan Africa, the linguistic picture in the region is complex. The great majority of people are speakers of AfroAsiatic languages belonging to three sub-families: Semitic, Cushitic, and Omotic. Approximately 75% of the population of almost 100 million people are native speakers of four languages: the Cushitic languages Oromo and Somali and the Semitic languages Amharic and Tigrinya. Many others speak one or the other of these languages as second languages. All of these languages have official status at the national or regional level. All of the languages of the region, especially the Semitic languages, are characterized by relatively complex morphology. For such languages, nearly all forms of language technology depend on the existence of software for analyzing and generating word forms. As with most other subSaharan languages, this software has previously not been available. This paper describes a set of Python programs called HornMorpho that address this lack for three of the most important languages, Amharic, Tigrinya, and Oromo. 2 Morphological processingn 2.1 Finite state morphology Morphological analysis is the segmentation of words into their component morphemes and the assignment of grammatical morphemes to grammatical categories and lexical morphemes to lexemes. Morphological generation is the reverse process. Both processes relate a surface level to a lexical level. The relationship between the levels has traditionally been viewed within linguistics in terms of an ordered series of phonological rules. Within computational morphology, a very significant advance came with the demonstration that phonological rules could be implemented as finite state transducers (Kaplan and Kay, 1994) (FSTs) and that the rule ordering could be dispensed with using FSTs that relate the surface and lexical levels directly (Koskenniemi, 1983), so-called “twolevel” morphology. A second important advance was the recognition by Karttunen et al. (1992) that a cascade of composed FSTs could implement the two-level model. This made possible quite complex finite state systems, including ordered alternation rules representing context-sensitive variation in the phonological or orthographic shape of morphemes, the morphotactics characterizing the possible sequences of morphemes (in canonical form) for a given word class, and a lexicon. The key feature of such systems is that, even though the FSTs making up the cascade must be composed in a particular order, the result of composition is a single FST relating surface and lexical levels directly, as in two-level morphology. 
Because of the invertibility of FSTs, it is a simple matter to convert an analysis FST (surface input to lexical output) to one that performs generation (lexical input to surface output). (Figure 1: Basic architecture of lexical FSTs for morphological analysis and generation. Each rectangle represents an FST; the outermost rectangle is the full FST that is actually used for processing. ".o." represents composition of FSTs, "+" concatenation of FSTs.) This basic architecture, illustrated in Figure 1, consisting of a cascade of composed FSTs representing (1) alternation rules and (2) morphotactics, including a lexicon of stems or roots, is the basis for the system described in this paper. We may also want to handle words whose roots or stems are not found in the lexicon, especially when the available set of known roots or stems is limited. In such cases the lexical component is replaced by a phonotactic component characterizing the possible shapes of roots or stems. Such a "guesser" analyzer (Beesley and Karttunen, 2003) analyzes words with unfamiliar roots or stems by positing possible roots or stems. 2.2 Semitic morphology These ideas have revolutionized computational morphology, making languages with complex word structure, such as Finnish and Turkish, far more amenable to analysis by traditional computational techniques. However, finite state morphology is inherently biased to view morphemes as sequences of characters or phones and words as concatenations of morphemes. This presents problems in the case of non-concatenative morphology, for example, discontinuous morphemes and the template morphology that characterizes Semitic languages such as Amharic and Tigrinya. The stem of a Semitic verb consists of a root, essentially a sequence of consonants, and a template that inserts other segments between the root consonants and possibly copies certain of the consonants. For example, the Amharic verb root sbr 'break' can combine with roughly 50 different templates to form stems in words such as y1-sEbr-al 'he breaks', tEsEbbEr-E 'it was broken', l-assEbb1r-Ew 'let me cause him to break something', sEbabar-i 'broken into many pieces'. A number of different additions to the basic FST framework have been proposed to deal with non-concatenative morphology, all remaining finite state in their complexity. A discussion of the advantages and drawbacks of these different proposals is beyond the scope of this paper. The approach used in our system is one first proposed by Amtrup (2003), based in turn on the well studied formalism of weighted FSTs. In brief, in Amtrup's approach, each of the arcs in a transducer may be "weighted" with a feature structure, that is, a set of grammatical feature-value pairs. As the arcs in an FST are traversed, a set of feature-value pairs is accumulated by unifying the current set with whatever appears on the arcs along the path through the transducer. These feature-value pairs represent a kind of memory for the path that has been traversed but without the power of a stack. Any arc whose feature structure fails to unify with the current set of feature-value pairs cannot be traversed. The result of traversing such an FST during morphological analysis is not only an output character sequence, representing the root of the word, but a set of feature-value pairs that represents the grammatical structure of the input word.
In the generation direction, processing begins with a root and a set of feature-value pairs, representing the desired grammatical structure of the output word, and the output is the surface wordform corresponding to the input root and grammatical structure. In Gasser (2009) we showed how Amtrup’s technique can be applied to the analysis and generation of Tigrinya verbs. For an alternate approach to handling the morphotactics of a subset of Amharic verbs, within the context of the Xerox finite state tools (Beesley and Karttunen, 2003), see Amsalu and Demeke (2006). Although Oromo, a Cushitic language, does not exhibit the root+template morphology that is typical of Semitic languages, it is also convenient to handle its morphology using the same technique because there are some long-distance dependencies and because it is useful to have the grammatical output that this approach yields for analysis.",
"title": ""
},
{
"docid": "18d8fe3f77ab8878ae2eb72b04fa8a48",
"text": "A new magneto-electric dipole antenna with a unidirectional radiation pattern is proposed. A novel differential feeding structure is designed to provide an ultra-wideband impedance matching. A stable gain of 8.25±1.05 dBi is realized by introducing two slots in the magneto-electric dipole and using a rectangular box-shaped reflector, instead of a planar reflector. The antenna can achieve an impedance bandwidth of 114% for SWR ≤ 2 from 2.95 to 10.73 GHz. Radiation patterns with low cross polarization, low back radiation, fixing broadside direction mainbeam and symmetrical E- and H -plane patterns are obtained over the operating frequency range. Moreover, the correlation factor between the transmitting antenna input signal and the receiving antenna output signal is calculated for evaluating the time-domain characteristic. The proposed antenna, which is small in size, can be constructed easily by using PCB fabrication technique.",
"title": ""
},
{
"docid": "2ed16f9344f5c5b024095a4e27283596",
"text": "An overview is presented of the impact of NLO on today's daily life. While NLO researchers have promised many applications, only a few have changed our lives so far. This paper categorizes applications of NLO into three areas: improving lasers, interaction with materials, and information technology. NLO provides: coherent light of different wavelengths; multi-photon absorption for plasma-materials interaction; advanced spectroscopy and materials analysis; and applications to communications and sensors. Applications in information processing and storage seem less mature.",
"title": ""
},
{
"docid": "2c3bfdb36a691434ece6b9f3e7e281e9",
"text": "Heterogeneous cloud radio access networks (H-CRAN) is a new trend of SC that aims to leverage the heterogeneous and cloud radio access networks advantages. Low power remote radio heads (RRHs) are exploited to provide high data rates for users with high quality of service requirements (QoS), while high power macro base stations (BSs) are deployed for coverage maintenance and low QoS users support. However, the inter-tier interference between the macro BS and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRAN. Therefore, we propose a centralized resource allocation scheme using online learning, which guarantees interference mitigation and maximizes energy efficiency while maintaining QoS requirements for all users. To foster the performance of such scheme with a model-free learning, we consider users' priority in resource blocks (RBs) allocation and compact state representation based learning methodology to enhance the learning process. Simulation results confirm that the proposed resource allocation solution can mitigate interference, increase energy and spectral efficiencies significantly, and maintain users' QoS requirements.",
"title": ""
},
{
"docid": "556c9a28f9bbd81d53e093b139ce7866",
"text": "This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.",
"title": ""
},
{
"docid": "76375aa50ebe8388d653241ba481ecd2",
"text": "Sequential learning of tasks using gradient descent leads to an unremitting decline in the accuracy of tasks for which training data is no longer available, termed catastrophic forgetting. Generative models have been explored as a means to approximate the distribution of old tasks and bypass storage of real data. Here we propose a cumulative closed-loop generator and embedded classifier using an AC-GAN architecture provided with external regularization by a small buffer. We evaluate incremental learning using a notoriously hard paradigm, “single headed learning,” in which each task is a disjoint subset of classes in the overall dataset, and performance is evaluated on all previous classes. First, we show that the variability contained in a small percentage of a dataset (memory buffer) accounts for a significant portion of the reported accuracy, both in multi-task and continual learning settings. Second, we show that using a generator to continuously output new images while training provides an up-sampling of the buffer, which prevents catastrophic forgetting and yields superior performance when compared to a fixed buffer. We achieve an average accuracy for all classes of 92.26% in MNIST and 76.15% in FASHION-MNIST after 5 tasks using GAN sampling with a buffer of only 0.17% of the entire dataset size. We compare to a network with regularization (EWC) which shows a deteriorated average performance of 29.19% (MNIST) and 26.5% (FASHION). The baseline of no regularization (plain gradient descent) performs at 99.84% (MNIST) and 99.79% (FASHION) for the last task, but below 3% for all previous tasks. Our method has very low long-term memory cost, the buffer, as well as negligible intermediate memory storage.",
"title": ""
},
{
"docid": "0fa35886300345106390cc55c6025257",
"text": "Non-linear models recently receive a lot of attention as people are starting to discover the power of statistical and embedding features. However, tree-based models are seldom studied in the context of structured learning despite their recent success on various classification and ranking tasks. In this paper, we propose S-MART, a tree-based structured learning framework based on multiple additive regression trees. S-MART is especially suitable for handling tasks with dense features, and can be used to learn many different structures under various loss functions. We apply S-MART to the task of tweet entity linking — a core component of tweet information extraction, which aims to identify and link name mentions to entities in a knowledge base. A novel inference algorithm is proposed to handle the special structure of the task. The experimental results show that S-MART significantly outperforms state-of-the-art tweet entity linking systems.",
"title": ""
},
{
"docid": "107c839a73c12606d4106af7dc04cd96",
"text": "This study presents a novel four-fingered robotic hand to attain a soft contact and high stability under disturbances while holding an object. Each finger is constructed using a tendon-driven skeleton, granular materials corresponding to finger pulp, and a deformable rubber skin. This structure provides soft contact with an object, as well as high adaptation to its shape. Even if the object is deformable and fragile, a grasping posture can be formed without deforming the object. If the air around the granular materials in the rubber skin and jamming transition is vacuumed, the grasping posture can be fixed and the object can be grasped firmly and stably. A high grasping stability under disturbances can be attained. Additionally, the fingertips can work as a small jamming gripper to grasp an object smaller than a fingertip. An experimental investigation indicated that the proposed structure provides a high grasping force with a jamming transition with high adaptability to the object's shape.",
"title": ""
}
] | scidocsrr |
974b984bcb78e3ae752c031b58854bc5 | AnswerBus question answering system | [
{
"docid": "9ca4543f4943a1679b639caa186f1650",
"text": "SHAPE ADJECTIVE COLOR DISEASE TEXT NARRATIVE* GENERAL-INFO DEFINITION USE EXPRESSION-ORIGIN HISTORY WHY-FAMOUS BIO ANTECEDENT INFLUENCE CONSEQUENT CAUSE-EFFECT METHOD-MEANS CIRCUMSTANCE-MEANS REASON EVALUATION PRO-CON CONTRAST RATING COUNSEL-ADVICE To create the QA Typology, we analyzed 17,384 questions and their answers (downloaded from answers.com); see (Gerber, 2001). The Typology contains 94 nodes, of which 47 are leaf nodes; a section of it appears in Figure 2. Each Typology node has been annotated with examples and typical patterns of expression of both Question and Answer, as indicated in Figure 3 for Proper-Person. Question examples Question templates Who was Johnny Mathis' high school track coach? who be <entity>'s <role> Who was Lincoln's Secretary of State? Who was President of Turkmenistan in 1994? who be <role> of <entity> Who is the composer of Eugene Onegin? Who is the CEO of General Electric? Actual answers Answer templates Lou Vasquez, track coach of...and Johnny Mathis <person>, <role> of <entity> Signed Saparmurad Turkmenbachy [Niyazov], <person> <role-title*> of <entity> president of Turkmenistan ...Turkmenistan’s President Saparmurad Niyazov... <entity>’s <role> <person> ...in Tchaikovsky's Eugene Onegin... <person>'s <entity> Mr. Jack Welch, GE chairman... <role-title> <person> ... <entity> <role> ...Chairman John Welch said ...GE's <subject>|<psv object> of related role-verb Figure 3. Portion of QA Typology node annotations for Proper-Person.",
"title": ""
}
] | [
{
"docid": "51624e6c70f4eb5f2295393c68ee386c",
"text": "Advances in mobile technologies and devices has changed the way users interact with devices and other users. These new interaction methods and services are offered by the help of intelligent sensing capabilities, using context, location and motion sensors. However, indoor location sensing is mostly achieved by utilizing radio signal (Wi-Fi, Bluetooth, GSM etc.) and nearest neighbor identification. The most common algorithm adopted for Received Signal Strength (RSS)-based location sensing is K Nearest Neighbor (KNN), which calculates K nearest neighboring points to mobile users (MUs). Accordingly, in this paper, we aim to improve the KNN algorithm by enhancing the neighboring point selection by applying k-means clustering approach. In the proposed method, k-means clustering algorithm groups nearest neighbors according to their distance to mobile user. Then the closest group to the mobile user is used to calculate the MU's location. The evaluation results indicate that the performance of clustered KNN is closely tied to the number of clusters, number of neighbors to be clustered and the initiation of the center points in k-mean algorithm. Keywords-component; Received signal strength, k-Means, clustering, location estimation, personal digital assistant (PDA), wireless, indoor positioning",
"title": ""
},
{
"docid": "80345e8476f1c64dbae0a9d1f1e612b0",
"text": "Vehicle behavior models and motion prediction are critical for advanced safety systems and safety system validation. This paper studies the effectiveness of convolutional recurrent neural networks in predicting action profiles for vehicles on highways. Instead of using hand-selected features, the neural network is given an image-like representation of the local scene. Convolutional neural networks and recurrence allow for the automatic identification of robust features based on spatial and temporal relations. Real driving data from the NGSIM dataset is used for the evaluation, and the resulting models are used to propagate simulated vehicle trajectories over ten-second horizons. Prediction models using Long Short Term Memory (LSTM) networks are shown to quantitatively and qualitatively outperform baseline methods in generating realistic vehicle trajectories. Predictions over driver actions are shown to depend heavily on previous action values. Efforts to improve performance through inclusion of information about the local scene proved unsuccessful, and will be the focus of further study.",
"title": ""
},
{
"docid": "3b1d959357ed1605b8ab8ae5f5287c4f",
"text": "Data privacy refers to ensuring that users keep control over access to information, whereas data accessibility refers to ensuring that information access is unconstrained. Conflicts between privacy and accessibility of data are natural to occur, and healthcare is a domain in which they are particularly relevant. In the present article, we discuss how blockchain technology, and smart contracts, could help in some typical scenarios related to data access, data management and data interoperability for the specific healthcare domain. We then propose the implementation of a large-scale information architecture to access Electronic Health Records (EHRs) based on Smart Contracts as information mediators. Our main contribution is the framing of data privacy and accessibility issues in healthcare and the proposal of an integrated blockchain based architecture.",
"title": ""
},
{
"docid": "e677ba3fa8d54fad324add0bda767197",
"text": "In this paper, we present a novel approach for mining opinions from product reviews, where it converts opinion mining task to identify product features, expressions of opinions and relations between them. By taking advantage of the observation that a lot of product features are phrases, a concept of phrase dependency parsing is introduced, which extends traditional dependency parsing to phrase level. This concept is then implemented for extracting relations between product features and expressions of opinions. Experimental evaluations show that the mining task can benefit from phrase dependency parsing.",
"title": ""
},
{
"docid": "ee4dbe3dc0352a60c61ec8d36ebda56d",
"text": "This paper proposes a two-axis-decoupled solar tracker based on parallel mechanism. Utilizing Grassmann line geometry, the type design of the two-axis solar tracker is investigated. Then, singularity is studied to obtain the workspace without singularities. By using the virtual work principle, the inverse dynamics is derived to find out the driving torque. Taking Beijing as a sample city where the solar tracker is placed, the motion trajectory of the tracker is planned to collect the maximum solar energy. The position of the mass center of the solar mirror on the platform is optimized to minimize the driving torque. The driving torque of the proposed tracker is compared with that of a conventional serial tracker, which shows that the proposed tracker can greatly reduce the driving torque and the reducers with large reduction ratio are not necessary. Thus, the complexity and power dissipation of the system can be reduced.",
"title": ""
},
{
"docid": "f0c334e0d626bd5be4e17f08049d573e",
"text": "The cost efficiency and diversity of digital channels facilitate marketers’ frequent and interactive communication with their customers. Digital channels like the Internet, email, mobile phones and digital television offer new prospects to cultivate customer relationships. However, there are a few models explaining how digital marketing communication (DMC) works from a relationship marketing perspective, especially for cultivating customer loyalty. In this paper, we draw together previous research into an integrative conceptual model that explains how the key elements of DMC frequency and content of brand communication, personalization, and interactivity can lead to improved customer value, commitment, and loyalty.",
"title": ""
},
{
"docid": "03b8136e2ca033f42d497844d362813c",
"text": "We present a new approach for performing high-quality edge-preserving filtering of images and videos in real time. Our solution is based on a transform that defines an isometry between curves on the 2D image manifold in 5D and the real line. This transform preserves the geodesic distance between points on these curves, adaptively warping the input signal so that 1D edge-preserving filtering can be efficiently performed in linear time. We demonstrate three realizations of 1D edge-preserving filters, show how to produce high-quality 2D edge-preserving filters by iterating 1D-filtering operations, and empirically analyze the convergence of this process. Our approach has several desirable features: the use of 1D operations leads to considerable speedups over existing techniques and potential memory savings; its computational cost is not affected by the choice of the filter parameters; and it is the first edge-preserving filter to work on color images at arbitrary scales in real time, without resorting to subsampling or quantization. We demonstrate the versatility of our domain transform and edge-preserving filters on several real-time image and video processing tasks including edge-preserving filtering, depth-of-field effects, stylization, recoloring, colorization, detail enhancement, and tone mapping.",
"title": ""
},
{
"docid": "578e8c5d2ed1fd41bd2c869eb842f305",
"text": "We are investigating the magnetic resonance imaging characteristics of magnetic nanoparticles (MNPs) that consist of an iron-oxide magnetic core coated with oleic acid (OA), then stabilized with a pluronic or tetronic block copolymer. Since pluronics and tetronics vary structurally, and also in the ratio of hydrophobic (poly[propylene oxide]) and hydrophilic (poly[ethylene oxide]) segments in the polymer chain and in molecular weight, it was hypothesized that their anchoring to the OA coating around the magnetic core could significantly influence the physical properties of MNPs, their interactions with biological environment following intravenous administration, and ability to localize to tumors. The amount of block copolymer associated with MNPs was seen to depend upon their molecular structures and influence the characteristics of MNPs. Pluronic F127-modified MNPs demonstrated sustained and enhanced contrast in the whole tumor, whereas that of Feridex IV was transient and confined to the tumor periphery. In conclusion, our pluronic F127-coated MNPs, which can also be loaded with anticancer agents for drug delivery, can be developed as an effective cancer theranostic agent, i.e. an agent with combined drug delivery and imaging properties.",
"title": ""
},
{
"docid": "45ea01d82897401058492bc2f88369b3",
"text": "Reduction in greenhouse gas emissions from transportation is essential in combating global warming and climate change. Eco-routing enables drivers to use the most eco-friendly routes and is effective in reducing vehicle emissions. The EcoTour system assigns eco-weights to a road network based on GPS and fuel consumption data collected from vehicles to enable ecorouting. Given an arbitrary source-destination pair in Denmark, EcoTour returns the shortest route, the fastest route, and the eco-route, along with statistics for the three routes. EcoTour also serves as a testbed for exploring advanced solutions to a range of challenges related to eco-routing.",
"title": ""
},
{
"docid": "8b09387799c37a0131e6ba08715ed187",
"text": "Simulation optimization tools have the potential to provide an unprecedented level of support for the design and execution of operational control in Discrete Event Logistics Systems (DELS). While much of the simulation optimization literature has focused on developing and exploiting integration and syntactical interoperability between simulation and optimization tools, maximizing the effectiveness of these tools to support the design and execution of control behavior requires an even greater degree of interoperability than the current state of the art. In this paper, we propose a modeling methodology for operational control decision-making that can improve the interoperability between these two analysis methods and their associated tools in the context of DELS control. This methodology establishes a standard definition of operational control for both simulation and optimization methods and defines a mapping between decision variables (optimization) and execution mechanisms (simulation / base system). The goal is a standard for creating conforming simulation and optimization tools that are capable of meeting the functional needs of operational control decision making in DELS.",
"title": ""
},
{
"docid": "f3bda47434c649f6b8fad89199ff5987",
"text": "Structural health monitoring (SHM) of civil infrastructure using wireless smart sensor networks (WSSNs) has received significant public attention in recent years. The benefits of WSSNs are that they are low-cost, easy to install, and provide effective data management via on-board computation. This paper reports on the deployment and evaluation of a state-of-the-art WSSN on the new Jindo Bridge, a cable-stayed bridge in South Korea with a 344-m main span and two 70-m side spans. The central components of the WSSN deployment are the Imote2 smart sensor platforms, a custom-designed multimetric sensor boards, base stations, and software provided by the Illinois Structural Health Monitoring Project (ISHMP) Services Toolsuite. In total, 70 sensor nodes and two base stations have been deployed to monitor the bridge using an autonomous SHM application with excessive wind and vibration triggering the system to initiate monitoring. Additionally, the performance of the system is evaluated in terms of hardware durability, software stability, power consumption and energy harvesting capabilities. The Jindo Bridge SHM system constitutes the largest deployment of wireless smart sensors for civil infrastructure monitoring to date. This deployment demonstrates the strong potential of WSSNs for monitoring of large scale civil infrastructure.",
"title": ""
},
{
"docid": "037d86a0371dfc838f9171a507ad89cf",
"text": "This paper proposes a new robust adaptive beamformer a p plicable to microphone arrays. The proposed beamformer is a generalized sidelobe canceller (GSC) with a variable blocking matrix using coefficient-constrained adaptive digital filters (CCADFs). The CCADFs minimize leakage of target signal into the interference path of the GSC. Each coefficient of the CCADFs is constrained to avoid mistracking. The input signal to all the CCADFs is the output of a fixed beamformer. In multiple-input canceller, leaky ADFs are used to decrease undesirable target-signal cancellation. The proposed beamformer can allow large look-direction error with almost no degradation in interference-reduction performance and can be implemented with a small number of microphones. The maximum allowable look-direction error can be specified by the user. Simulation results show that the proposed beamformer designed to allow about 20 degrees of look-direction error can suppress interferences by more than 17dB. 1. I N T R O D U C T I O N Microphone arrays have been studied for teleconferencing, hearing aid, speech recognition, and speech enhancement, Especially adaptive microphone arrays are promising technique. They are based on adaptive beamforming such as generalized sidelobe canceller (GSC) and can attain high interference-reduction performance with a small number of microphones arranged in small space [l]. Adaptive beamformers extract the signal from the direction of arrival (DOA) specified by the steering vector, a parameter of beamforming. However, with classical adaptive beamformers based on GSC like simple Griffiths-Jim beamformer (GJBF)[2], target-signal cancellation occurs in the presence of steering-vector error. The error in the steering vector is inevitable with actual microphone arrays. Several signal processing techniques have been proposed to avoid the signal cancellation. These techniques are called robust beamformer after the fact that they are robust against errors. Unfortunately, they still have other problems such as degradation in interference-reduction performance, increase in the number of microphones, or mistracking. In this paper, a new robust adaptive beamformer to avoid these difficulties is proposed. The proposed beamformer uses a variable blocking matrix with coefficient-constrained adaptive digital filters (CCADFs). 0-7803-3 192-3/96 $5.0001996 IEEE 925 2. R O B U S T B E A M F O R M E R S BASED ON GENERALIZED SIDELOBE C A N C E L L E R A structure of the GSC with M microphones is shown in Fig.1. The GSC includes a fixed beamformer (FBF), multiple-input canceller (MC), and blocking matrix (BM). The FBF enhances the target signal. d(b ) is the output signal of the FBF a t sample index b, and zm(b) is the output signal of the m-th imicrophone ~ ( m = 0, ..., M). The MC adaptively subtracts the components correlated to the output signals ym(b) of the BM, froin the delayed output signal d ( k Q) of the FEIF, where Q is the number of delay samples for causality. The BM is a kind of spatial rejection filter. It rejects the target signal and passes interferences. If the input signals ym(b) of MC, which are the output signals of the BM, contain only interferences, the MC rejects the interferences and extract the target signal. However, if the target signal leaks in ym ( I C ) , target-signal cancellation occurs a t the MC. The BM in the simple GJBF is sensitive to the steering-vector error and easily leaks the target signal. 
The vector error is caused by microphone arrangement error, microphone sensitivity error, look-direction error, and so on. In the actual usage, the major factor of the steeringvector error is the look-direction error. This is because the target often changes the position by tlhe speaker movement. It is impossible to know t8he exact DOA of the target signal. Thus, the signal cancellation is an important problem. Several approaches to inhibit target-signal cancellation have been proposed[3]-[SI. Some robust beamformers introduce constraints to the adaptive algorithm in the MCs. Adaptive ailgorithms with leakage[3], noise injection[4], or norm conistraint[5] restrain the undesirable signal-cancellation. The beamformers pass the target signal in the presence of small steering-vector error. However, when they are designed to allow large look-direction error which is often required for microphone arrays, interference reduction is also restrained. Some robust beamformers use improved spatial filters in BM [3][6][7]. The filters eliminate the target signal in the presence of steering-vector error. However, they have been developed to allow small look-direction error. When they are designed to allow large look-direction error, the filters lose a lot of degrees of freedom for interference reduction. The loss in the degrees of freedom degrades interferencereduction performance or requires increase in the number of microphones. Target tracking or calibration is (another approach for",
"title": ""
},
{
"docid": "fd96e152e8579b0e8027ae7131b70fb1",
"text": "(Semi-)automatic mapping — also called (semi-)automatic alignment — of ontologies is a core task to achieve interoperability when two agents or services use different ontologies. In the existing literature, the focus ha s so far been on improving the quality of mapping results. We here consider QOM, Q uick Ontology Mapping, as a way to trade off between effectiveness (i.e. qu ality) and efficiency of the mapping generation algorithms. We show that QOM ha s lower run-time complexity than existing prominent approaches. Then, we show in experiments that this theoretical investigation translates into practical bene fits. While QOM gives up some of the possibilities for producing high-quality resu lts in favor of efficiency, our experiments show that this loss of quality is mar gin l.",
"title": ""
},
{
"docid": "7f110e4769b996de13afe63962bcf2d2",
"text": "Versu is a text-based simulationist interactive drama. Because it uses autonomous agents, the drama is highly replayable: you can play the same story from multiple perspectives, or assign different characters to the various roles. The architecture relies on the notion of a social practice to achieve coordination between the independent autonomous agents. A social practice describes a recurring social situation, and is a successor to the Schankian script. Social practices are implemented as reactive joint plans, providing affordances to the agents who participate in them. The practices never control the agents directly; they merely provide suggestions. It is always the individual agent who decides what to do, using utility-based reactive action selection.",
"title": ""
},
{
"docid": "4502c377c8e4cd83c968b6ce84ce3204",
"text": "Educational information mining is rising field that spotlights on breaking down educational information to create models for enhancing learning encounters and enhancing institutional viability. Expanding enthusiasm for information mining and educational frameworks, make educational information mining as another developing exploration group. Educational Data Mining intends to remove the concealed learning from expansive Educational databases with the utilization of procedures and apparatuses. Educational Data Mining grows new techniques to find information from Educational database and it is utilized for basic decision making in Educational framework. The knowledge is hidden among the Educational informational Sets and it is extractable through data mining techniques. It is essential to think about and dissect Educational information particularly understudies execution. Educational Data Mining (EDM) is the field of study relates about mining Educational information to discover intriguing examples and learning in Educational associations. This investigation is similarly worried about this subject, particularly, the understudies execution. This study investigates numerous components theoretically expected to influence student's performance in higher education, and finds a subjective model which best classifies and predicts the student's performance in light of related individual and phenomenal elements.",
"title": ""
},
{
"docid": "cded40190ef8cc022adeb97c2e77ce36",
"text": "Question classification is very important for question answering. This paper present our research work on question classification through machine learning approach. In order to train the learning model, we designed a rich set of features that are predictive of question categories. An important component of question answering systems is question classification. The task of question classification is to predict the entity type of the answer of a natural language question. Question classification is typically done using machine learning techniques. Different lexical, syntactical and semantic features can be extracted from a question. In this work we combined lexical, syntactic and semantic features which improve the accuracy of classification. Furthermore, we adopted three different classifiers: Nearest Neighbors (NN), Naïve Bayes (NB), and Support Vector Machines (SVM) using two kinds of features: bag-of-words and bag-of n grams. Furthermore, we discovered that when we take SVM classifier and combine the semantic, syntactic, lexical feature we found that it will improve the accuracy of classification. We tested our proposed approaches on the well-known UIUC dataset and succeeded to achieve a new record on the accuracy of classification on this dataset.",
"title": ""
},
{
"docid": "cd8c1c24d4996217c8927be18c48488f",
"text": "Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTMbased models. We propose the weight-dropped LSTM which uses DropConnect on hidden-tohidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.",
"title": ""
},
{
"docid": "990067864c123b45e5c3d06ef1a0cf7d",
"text": "BACKGROUND\nRetrospective single-centre series have shown the feasibility of sentinel lymph-node (SLN) identification in endometrial cancer. We did a prospective, multicentre cohort study to assess the detection rate and diagnostic accuracy of the SLN procedure in predicting the pathological pelvic-node status in patients with early stage endometrial cancer.\n\n\nMETHODS\nPatients with International Federation of Gynecology and Obstetrics (FIGO) stage I-II endometrial cancer had pelvic SLN assessment via cervical dual injection (with technetium and patent blue), and systematic pelvic-node dissection. All lymph nodes were histopathologically examined and SLNs were serial sectioned and examined by immunochemistry. The primary endpoint was estimation of the negative predictive value (NPV) of sentinel-node biopsy per hemipelvis. This is an ongoing study for which recruitment has ended. The study is registered with ClinicalTrials.gov, number NCT00987051.\n\n\nFINDINGS\nFrom July 5, 2007, to Aug 4, 2009, 133 patients were enrolled at nine centres in France. No complications occurred after injection of technetium colloid and no anaphylactic reactions were noted after patent blue injection. No surgical complications were reported during SLN biopsy, including procedures that involved conversion to open surgery. At least one SLN was detected in 111 of the 125 eligible patients. 19 of 111 (17%) had pelvic-lymph-node metastases. Five of 111 patients (5%) had an associated SLN in the para-aortic area. Considering the hemipelvis as the unit of analysis, NPV was 100% (95% CI 95-100) and sensitivity 100% (63-100). Considering the patient as the unit of analysis, three patients had false-negative results (two had metastatic nodes in the contralateral pelvic area and one in the para-aortic area), giving an NPV of 97% (95% CI 91-99) and sensitivity of 84% (62-95). All three of these patients had type 2 endometrial cancer. Immunohistochemistry and serial sectioning detected metastases undiagnosed by conventional histology in nine of 111 (8%) patients with detected SLNs, representing nine of the 19 patients (47%) with metastases. SLN biopsy upstaged 10% of patients with low-risk and 15% of those with intermediate-risk endometrial cancer.\n\n\nINTERPRETATION\nSLN biopsy with cervical dual labelling could be a trade-off between systematic lymphadenectomy and no dissection at all in patients with endometrial cancer of low or intermediate risk. Moreover, our study suggests that SLN biopsy could provide important data to tailor adjuvant therapy.\n\n\nFUNDING\nDirection Interrégionale de Recherche Clinique, Ile-de-France, Assistance Publique-Hôpitaux de Paris.",
"title": ""
},
{
"docid": "1e638842d245472a0d8365b7da27b20a",
"text": "How similar are the experiences of social rejection and physical pain? Extant research suggests that a network of brain regions that support the affective but not the sensory components of physical pain underlie both experiences. Here we demonstrate that when rejection is powerfully elicited--by having people who recently experienced an unwanted break-up view a photograph of their ex-partner as they think about being rejected--areas that support the sensory components of physical pain (secondary somatosensory cortex; dorsal posterior insula) become active. We demonstrate the overlap between social rejection and physical pain in these areas by comparing both conditions in the same individuals using functional MRI. We further demonstrate the specificity of the secondary somatosensory cortex and dorsal posterior insula activity to physical pain by comparing activated locations in our study with a database of over 500 published studies. Activation in these regions was highly diagnostic of physical pain, with positive predictive values up to 88%. These results give new meaning to the idea that rejection \"hurts.\" They demonstrate that rejection and physical pain are similar not only in that they are both distressing--they share a common somatosensory representation as well.",
"title": ""
}
] | scidocsrr |
35c81e99bc7bb0be3ec777516308dfb9 | Supply chain ontology: Review, analysis and synthesis | [
{
"docid": "910c42c4737d38db592f7249c2e0d6d2",
"text": "This document presents the Enterprise Ontology a collection of terms and de nitions relevant to business enterprises It was developed as part of the Enterprise Project a collaborative e ort to provide a framework for enterprise modelling The Enterprise Ontology will serve as a basis for this framework which includes methods and a computer toolset for enterprise modelling We give an overview of the Enterprise Project elaborate on the intended use of the Ontology and discuss the process we went through to build it The scope of the Enterprise Ontology is limited to those core concepts required for the project however it is expected that it will appeal to a wider audience It should not be considered static during the course of the project the Enterprise Ontology will be further re ned and extended",
"title": ""
}
] | [
{
"docid": "928ed1aed332846176ad52ce7cc0754c",
"text": "What is the price of anarchy when unsplittable demands are ro uted selfishly in general networks with load-dependent edge dela ys? Motivated by this question we generalize the model of [14] to the case of weighted congestion games. We show that varying demands of users crucially affect the n ature of these games, which are no longer isomorphic to exact potential gam es, even for very simple instances. Indeed we construct examples where even a single-commodity (weighted) network congestion game may have no pure Nash equ ilibrium. On the other hand, we study a special family of networks (whic h we call the l-layered networks ) and we prove that any weighted congestion game on such a network with resource delays equal to the congestions, pos sesses a pure Nash Equilibrium. We also show how to construct one in pseudo-pol yn mial time. Finally, we give a surprising answer to the question above for s uch games: The price of anarchy of any weighted l-layered network congestion game with m edges and edge delays equal to the loads, is Θ (",
"title": ""
},
{
"docid": "c237facfc6639dfff82659f927a25267",
"text": "The scientific approach to understand the nature of consciousness revolves around the study of human brain. Neurobiological studies that compare the nervous system of different species have accorded highest place to the humans on account of various factors that include a highly developed cortical area comprising of approximately 100 billion neurons, that are intrinsically connected to form a highly complex network. Quantum theories of consciousness are based on mathematical abstraction and Penrose-Hameroff Orch-OR Theory is one of the most promising ones. Inspired by Penrose-Hameroff Orch-OR Theory, Behrman et. al. (Behrman, 2006) have simulated a quantum Hopfield neural network with the structure of a microtubule. They have used an extremely simplified model of the tubulin dimers with each dimer represented simply as a qubit, a single quantum two-state system. The extension of this model to n-dimensional quantum states, or n-qudits presented in this work holds considerable promise for even higher mathematical abstraction in modelling consciousness systems.",
"title": ""
},
{
"docid": "755f7e93dbe43a0ed12eb90b1d320cb2",
"text": "This paper presents a deep architecture for learning a similarity metric on variablelength character sequences. The model combines a stack of character-level bidirectional LSTM’s with a Siamese architecture. It learns to project variablelength strings into a fixed-dimensional embedding space by using only information about the similarity between pairs of strings. This model is applied to the task of job title normalization based on a manually annotated taxonomy. A small data set is incrementally expanded and augmented with new sources of variance. The model learns a representation that is selective to differences in the input that reflect semantic differences (e.g., “Java developer” vs. “HR manager”) but also invariant to nonsemantic string differences (e.g., “Java developer” vs. “Java programmer”).",
"title": ""
},
{
"docid": "e72ed2b388577122402831d4cd75aa0f",
"text": "Development and testing of a compact 200-kV, 10-kJ/s industrial-grade power supply for capacitor charging applications is described. Pulse repetition rate (PRR) can be from single shot to 250 Hz, depending on the storage capacitance. Energy dosing (ED) topology enables high efficiency at switching frequency of up to 55 kHz using standard slow IGBTs. Circuit simulation examples are given. They clearly show zero-current switching at variable frequency during the charge set by the ED governing equations. Peak power drawn from the primary source is about only 60% higher than the average power, which lowers the stress on the input rectifier. Insulation design was assisted by electrostatic field analyses. Field plots of the main transformer insulation illustrate field distribution and stresses in it. Subsystem and system tests were performed including limited insulation life test. A precision, high-impedance, fast HV divider was developed for measuring voltages up to 250 kV with risetime down to 10 μs. The charger was successfully tested with stored energy of up to 550 J at discharge via a custom designed open-air spark gap at PRR up to 20 Hz (in bursts). Future work will include testing at customer sites.",
"title": ""
},
{
"docid": "b0103474ecd369a9f0ba637c34bacc56",
"text": "BACKGROUND\nThe Internet Addiction Test (IAT) by Kimberly Young is one of the most utilized diagnostic instruments for Internet addiction. Although many studies have documented psychometric properties of the IAT, consensus on the optimal overall structure of the instrument has yet to emerge since previous analyses yielded markedly different factor analytic results.\n\n\nOBJECTIVE\nThe objective of this study was to evaluate the psychometric properties of the Italian version of the IAT, specifically testing the factor structure stability across cultures.\n\n\nMETHODS\nIn order to determine the dimensional structure underlying the questionnaire, both exploratory and confirmatory factor analyses were performed. The reliability of the questionnaire was computed by the Cronbach alpha coefficient.\n\n\nRESULTS\nData analyses were conducted on a sample of 485 college students (32.3%, 157/485 males and 67.7%, 328/485 females) with a mean age of 24.05 years (SD 7.3, range 17-47). Results showed 176/485 (36.3%) participants with IAT score from 40 to 69, revealing excessive Internet use, and 11/485 (1.9%) participants with IAT score from 70 to 100, suggesting significant problems because of Internet use. The IAT Italian version showed good psychometric properties, in terms of internal consistency and factorial validity. Alpha values were satisfactory for both the one-factor solution (Cronbach alpha=.91), and the two-factor solution (Cronbach alpha=.88 and Cronbach alpha=.79). The one-factor solution comprised 20 items, explaining 36.18% of the variance. The two-factor solution, accounting for 42.15% of the variance, showed 11 items loading on Factor 1 (Emotional and Cognitive Preoccupation with the Internet) and 7 items on Factor 2 (Loss of Control and Interference with Daily Life). Goodness-of-fit indexes (NNFI: Non-Normed Fit Index; CFI: Comparative Fit Index; RMSEA: Root Mean Square Error of Approximation; SRMR: Standardized Root Mean Square Residual) from confirmatory factor analyses conducted on a random half subsample of participants (n=243) were satisfactory in both factorial solutions: two-factor model (χ²₁₃₂= 354.17, P<.001, χ²/df=2.68, NNFI=.99, CFI=.99, RMSEA=.02 [90% CI 0.000-0.038], and SRMR=.07), and one-factor model (χ²₁₆₉=483.79, P<.001, χ²/df=2.86, NNFI=.98, CFI=.99, RMSEA=.02 [90% CI 0.000-0.039], and SRMR=.07).\n\n\nCONCLUSIONS\nOur study was aimed at determining the most parsimonious and veridical representation of the structure of Internet addiction as measured by the IAT. Based on our findings, support was provided for both single and two-factor models, with slightly strong support for the bidimensionality of the instrument. Given the inconsistency of the factor analytic literature of the IAT, researchers should exercise caution when using the instrument, dividing the scale into factors or subscales. Additional research examining the cross-cultural stability of factor solutions is still needed.",
"title": ""
},
{
"docid": "ef6160d304908ea87287f2071dea5f6d",
"text": "The diffusion of fake images and videos on social networks is a fast growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects, to generate fake images. Many techniques have been proposed to detect such conventional fakes, but new attacks emerge by the day. Image-to-image translation, based on generative adversarial networks (GANs), appears as one of the most dangerous, as it allows one to modify context and semantics of images in a very realistic way. In this paper, we study the performance of several image forgery detectors against image-to-image translation, both in ideal conditions, and in the presence of compression, routinely performed upon uploading on social networks. The study, carried out on a dataset of 36302 images, shows that detection accuracies up to 95% can be achieved by both conventional and deep learning detectors, but only the latter keep providing a high accuracy, up to 89%, on compressed data.",
"title": ""
},
{
"docid": "e8e3f77626742ef7aa40703e3113f148",
"text": "This paper presents a multi-agent based framework for target tracking. We exploit the agent-oriented software paradigm with its characteristics that provide intelligent autonomous behavior together with a real time computer vision system to achieve high performance real time target tracking. The framework consists of four layers; interface, strategic, management, and operation layers. Interface layer receives from the user the tracking parameters such as the number and type of trackers and targets and type of the tracking environment, and then delivers these parameters to the subsequent layers. Strategic (decision making) layer is provided with a knowledge base of target tracking methodologies that are previously implemented by researchers in diverse target tracking applications and are proven successful. And by inference in the knowledge base using the user input a tracking methodology is chosen. Management layer is responsible for pursuing and controlling the tracking methodology execution. Operation layer represents the phases in the tracking methodology and is responsible for communicating with the real-time computer vision system to execute the algorithms in the phases. The framework is presented with a case study to show its ability to tackle the target tracking problem and its flexibility to solve the problem with different tracking parameters. This paper describes the ability of the agent-based framework to deploy any real-time vision system that fits in solving the target tracking problem. It is a step towards a complete open standard, real-time, agent-based framework for target tracking.",
"title": ""
},
{
"docid": "871af4524fcbbae44ba9139bef3481d0",
"text": "AIM\n'Othering' is described as a social process whereby a dominant group or person uses negative attributes to define and subordinate others. Literature suggests othering creates exclusive relationships and puts patients at risk for suboptimal care. A concept analysis delineating the properties of othering was conducted to develop knowledge to support inclusionary practices in nursing.\n\n\nDESIGN\nRodgers' Evolutionary Method for concept analysis guided this study.\n\n\nMETHODS\nThe following databases were searched spanning the years 1999-2015: CINAHL, PUBMED, PsychINFO and Google. Search terms included \"othering\", \"nurse\", \"other\", \"exclusion\" and \"patient\".\n\n\nRESULTS\nTwenty-eight papers were analyzed whereby definitions, related concepts and othering attributes were identified. Findings support that othering in nursing is a sequential process with a trajectory aimed at marginalization and exclusion, which in turn has a negative impact on patient care and professional relationships. Implications are discussed in terms of deriving practical solutions to disrupt othering. We conclude with a conceptual foundation designed to support inclusionary strategies in nursing.",
"title": ""
},
{
"docid": "b15f185258caa9d355fae140a41ae03c",
"text": "The current approaches in terms of information security awareness and education are descriptive (i.e. they are not accomplishment-oriented nor do they recognize the factual/normative dualism); and current research has not explored the possibilities offered by motivation/behavioural theories. The first situation, level of descriptiveness, is deemed to be questionable because it may prove eventually that end-users fail to internalize target goals and do not follow security guidelines, for example ± which is inadequate. Moreover, the role of motivation in the area of information security is not considered seriously enough, even though its role has been widely recognised. To tackle such weaknesses, this paper constructs a conceptual foundation for information systems/organizational security awareness. The normative and prescriptive nature of end-user guidelines will be considered. In order to understand human behaviour, the behavioural science framework, consisting in intrinsic motivation, a theory of planned behaviour and a technology acceptance model, will be depicted and applied. Current approaches (such as the campaign) in the area of information security awareness and education will be analysed from the viewpoint of the theoretical framework, resulting in information on their strengths and weaknesses. Finally, a novel persuasion strategy aimed at increasing users' commitment to security guidelines is presented. spite of its significant role, seems to lack adequate foundations. To begin with, current approaches (e.g. McLean, 1992; NIST, 1995, 1998; Perry, 1985; Morwood, 1998), are descriptive in nature. Their inadequacy with respect to point of departure is partly recognized by McLean (1992), who points out that the approaches presented hitherto do not ensure learning. Learning can also be descriptive, however, which makes it an improper objective for security awareness. Learning and other concepts or approaches are not irrelevant in the case of security awareness, education or training, but these and other approaches need a reasoned contextual foundation as a point of departure in order to be relevant. For instance, if learning does not reflect the idea of prescriptiveness, the objective of the learning approach includes the fact that users may learn guidelines, but nevertheless fails to comply with them in the end. This state of affairs (level of descriptiveness[6]), is an inadequate objective for a security activity (the idea of prescriptiveness will be thoroughly considered in section 3). Also with regard to the content facet, the important role of motivation (and behavioural theories) with respect to the uses of security systems has been recognised (e.g. by NIST, 1998; Parker, 1998; Baskerville, 1989; Spruit, 1998; SSE-CMM, 1998a; 1998b; Straub, 1990; Straub et al., 1992; Thomson and von Solms, 1998; Warman, 1992) ± but only on an abstract level (as seen in Table I, the issue islevel (as seen in Table I, the issue is not considered from the viewpoint of any particular behavioural theory as yet). Motivation, however, is an issue where a deeper understanding may be of crucial relevance with respect to the effectiveness of approaches based on it. The role, possibilities and constraints of motivation and attitude in the effort to achieve positive results with respect to information security activities will be addressed at a conceptual level from the viewpoints of different theories. 
The scope of this paper is limited to the content aspects of awareness (Table I) and further end-users, thus resulting in a research contribution that is: a conceptual foundation and a framework for IS security awareness. This is achieved by addressing the following research questions: . What are the premises, nature and point of departure of awareness? . What is the role of attitude, and particularly motivation: the possibilities and requirements for achieving motivation/user acceptance and commitment with respect to information security tasks? . What approaches can be used as a framework to reach the stage of internalization and end-user",
"title": ""
},
{
"docid": "5c8ab947856945b32d4d3e0edc89a9e0",
"text": "While MOOCs offer educational data on a new scale, many educators find great potential of the big data including detailed activity records of every learner. A learner's behavior such as if a learner will drop out from the course can be predicted. How to provide an effective, economical, and scalable method to detect cheating on tests such as surrogate exam-taker is a challenging problem. In this paper, we present a grade predicting method that uses student activity features to predict whether a learner may get a certification if he/she takes a test. The method consists of two-step classifications: motivation classification (MC) and grade classification (GC). The MC divides all learners into three groups including certification earning, video watching, and course sampling. The GC then predicts a certification earning learner may or may not obtain a certification. Our experiment shows that the proposed method can fit the classification model at a fine scale and it is possible to find a surrogate exam-taker.",
"title": ""
},
{
"docid": "29aa7084f7d6155d4626b682a5fc88ef",
"text": "There is an underlying cascading behavior over road networks. Traffic cascading patterns are of great importance to easing traffic and improving urban planning. However, what we can observe is individual traffic conditions on different road segments at discrete time intervals, rather than explicit interactions or propagation (e.g., A→B) between road segments. Additionally, the traffic from multiple sources and the geospatial correlations between road segments make it more challenging to infer the patterns. In this paper, we first model the three-fold influences existing in traffic propagation and then propose a data-driven approach, which finds the cascading patterns through maximizing the likelihood of observed traffic data. As this is equivalent to a submodular function maximization problem, we solve it by using an approximate algorithm with provable near-optimal performance guarantees based on its submodularity. Extensive experiments on real-world datasets demonstrate the advantages of our approach in both effectiveness and efficiency.",
"title": ""
},
{
"docid": "46e37ce77756f58ab35c0930d45e367f",
"text": "In this letter, we propose an enhanced stereophonic acoustic echo suppression (SAES) algorithm incorporating spectral and temporal correlations in the short-time Fourier transform (STFT) domain. Unlike traditional stereophonic acoustic echo cancellation, SAES estimates the echo spectra in the STFT domain and uses a Wiener filter to suppress echo without performing any explicit double-talk detection. The proposed approach takes account of interdependencies among components in adjacent time frames and frequency bins, which enables more accurate estimation of the echo signals. Experimental results show that the proposed method yields improved performance compared to that of conventional SAES.",
"title": ""
},
{
"docid": "e8681043d4551f6da335a649a6d7b13c",
"text": "In recent years, wireless communication particularly in the front-end transceiver architecture has increased its functionality. This trend is continuously expanding and of particular is reconfigurable radio frequency (RF) front-end. A multi-band single chip architecture which consists of an array of switches and filters could simplify the complexity of the current superheterodyne architecture. In this paper, the design of a Single Pole Double Throw (SPDT) switch using 0.35μm Complementary Metal Oxide Semiconductor (CMOS) technology is discussed. The SPDT RF CMOS switch was then simulated in the range of frequency of 0-2GHz. At 2 GHz, the switch exhibits insertion loss of 1.153dB, isolation of 21.24dB, P1dB of 21.73dBm and IIP3 of 26.02dBm. Critical RF T/R switch characteristic such as insertion loss, isolation, power 1dB compression point and third order intercept point, IIP3 is discussed and compared with other type of switch designs. Pre and post layout simulation of the SPDT RF CMOS switch are also discussed to analyze the effect of parasitic capacitance between components' interconnection.",
"title": ""
},
{
"docid": "dbf8e0125944b526f7b14c98fc46afa2",
"text": "People counting is one of the key techniques in video surveillance. This task usually encounters many challenges in crowded environment, such as heavy occlusion, low resolution, imaging viewpoint variability, etc. Motivated by the success of R-CNN [1] on object detection, in this paper we propose a head detection based people counting method combining the Adaboost algorithm and the CNN. Unlike the R-CNN which uses the general object proposals as the inputs of CNN, our method uses the cascade Adaboost algorithm to obtain the head region proposals for CNN, which can greatly reduce the following classification time. Resorting to the strong ability of feature learning of the CNN, it is used as a feature extractor in this paper, instead of as a classifier as its commonlyused strategy. The final classification is done by a linear SVM classifier trained on the features extracted using the CNN feature extractor. Finally, the prior knowledge can be applied to post-process the detection results to increase the precision of head detection and the people count is obtained by counting the head detection results. A real classroom surveillance dataset is used to evaluate the proposed method and experimental results show that this method has good performance and outperforms the baseline methods, including deformable part model and cascade Adaboost methods. ∗Corresponding author Email address: [email protected] (Chenqiang Gao∗, Pei Li, Yajun Zhang, Jiang Liu, Lan Wang) Preprint submitted to Neurocomputing May 28, 2016",
"title": ""
},
{
"docid": "d69573f767b2e72bcff5ed928ca8271c",
"text": "This article provides a novel analytical method of magnetic circuit on Axially-Laminated Anisotropic (ALA) rotor synchronous reluctance motor when the motor is magnetized on the d-axis. To simplify the calculation, the reluctance of stator magnet yoke and rotor magnetic laminations and leakage magnetic flux all are ignored. With regard to the uneven air-gap brought by the teeth and slots of the stator and rotor, the method resolves the problem with the equivalent air-gap length distribution function, and clarifies the magnetic circuit when the stator teeth are saturated or unsaturated. In order to conduct exact computation, the high order harmonics of the stator magnetic potential are also taken into account.",
"title": ""
},
{
"docid": "33e6abc5ed78316cc03dae8ba5a0bfc8",
"text": "In this paper, we present a deep learning architecture which addresses the problem of 3D semantic segmentation of unstructured point clouds. Compared to previous work, we introduce grouping techniques which define point neighborhoods in the initial world space and the learned feature space. Neighborhoods are important as they allow to compute local or global point features depending on the spatial extend of the neighborhood. Additionally, we incorporate dedicated loss functions to further structure the learned point feature space: the pairwise distance loss and the centroid loss. We show how to apply these mechanisms to the task of 3D semantic segmentation of point clouds and report state-of-the-art performance on indoor and outdoor datasets. ar X iv :1 81 0. 01 15 1v 2 [ cs .C V ] 8 D ec 2 01 8 2 F. Engelmann et al.",
"title": ""
},
{
"docid": "23d9479a38afa6e8061fe431047bed4e",
"text": "We introduce cMix, a new approach to anonymous communications. Through a precomputation, the core cMix protocol eliminates all expensive realtime public-key operations—at the senders, recipients and mixnodes—thereby decreasing real-time cryptographic latency and lowering computational costs for clients. The core real-time phase performs only a few fast modular multiplications. In these times of surveillance and extensive profiling there is a great need for an anonymous communication system that resists global attackers. One widely recognized solution to the challenge of traffic analysis is a mixnet, which anonymizes a batch of messages by sending the batch through a fixed cascade of mixnodes. Mixnets can offer excellent privacy guarantees, including unlinkability of sender and receiver, and resistance to many traffic-analysis attacks that undermine many other approaches including onion routing. Existing mixnet designs, however, suffer from high latency in part because of the need for real-time public-key operations. Precomputation greatly improves the real-time performance of cMix, while its fixed cascade of mixnodes yields the strong anonymity guarantees of mixnets. cMix is unique in not requiring any real-time public-key operations by users. Consequently, cMix is the first mixing suitable for low latency chat for lightweight devices. Our presentation includes a specification of cMix, security arguments, anonymity analysis, and a performance comparison with selected other approaches. We also give benchmarks from our prototype.",
"title": ""
},
{
"docid": "0408aeb750ca9064a070248f0d32d786",
"text": "Mood, attention and motivation co-vary with activity in the neuromodulatory systems of the brain to influence behaviour. These psychological states, mediated by neuromodulators, have a profound influence on the cognitive processes of attention, perception and, particularly, our ability to retrieve memories from the past and make new ones. Moreover, many psychiatric and neurodegenerative disorders are related to dysfunction of these neuromodulatory systems. Neurons of the brainstem nucleus locus coeruleus are the sole source of noradrenaline, a neuromodulator that has a key role in all of these forebrain activities. Elucidating the factors that control the activity of these neurons and the effect of noradrenaline in target regions is key to understanding how the brain allocates attention and apprehends the environment to select, store and retrieve information for generating adaptive behaviour.",
"title": ""
},
{
"docid": "8a708ec1187ecb2fe9fa929b46208b34",
"text": "This paper proposes a new face verification method that uses multiple deep convolutional neural networks (DCNNs) and a deep ensemble, that extracts two types of low dimensional but discriminative and high-level abstracted features from each DCNN, then combines them as a descriptor for face verification. Our DCNNs are built from stacked multi-scale convolutional layer blocks to present multi-scale abstraction. To train our DCNNs, we use different resolutions of triplets that consist of reference images, positive images, and negative images, and triplet-based loss function that maximize the ratio of distances between negative pairs and positive pairs and minimize the absolute distances between positive face images. A deep ensemble is generated from features extracted by each DCNN, and used as a descriptor to train the joint Bayesian learning and its transfer learning method. On the LFW, although we use only 198,018 images and only four different types of networks, the proposed method with the joint Bayesian learning and its transfer learning method achieved 98.33% accuracy. In addition to further increase the accuracy, we combine the proposed method and high dimensional LBP based joint Bayesian method, and achieved 99.08% accuracy on the LFW. Therefore, the proposed method helps to improve the accuracy of face verification when training data is insufficient to train DCNNs.",
"title": ""
},
{
"docid": "95037e7dc3ae042d64a4b343ad4efd39",
"text": "We classify human actions occurring in depth image sequences using features based on skeletal joint positions. The action classes are represented by a multi-level Hierarchical Dirichlet Process – Hidden Markov Model (HDP-HMM). The non-parametric HDP-HMM allows the inference of hidden states automatically from training data. The model parameters of each class are formulated as transformations from a shared base distribution, thus promoting the use of unlabelled examples during training and borrowing information across action classes. Further, the parameters are learnt in a discriminative way. We use a normalized gamma process representation of HDP and margin based likelihood functions for this purpose. We sample parameters from the complex posterior distribution induced by our discriminative likelihood function using elliptical slice sampling. Experiments with two different datasets show that action class models learnt using our technique produce good classification results.",
"title": ""
}
] | scidocsrr |
f0026a7bfaadac338395d72b2bb48017 | Design of an arm exoskeleton with scapula motion for shoulder rehabilitation | [
{
"docid": "8eca353064d3b510b32c486e5f26c264",
"text": "Theoretical control algorithms are developed and an experimental system is described for 6-dof kinesthetic force/moment feedback to a human operator from a remote system. The remote system is a common six-axis slave manipulator with a force/torque sensor, while the haptic interface is a unique, cable-driven, seven-axis, force/moment-reflecting exoskeleton. The exoskeleton is used for input when motion commands are sent to the robot and for output when force/moment wrenches of contact are reflected to the human operator. This system exists at Wright-Patterson AFB. The same techniques are applicable to a virtual environment with physics models and general haptic interfaces.",
"title": ""
}
] | [
{
"docid": "305cfc6824ec7ac30a08ade2fff66c13",
"text": "Psychological research has shown that 'peak-end' effects influence people's retrospective evaluation of hedonic and affective experience. Rather than objectively reviewing the total amount of pleasure or pain during an experience, people's evaluation is shaped by the most intense moment (the peak) and the final moment (end). We describe an experiment demonstrating that peak-end effects can influence a user's preference for interaction sequences that are objectively identical in their overall requirements. Participants were asked to choose which of two interactive sequences of five pages they preferred. Both sequences required setting a total of 25 sliders to target values, and differed only in the distribution of the sliders across the five pages -- with one sequence intended to induce positive peak-end effects, the other negative. The study found that manipulating only the peak or the end of the series did not significantly change preference, but that a combined manipulation of both peak and end did lead to significant differences in preference, even though all series had the same overall effort.",
"title": ""
},
{
"docid": "1fe8f55e2d402c5fe03176cbf83a16c3",
"text": "This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying sequences of binary logic operations, adding sequences of integers, and sorting sequences of real numbers. Overall performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. When applied to character-level language modelling on the Hutter prize Wikipedia dataset, ACT yields intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could be used to infer segment boundaries in sequence data.",
"title": ""
},
{
"docid": "bb0ef8084d0693d7ea453cd321b13e0b",
"text": "Distributed computation is increasingly important for deep learning, and many deep learning frameworks provide built-in support for distributed training. This results in a tight coupling between the neural network computation and the underlying distributed execution, which poses a challenge for the implementation of new communication and aggregation strategies. We argue that decoupling the deep learning framework from the distributed execution framework enables the flexible development of new communication and aggregation strategies. Furthermore, we argue that Ray [12] provides a flexible set of distributed computing primitives that, when used in conjunction with modern deep learning libraries, enable the implementation of a wide range of gradient aggregation strategies appropriate for different computing environments. We show how these primitives can be used to address common problems, and demonstrate the performance benefits empirically.",
"title": ""
},
{
"docid": "e73de1e6f191fef625f75808d7fbfbb1",
"text": "Colon cancer is one of the most prevalent diseases across the world. Numerous epidemiological studies indicate that diets rich in fruit, such as berries, provide significant health benefits against several types of cancer, including colon cancer. The anticancer activities of berries are attributed to their high content of phytochemicals and to their relevant antioxidant properties. In vitro and in vivo studies have demonstrated that berries and their bioactive components exert therapeutic and preventive effects against colon cancer by the suppression of inflammation, oxidative stress, proliferation and angiogenesis, through the modulation of multiple signaling pathways such as NF-κB, Wnt/β-catenin, PI3K/AKT/PKB/mTOR, and ERK/MAPK. Based on the exciting outcomes of preclinical studies, a few berries have advanced to the clinical phase. A limited number of human studies have shown that consumption of berries can prevent colorectal cancer, especially in patients at high risk (familial adenopolyposis or aberrant crypt foci, and inflammatory bowel diseases). In this review, we aim to highlight the findings of berries and their bioactive compounds in colon cancer from in vitro and in vivo studies, both on animals and humans. Thus, this review could be a useful step towards the next phase of berry research in colon cancer.",
"title": ""
},
{
"docid": "d4345ee2baaa016fc38ba160e741b8ee",
"text": "Unstructured data, such as news and blogs, can provide valuable insights into the financial world. We present the NewsStream portal, an intuitive and easy-to-use tool for news analytics, which supports interactive querying and visualizations of the documents at different levels of detail. It relies on a scalable architecture for real-time processing of a continuous stream of textual data, which incorporates data acquisition, cleaning, natural-language preprocessing and semantic annotation components. It has been running for over two years and collected over 18 million news articles and blog posts. The NewsStream portal can be used to answer the questions when, how often, in what context, and with what sentiment was a financial entity or term mentioned in a continuous stream of news and blogs, and therefore providing a complement to news aggregators. We illustrate some features of our system in four use cases: relations between the rating agencies and the PIIGS countries, reflection of financial news on credit default swap (CDS) prices, the emergence of the Bitcoin digital currency, and visualizing how the world is connected through news.",
"title": ""
},
{
"docid": "63f20dd528d54066ed0f189e4c435fe7",
"text": "In many specific laboratories the students use only a PLC simulator software, because the hardware equipment is expensive. This paper presents a solution that allows students to study both the hardware and software parts, in the laboratory works. The hardware part of solution consists in an old plotter, an adapter board, a PLC and a HMI. The software part of this solution is represented by the projects of the students, in which they developed applications for programming the PLC and the HMI. This equipment can be made very easy and can be used in university labs by students, so that they design and test their applications, from low to high complexity [1], [2].",
"title": ""
},
{
"docid": "9423718cce01b45c688066f322b2c2aa",
"text": "Currently there are many techniques based on information technology and communication aimed at assessing the performance of students. Data mining applied in the educational field (educational data mining) is one of the most popular techniques that are used to provide feedback with regard to the teaching-learning process. In recent years there have been a large number of open source applications in the area of educational data mining. These tools have facilitated the implementation of complex algorithms for identifying hidden patterns of information in academic databases. The main objective of this paper is to compare the technical features of three open source tools (RapidMiner, Knime and Weka) as used in educational data mining. These features have been compared in a practical case study on the academic records of three engineering programs in an Ecuadorian university. This comparison has allowed us to determine which tool is most effective in terms of predicting student performance.",
"title": ""
},
{
"docid": "11ce5bca8989b3829683430abe2aee47",
"text": "Android is the most popular smartphone operating system with a market share of 80%, but as a consequence, also the platform most targeted by malware. To deal with the increasing number of malicious Android apps in the wild, malware analysts typically rely on analysis tools to extract characteristic information about an app in an automated fashion. While the importance of such tools has been addressed by the research community, the resulting prototypes remain limited in terms of analysis capabilities and availability. In this paper we present ANDRUBIS, a fully automated, publicly available and comprehensive analysis system for Android apps. ANDRUBIS combines static analysis with dynamic analysis on both Dalvik VM and system level, as well as several stimulation techniques to increase code coverage. With ANDRUBIS, we collected a dataset of over 1,000,000 Android apps, including 40% malicious apps. This dataset allows us to discuss trends in malware behavior observed from apps dating back as far as 2010, as well as to present insights gained from operating ANDRUBIS as a publicly available service for the past two years.",
"title": ""
},
{
"docid": "23384db962a1eb524f40ca52f4852b14",
"text": "Recent developments in Artificial Intelligence (AI) have generated a steep interest from media and general public. As AI systems (e.g. robots, chatbots, avatars and other intelligent agents) are moving from being perceived as a tool to being perceived as autonomous agents and team-mates, an important focus of research and development is understanding the ethical impact of these systems. What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of their actions and decisions? Can an AI system be held accountable for its actions? How can these systems be controlled once their learning capabilities bring them into states that are possibly only remotely linked to their initial, designed, setup? Should such autonomous innovation in commercial systems even be allowed, and how should use and development be regulated? These and many other related questions are currently the focus of much attention. The way society and our systems will be able to deal with these questions will for a large part determine our level of trust, and ultimately, the impact of AI in society, and the existence of AI. Contrary to the frightening images of a dystopic future in media and popular fiction, where AI systems dominate the world and is mostly concerned with warfare, AI is already changing our daily lives mostly in ways that improve human health, safety, and productivity (Stone et al. 2016). This is the case in domain such as transportation; service robots; health-care; education; public safety and security; and entertainment. Nevertheless, and in order to ensure that those dystopic futures do not become reality, these systems must be introduced in ways that build trust and understanding, and respect human and civil rights. The need for ethical considerations in the development of intelligent interactive systems is becoming one of the main influential areas of research in the last few years, and has led to several initiatives both from researchers as from practitioners, including the IEEE initiative on Ethics of Autonomous Systems1, the Foundation for Responsible Robotics2, and the Partnership on AI3 amongst several others. As the capabilities for autonomous decision making grow, perhaps the most important issue to consider is the need to rethink responsibility (Dignum 2017). Whatever their level of autonomy and social awareness and their ability to learn, AI systems are artefacts, constructed by people to fulfil some goals. Theories, methods, algorithms are needed to integrate societal, legal and moral values into technological developments in AI, at all stages of development (analysis, design, construction, deployment and evaluation). These frameworks must deal both with the autonomic reasoning of the machine about such issues that we consider to have ethical impact, but most importantly, we need frameworks to guide design choices, to regulate the reaches of AI systems, to ensure proper data stewardship, and to help individuals determine their own involvement. Values are dependent on the socio-cultural context (Turiel 2002), and are often only implicit in deliberation processes, which means that methodologies are needed to elicit the values held by all the stakeholders, and to make these explicit can lead to better understanding and trust on artificial autonomous systems. 
That is, AI reasoning should be able to take into account societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency. Responsible Artificial Intelligence is about human responsibility for the development of intelligent systems along fundamental human principles and values, to ensure human flourishing and wellbeing in a sustainable world. In fact, Responsible AI is more than the ticking of some ethical ‘boxes’ in a report, or the development of some add-on features, or switch-off buttons in AI systems. Rather, responsibility is fundamental",
"title": ""
},
{
"docid": "d66799a5d65a6f23527a33b124812ea6",
"text": "Time series is an important class of temporal data objects and it can be easily obtained from scientific and financial applications, and anomaly detection for time series is becoming a hot research topic recently. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. In this paper, we have discussed the definition of anomaly and grouped existing techniques into different categories based on the underlying approach adopted by each technique. And for each category, we identify the advantages and disadvantages of the techniques in that category. Then, we provide a briefly discussion on the representative methods recently. Furthermore, we also point out some key issues about multivariate time series anomaly. Finally, some suggestions about anomaly detection are discussed and future research trends are also summarized, which is hopefully beneficial to the researchers of time series and other relative domains.",
"title": ""
},
{
"docid": "45c1119cd76ed4f1470ac398caf6d192",
"text": "UNLABELLED\nL-3,4-Dihydroxy-6-(18)F-fluoro-phenyl-alanine ((18)F-FDOPA) is an amino acid analog used to evaluate presynaptic dopaminergic neuronal function. Evaluation of tumor recurrence in neurooncology is another application. Here, the kinetics of (18)F-FDOPA in brain tumors were investigated.\n\n\nMETHODS\nA total of 37 patients underwent 45 studies; 10 had grade IV, 10 had grade III, and 13 had grade II brain tumors; 2 had metastases; and 2 had benign lesions. After (18)F-DOPA was administered at 1.5-5 MBq/kg, dynamic PET images were acquired for 75 min. Images were reconstructed with iterative algorithms, and corrections for attenuation and scatter were applied. Images representing venous structures, the striatum, and tumors were generated with factor analysis, and from these, input and output functions were derived with simple threshold techniques. Compartmental modeling was applied to estimate rate constants.\n\n\nRESULTS\nA 2-compartment model was able to describe (18)F-FDOPA kinetics in tumors and the cerebellum but not the striatum. A 3-compartment model with corrections for tissue blood volume, metabolites, and partial volume appeared to be superior for describing (18)F-FDOPA kinetics in tumors and the striatum. A significant correlation was found between influx rate constant K and late uptake (standardized uptake value from 65 to 75 min), whereas the correlation of K with early uptake was weak. High-grade tumors had significantly higher transport rate constant k(1), equilibrium distribution volumes, and influx rate constant K than did low-grade tumors (P < 0.01). Tumor uptake showed a maximum at about 15 min, whereas the striatum typically showed a plateau-shaped curve. Patlak graphical analysis did not provide accurate parameter estimates. Logan graphical analysis yielded reliable estimates of the distribution volume and could separate newly diagnosed high-grade tumors from low-grade tumors.\n\n\nCONCLUSION\nA 2-compartment model was able to describe (18)F-FDOPA kinetics in tumors in a first approximation. A 3-compartment model with corrections for metabolites and partial volume could adequately describe (18)F-FDOPA kinetics in tumors, the striatum, and the cerebellum. This model suggests that (18)F-FDOPA was transported but not trapped in tumors, unlike in the striatum. The shape of the uptake curve appeared to be related to tumor grade. After an early maximum, high-grade tumors had a steep descending branch, whereas low-grade tumors had a slowly declining curve, like that for the cerebellum but on a higher scale.",
"title": ""
},
{
"docid": "403310053251e81cdad10addedb64c87",
"text": "Many types of data are best analyzed by fitting a curve using nonlinear regression, and computer programs that perform these calculations are readily available. Like every scientific technique, however, a nonlinear regression program can produce misleading results when used inappropriately. This article reviews the use of nonlinear regression in a practical and nonmathematical manner to answer the following questions: Why is nonlinear regression superior to linear regression of transformed data? How does nonlinear regression differ from polynomial regression and cubic spline? How do nonlinear regression programs work? What choices must an investigator make before performing nonlinear regression? What do the final results mean? How can two sets of data or two fits to one set of data be compared? What problems can cause the results to be wrong? This review is designed to demystify nonlinear regression so that both its power and its limitations will be appreciated.",
"title": ""
},
{
"docid": "32e1b7734ba1b26a6a27e0504db07643",
"text": "Due to its high popularity and rich functionalities, the Portable Document Format (PDF) has become a major vector for malware propagation. To detect malicious PDF files, the first step is to extract and de-obfuscate Java Script codes from the document, for which an effective technique is yet to be created. However, existing static methods cannot de-obfuscate Java Script codes, existing dynamic methods bring high overhead, and existing hybrid methods introduce high false negatives. Therefore, in this paper, we present MPScan, a scanner that combines dynamic Java Script de-obfuscation and static malware detection. By hooking the Adobe Reader's native Java Script engine, Java Script source code and op-code can be extracted on the fly after the source code is parsed and then executed. We also perform a multilevel analysis on the resulting Java Script strings and op-code to detect malware. Our evaluation shows that regardless of obfuscation techniques, MPScan can effectively de-obfuscate and detect 98% malicious PDF samples.",
"title": ""
},
{
"docid": "4f287c788c7e95bf350a998650ff6221",
"text": "Wireless sensor network has become an emerging technology due its wide range of applications in object tracking and monitoring, military commands, smart homes, forest fire control, surveillance, etc. Wireless sensor network consists of thousands of miniature devices which are called sensors but as it uses wireless media for communication, so security is the major issue. There are number of attacks on wireless of which selective forwarding attack is one of the harmful attacks. This paper describes selective forwarding attack and detection techniques against selective forwarding attacks which have been proposed by different researchers. In selective forwarding attacks, malicious nodes act like normal nodes and selectively drop packets. The selective forwarding attack is a serious threat in WSN. Identifying such attacks is very difficult and sometimes impossible. This paper also presents qualitative analysis of detection techniques in tabular form. Keywordswireless sensor network, attacks, selective forwarding attacks, malicious nodes.",
"title": ""
},
{
"docid": "f066cb3e2fc5ee543e0cc76919b261eb",
"text": "Eco-labels are part of a new wave of environmental policy that emphasizes information disclosure as a tool to induce environmentally friendly behavior by both firms and consumers. Little consensus exists as to whether eco-certified products are actually better than their conventional counterparts. This paper seeks to understand the link between eco-certification and product quality. We use data from three leading wine rating publications (Wine Advocate, Wine Enthusiast, and Wine Spectator) to assess quality for 74,148 wines produced in California between 1998 and 2009. Our results indicate that eco-certification is associated with a statistically significant increase in wine quality rating.",
"title": ""
},
{
"docid": "4d3b988de22e4630e1b1eff9e0d4551b",
"text": "In this chapter we present a methodology for introducing and maintaining ontology based knowledge management applications into enterprises with a focus on Knowledge Processes and Knowledge Meta Processes. While the former process circles around the usage of ontologies, the latter process guides their initial set up. We illustrate our methodology by an example from a case study on skills management. The methodology serves as a scaffold for Part B “Ontology Engineering” of the handbook. It shows where more specific concerns of ontology engineering find their place and how they are related in the overall process.",
"title": ""
},
{
"docid": "1444a4acc00c1d7d69a906f6e5f52a6d",
"text": "The prevalence of obesity among children is high and is increasing. We know that obesity runs in families, with children of obese parents at greater risk of developing obesity than children of thin parents. Research on genetic factors in obesity has provided us with estimates of the proportion of the variance in a population accounted for by genetic factors. However, this research does not provide information regarding individual development. To design effective preventive interventions, research is needed to delineate how genetics and environmental factors interact in the etiology of childhood obesity. Addressing this question is especially challenging because parents provide both genes and environment for children. An enormous amount of learning about food and eating occurs during the transition from the exclusive milk diet of infancy to the omnivore's diet consumed by early childhood. This early learning is constrained by children's genetic predispositions, which include the unlearned preference for sweet tastes, salty tastes, and the rejection of sour and bitter tastes. Children also are predisposed to reject new foods and to learn associations between foods' flavors and the postingestive consequences of eating. Evidence suggests that children can respond to the energy density of the diet and that although intake at individual meals is erratic, 24-hour energy intake is relatively well regulated. There are individual differences in the regulation of energy intake as early as the preschool period. These individual differences in self-regulation are associated with differences in child-feeding practices and with children's adiposity. This suggests that child-feeding practices have the potential to affect children's energy balance via altering patterns of intake. Initial evidence indicates that imposition of stringent parental controls can potentiate preferences for high-fat, energy-dense foods, limit children's acceptance of a variety of foods, and disrupt children's regulation of energy intake by altering children's responsiveness to internal cues of hunger and satiety. This can occur when well-intended but concerned parents assume that children need help in determining what, when, and how much to eat and when parents impose child-feeding practices that provide children with few opportunities for self-control. Implications of these findings for preventive interventions are discussed.",
"title": ""
},
{
"docid": "ff50d07261681dcc210f01593ad2c109",
"text": "A mathematical model of the system composed of two sensors, the semicircular canal and the sacculus, is suggested. The model is described by three lines of blocks, each line of which has the following structure: a biomechanical block, a mechanoelectrical transduction mechanism, and a block describing the hair cell ionic currents and membrane potential dynamics. The response of this system to various stimuli (head rotation under gravity and falling) is investigated. Identification of the model parameters was done with the experimental data obtained for the axolotl (Ambystoma tigrinum) at the Institute of Physiology, Autonomous University of Puebla, Mexico. Comparative analysis of the semicircular canal and sacculus membrane potentials is presented.",
"title": ""
},
{
"docid": "23d7eb4d414e4323c44121040c3b2295",
"text": "BACKGROUND\nThe use of clinical decision support systems to facilitate the practice of evidence-based medicine promises to substantially improve health care quality.\n\n\nOBJECTIVE\nTo describe, on the basis of the proceedings of the Evidence and Decision Support track at the 2000 AMIA Spring Symposium, the research and policy challenges for capturing research and practice-based evidence in machine-interpretable repositories, and to present recommendations for accelerating the development and adoption of clinical decision support systems for evidence-based medicine.\n\n\nRESULTS\nThe recommendations fall into five broad areas--capture literature-based and practice-based evidence in machine--interpretable knowledge bases; develop maintainable technical and methodological foundations for computer-based decision support; evaluate the clinical effects and costs of clinical decision support systems and the ways clinical decision support systems affect and are affected by professional and organizational practices; identify and disseminate best practices for work flow-sensitive implementations of clinical decision support systems; and establish public policies that provide incentives for implementing clinical decision support systems to improve health care quality.\n\n\nCONCLUSIONS\nAlthough the promise of clinical decision support system-facilitated evidence-based medicine is strong, substantial work remains to be done to realize the potential benefits.",
"title": ""
}
] | scidocsrr |
0e718877fe2f6ef795736d50498af25a | A Compact UWB Three-Way Power Divider | [
{
"docid": "6f671b7b67a543f923b3253b018ff221",
"text": "This letter presents the design and measured performance of a microstrip three-way power combiner. The combiner is designed using the conventional Wilkinson topology with the extension to three outputs, which has been rarely considered for the design and fabrication of V-way combiners. It is shown that with an appropriate design approach, the main drawback reported with this topology (nonplanarity of the circuit when N > 2) can be minimized to have a negligible effect on the circuit performance and still allow an easy MIC or MHMIC fabrication.",
"title": ""
},
{
"docid": "2baf55123171c6e2110b19b1583c3d17",
"text": "A novel three-way power divider using tapered lines is presented. It has several strip resistors which are formed like a ladder between the tapered-line conductors to achieve a good output isolation. The equivalent circuits are derived with the EE/OE/OO-mode analysis based on the fundamental propagation modes in three-conductor coupled lines. The fabricated three-way power divider shows a broadband performance in input return loss which is greater than 20 dB over a 3:1 bandwidth in the C-Ku bands.",
"title": ""
}
] | [
{
"docid": "648bfc5deeb52aaf9bc4c766e1ae4b70",
"text": "In this letter, a miniature 0.97–1.53-GHz tunable four-pole bandpass filter with constant fractional bandwidth is demonstrated. The filter consists of three quarter-wavelength resonators and one half-wavelength resonator. By introducing cross-coupling, two transmission zeroes are generated and are located at both sides of the passband. Also, source–load coupling is employed to produce two extra transmission zeroes, resulting in a miniature (<inline-formula> <tex-math notation=\"LaTeX\">$0.09\\lambda _{{\\text {g}}}\\times 0.1\\lambda _{{\\text {g}}}$ </tex-math></inline-formula>) four-pole, four-transmission zero filter with high selectivity. The measured results show a tuning range of 0.97–1.53 GHz with an insertion loss of 4.2–2 dB and 1-dB fractional bandwidth of 5.5%. The four transmission zeroes change with the passband synchronously, ensuring high selectivity over a wide tuning range. The application areas are in software-defined radios in high-interference environments.",
"title": ""
},
{
"docid": "18548de7ebb6609ff2ce9b8d9d673f57",
"text": "In this work we discuss the related challenges and describe an approach towards the fusion of state-of-the-art technologies from the Spoken Dialogue Systems (SDS) and the Semantic Web and Information Retrieval domains. We envision a dialogue system named LD-SDS that will support advanced, expressive, and engaging user requests, over multiple, complex, rich, and open-domain data sources that will leverage the wealth of the available Linked Data. Specifically, we focus on: a) improving the identification, disambiguation and linking of entities occurring in data sources and user input; b) offering advanced query services for exploiting the semantics of the data, with reasoning and exploratory capabilities; and c) expanding the typical information seeking dialogue model (slot filling) to better reflect real-world conversational search scenarios.",
"title": ""
},
{
"docid": "574c07709b65749bc49dd35d1393be80",
"text": "Optical coherence tomography (OCT) is used for non-invasive diagnosis of diabetic macular edema assessing the retinal layers. In this paper, we propose a new fully convolutional deep architecture, termed ReLayNet, for end-to-end segmentation of retinal layers and fluid masses in eye OCT scans. ReLayNet uses a contracting path of convolutional blocks (encoders) to learn a hierarchy of contextual features, followed by an expansive path of convolutional blocks (decoders) for semantic segmentation. ReLayNet is trained to optimize a joint loss function comprising of weighted logistic regression and Dice overlap loss. The framework is validated on a publicly available benchmark dataset with comparisons against five state-of-the-art segmentation methods including two deep learning based approaches to substantiate its effectiveness.",
"title": ""
},
{
"docid": "8bab67e95bdb7cf1ded4a05f7b9c503d",
"text": "A national sample of 295 transgender adults and their nontransgender siblings were surveyed about demographics, perceptions of social support, and violence, harassment, and discrimination. Transwomen were older than the other 4 groups. Transwomen, transmen, and genderqueers were more highly educated than nontransgender sisters and nontransgender brothers, but did not have a corresponding higher income. Other demographic differences between groups were found in religion, geographic mobility, relationship status, and sexual orientation. Transgender people were more likely to experience harassment and discrimination than nontransgender sisters and nontransgender brothers. All transgender people perceived less social support from family than nontransgender sisters. This is the first study to compare trans people to nontrans siblings as a comparison group.",
"title": ""
},
{
"docid": "eeafcab155da5229bf26ddc350e37951",
"text": "Interferons (IFNs) are the hallmark of the vertebrate antiviral system. Two of the three IFN families identified in higher vertebrates are now known to be important for antiviral defence in teleost fish. Based on the cysteine patterns, the fish type I IFN family can be divided into two subfamilies, which possibly interact with distinct receptors for signalling. The fish type II IFN family consists of two members, IFN-γ with similar functions to mammalian IFN-γ and a teleost specific IFN-γ related (IFN-γrel) molecule whose functions are not fully elucidated. These two type II IFNs also appear to bind to distinct receptors to exert their functions. It has become clear that fish IFN responses are mediated by the host pattern recognition receptors and an array of transcription factors including the IFN regulatory factors, the Jak/Stat proteins and the suppressor of cytokine signalling (SOCS) molecules.",
"title": ""
},
{
"docid": "800337ef10a4245db4e45a1a5931e578",
"text": "This paper describes a method for generating sense-tagged data using Wikipedia as a source of sense annotations. Through word sense disambiguation experiments, we show that the Wikipedia-based sense annotations are reliable and can be used to construct accurate sense classifiers.",
"title": ""
},
{
"docid": "31712d0398ac98598e77f05ebbf917a2",
"text": "This paper illustrates the mechanical structure's spherical motion, kinematic matrices and achievable workspace of an exoskeleton upper limb device. The purpose of this paper is to assist individuals that have lost their upper limb motor functions by creating an exoskeleton device that does not require an external support; but still provides a large workspace. This allows for movement according to the Activities of Daily Living (ADL).",
"title": ""
},
{
"docid": "305f0c417d1e6f6189c431078b359793",
"text": "Sentence relation extraction aims to extract relational facts from sentences, which is an important task in natural language processing field. Previous models rely on the manually labeled supervised dataset. However, the human annotation is costly and limits to the number of relation and data size, which is difficult to scale to large domains. In order to conduct largely scaled relation extraction, we utilize an existing knowledge base to heuristically align with texts, which not rely on human annotation and easy to scale. However, using distant supervised data for relation extraction is facing a new challenge: sentences in the distant supervised dataset are not directly labeled and not all sentences that mentioned an entity pair can represent the relation between them. To solve this problem, we propose a novel model with reinforcement learning. The relation of the entity pair is used as distant supervision and guide the training of relation extractor with the help of reinforcement learning method. We conduct two types of experiments on a publicly released dataset. Experiment results demonstrate the effectiveness of the proposed method compared with baseline models, which achieves 13.36% improvement.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "da217097f8ab7b08fcf7a91263785996",
"text": "Parallel bit stream algorithms exploit the SWAR (SIMD within a register) capabilities of commodity processors in high-performance text processing applications such as UTF-8 to UTF-16 transcoding, XML parsing, string search and regular expression matching. Direct architectural support for these algorithms in future SWAR instruction sets could further increase performance as well as simplifying the programming task. A set of simple SWAR instruction set extensions are proposed for this purpose based on the principle of systematic support for inductive doubling as an algorithmic technique. These extensions are shown to significantly reduce instruction count in core parallel bit stream algorithms, often providing a 3X or better improvement. The extensions are also shown to be useful for SWAR programming in other application areas, including providing a systematic treatment for horizontal operations. An implementation model for these extensions involves relatively simple circuitry added to the operand fetch components in a pipelined processor.",
"title": ""
},
{
"docid": "77e501546d95fa18cf2a459fae274875",
"text": "Complex organizations exhibit surprising, nonlinear behavior. Although organization scientists have studied complex organizations for many years, a developing set of conceptual and computational tools makes possible new approaches to modeling nonlinear interactions within and between organizations. Complex adaptive system models represent a genuinely new way of simplifying the complex. They are characterized by four key elements: agents with schemata, self-organizing networks sustained by importing energy, coevolution to the edge of chaos, and system evolution based on recombination. New types of models that incorporate these elements will push organization science forward by merging empirical observation with computational agent-based simulation. Applying complex adaptive systems models to strategic management leads to an emphasis on building systems that can rapidly evolve effective adaptive solutions. Strategic direction of complex organizations consists of establishing and modifying environments within which effective, improvised, self-organized solutions can evolve. Managers influence strategic behavior by altering the fitness landscape for local agents and reconfiguring the organizational architecture within which agents adapt. (Complexity Theory; Organizational Evolution; Strategic Management) Since the open-systems view of organizations began to diffuse in the 1960s, comnplexity has been a central construct in the vocabulary of organization scientists. Open systems are open because they exchange resources with the environment, and they are systems because they consist of interconnected components that work together. In his classic discussion of hierarchy in 1962, Simon defined a complex system as one made up of a large number of parts that have many interactions (Simon 1996). Thompson (1967, p. 6) described a complex organization as a set of interdependent parts, which together make up a whole that is interdependent with some larger environment. Organization theory has treated complexity as a structural variable that characterizes both organizations and their environments. With respect to organizations, Daft (1992, p. 15) equates complexity with the number of activities or subsystems within the organization, noting that it can be measured along three dimensions. Vertical complexity is the number of levels in an organizational hierarchy, horizontal complexity is the number of job titles or departments across the organization, and spatial complexity is the number of geographical locations. With respect to environments, complexity is equated with the number of different items or elements that must be dealt with simultaneously by the organization (Scott 1992, p. 230). Organization design tries to match the complexity of an organization's structure with the complexity of its environment and technology (Galbraith 1982). The very first article ever published in Organization Science suggested that it is inappropriate for organization studies to settle prematurely into a normal science mindset, because organizations are enormously complex (Daft and Lewin 1990). What Daft and Lewin meant is that the behavior of complex systems is surprising and is hard to 1047-7039/99/1003/0216/$05.OO ORGANIZATION SCIENCE/Vol. 10, No. 3, May-June 1999 Copyright ? 1999, Institute for Operations Research pp. 216-232 and the Management Sciences PHILIP ANDERSON Complexity Theory and Organization Science predict, because it is nonlinear (Casti 1994). 
In nonlinear systems, intervening to change one or two parameters a small amount can drastically change the behavior of the whole system, and the whole can be very different from the sum of the parts. Complex systems change inputs to outputs in a nonlinear way because their components interact with one another via a web of feedback loops. Gell-Mann (1994a) defines complexity as the length of the schema needed to describe and predict the properties of an incoming data stream by identifying its regularities. Nonlinear systems can difficult to compress into a parsimonious description: this is what makes them complex (Casti 1994). According to Simon (1996, p. 1), the central task of a natural science is to show that complexity, correctly viewed, is only a mask for simplicity. Both social scientists and people in organizations reduce a complex description of a system to a simpler one by abstracting out what is unnecessary or minor. To build a model is to encode a natural system into a formal system, compressing a longer description into a shorter one that is easier to grasp. Modeling the nonlinear outcomes of many interacting components has been so difficult that both social and natural scientists have tended to select more analytically tractable problems (Casti 1994). Simple boxes-andarrows causal models are inadequate for modeling systems with complex interconnections and feedback loops, even when nonlinear relations between dependent and independent variables are introduced by means of exponents, logarithms, or interaction terms. How else might we compress complex behavior so we can comprehend it? For Perrow (1967), the more complex an organization is, the less knowable it is and the more deeply ambiguous is its operation. Modem complexity theory suggests that some systems with many interactions among highly differentiated parts can produce surprisingly simple, predictable behavior, while others generate behavior that is impossible to forecast, though they feature simple laws and few actors. As Cohen and Stewart (1994) point out, normal science shows how complex effects can be understood from simple laws; chaos theory demonstrates that simple laws can have complicated, unpredictable consequences; and complexity theory describes how complex causes can produce simple effects. Since the mid-1980s, new approaches to modeling complex systems have been emerging from an interdisciplinary invisible college, anchored on the Santa Fe Institute (see Waldrop 1992 for a historical perspective). The agenda of these scholars includes identifying deep principles underlying a wide variety of complex systems, be they physical, biological, or social (Fontana and Ballati 1999). Despite somewhat frequent declarations that a new paradigm has emerged, it is still premature to declare that a science of complexity, or even a unified theory of complex systems, exists (Horgan 1995). Holland and Miller (1991) have likened the present situation to that of evolutionary theory before Fisher developed a mathematical theory of genetic selection. This essay is not a review of the emerging body of research in complex systems, because that has been ably reviewed many times, in ways accessible to both scholars and managers. Table 1 describes a number of recent, prominent books and articles that inform this literature; Heylighen (1997) provides an excellent introductory bibliography, with a more comprehensive version available on the Internet at http://pespmcl.vub.ac.be/ Evocobib. html. 
Organization science has passed the point where we can regard as novel a summary of these ideas or an assertion that an empirical phenomenon is consistent with them (see Browning et al. 1995 for a pathbreaking example). Six important insights, explained at length in the works cited in Table 1, should be regarded as well-established scientifically. First, many dynamical systems (whose state at time t determines their state at time t + 1) do not reach either a fixed-point or a cyclical equilibrium (see Dooley and Van de Ven's paper in this issue). Second, processes that appear to be random may be chaotic, revolving around identifiable types of attractors in a deterministic way that seldom if ever return to the same state. An attractor is a limited area in a system's state space that it never departs. Chaotic systems revolve around \"strange attractors,\" fractal objects that constrain the system to a small area of its state space, which it explores in a neverending series that does not repeat in a finite amount of time. Tests exist that can establish whether a given process is random or chaotic (Koput 1997, Ott 1993). Similarly, time series that appear to be random walks may actually be fractals with self-reinforcing trends (Bar-Yam 1997). Third, the behavior of complex processes can be quite sensitive to small differences in initial conditions, so that two entities with very similar initial states can follow radically divergent paths over time. Consequently, historical accidents may \"tip\" outcomes strongly in a particular direction (Arthur 1989). Fourth, complex systems resist simple reductionist analyses, because interconnections and feedback loops preclude holding some subsystems constant in order to study others in isolation. Because descriptions at multiple scales are necessary to identify how emergent properties are produced (Bar-Yam 1997), reductionism and holism are complementary strategies in analyzing such systems (Fontana and Ballati ORGANIZATION SCIENCE/Vol. 10, No. 3, May-June 1999 217 PHILIP ANDERSON Complexity Theory and Organization Science Table 1 Selected Resources that Provide an Overview of Complexity Theory Allison and Kelly, 1999 Written for managers, this book provides an overview of major themes in complexity theory and discusses practical applications rooted in-experiences at firms such as Citicorp. Bar-Yam, 1997 A very comprehensive introduction for mathematically sophisticated readers, the book discusses the major computational techniques used to analyze complex systems, including spin-glass models, cellular automata, simulation methodologies, and fractal analysis. Models are developed to describe neural networks, protein folding, developmental biology, and the evolution of human civilization. Brown and Eisenhardt, 1998 Although this book is not an introduction to complexity theory, a series of small tables throughout the text introduces and explains most of the important concepts. The purpose of the book is to view stra",
"title": ""
},
{
"docid": "b29f2d688e541463b80006fac19eaf20",
"text": "Autonomous navigation has become an increasingly popular machine learning application. Recent advances in deep learning have also brought huge improvements to autonomous navigation. However, prior outdoor autonomous navigation methods depended on various expensive sensors or expensive and sometimes erroneously labeled real data. In this paper, we propose an autonomous navigation method that does not require expensive labeled real images and uses only a relatively inexpensive monocular camera. Our proposed method is based on (1) domain adaptation with an adversarial learning framework and (2) exploiting synthetic data from a simulator. To the best of the authors’ knowledge, this is the first work to apply domain adaptation with adversarial networks to autonomous navigation. We present empirical results on navigation in outdoor courses using an unmanned aerial vehicle. The performance of our method is comparable to that of a supervised model with labeled real data, although our method does not require any label information for the real data. Our proposal includes a theoretical analysis that supports the applicability of our approach.",
"title": ""
},
{
"docid": "21b8998910c792d389ccd8a6d8620555",
"text": "Theory and research suggest that people can increase their happiness through simple intentional positive activities, such as expressing gratitude or practicing kindness. Investigators have recently begun to study the optimal conditions under which positive activities increase happiness and the mechanisms by which these effects work. According to our positive-activity model, features of positive activities (e.g., their dosage and variety), features of persons (e.g., their motivation and effort), and person-activity fit moderate the effect of positive activities on well-being. Furthermore, the model posits four mediating variables: positive emotions, positive thoughts, positive behaviors, and need satisfaction. Empirical evidence supporting the model and future directions are discussed.",
"title": ""
},
{
"docid": "833ec45dfe660377eb7367e179070322",
"text": "It was predicted that high self-esteem Ss (HSEs) would rationalize an esteem-threatening decision less than low self-esteem Ss (LSEs), because HSEs presumably had more favorable self-concepts with which to affirm, and thus repair, their overall sense of self-integrity. This prediction was supported in 2 experiments within the \"free-choice\" dissonance paradigm--one that manipulated self-esteem through personality feedback and the other that varied it through selection of HSEs and LSEs, but only when Ss were made to focus on their self-concepts. A 3rd experiment countered an alternative explanation of the results in terms of mood effects that may have accompanied the experimental manipulations. The results were discussed in terms of the following: (a) their support for a resources theory of individual differences in resilience to self-image threats--an extension of self-affirmation theory, (b) their implications for self-esteem functioning, and (c) their implications for the continuing debate over self-enhancement versus self-consistency motivation.",
"title": ""
},
{
"docid": "f4a703793623890b59a8f7471fc49d0e",
"text": "The authors investigate the interplay between answer quality and answer speed across question types in community question-answering sites (CQAs). The research questions addressed are the following: (a) How do answer quality and answer speed vary across question types? (b) How do the relationships between answer quality and answer speed vary across question types? (c) How do the best quality answers and the fastest answers differ in terms of answer quality and answer speed across question types? (d) How do trends in answer quality vary over time across question types? From the posting of 3,000 questions in six CQAs, 5,356 answers were harvested and analyzed. There was a significant difference in answer quality and answer speed across question types, and there were generally no significant relationships between answer quality and answer speed. The best quality answers had better overall answer quality than the fastest answers but generally took longer to arrive. In addition, although the trend in answer quality had been mostly random across all question types, the quality of answers appeared to improve gradually when given time. By highlighting the subtle nuances in answer quality and answer speed across question types, this study is an attempt to explore a territory of CQA research that has hitherto been relatively uncharted.",
"title": ""
},
{
"docid": "06a69f318c5967e99638a2adf5520e90",
"text": "In this article, a case is made for improving the school success of ethnically diverse students through culturally responsive teaching and for preparing teachers in preservice education programs with the knowledge, attitudes, and skills needed to do this. The ideas presented here are brief sketches of more thorough explanations included in my recent book, Culturally Responsive Teaching: Theory, Research, and Practice (2000). The specific components of this approach to teaching are based on research findings, theoretical claims, practical experiences, and personal stories of educators researching and working with underachieving African, Asian, Latino, and Native American students. These data were produced by individuals from a wide variety of disciplinary backgrounds including anthropology, sociology, psychology, sociolinguistics, communications, multicultural education, K-college classroom teaching, and teacher education. Five essential elements of culturally responsive teaching are examined: developing a knowledge base about cultural diversity, including ethnic and cultural diversity content in the curriculum, demonstrating caring and building learning communities, communicating with ethnically diverse students, and responding to ethnic diversity in the delivery of instruction. Culturally responsive teaching is defined as using the cultural characteristics, experiences, and perspectives of ethnically diverse students as conduits for teaching them more effectively. It is based on the assumption that when academic knowledge and skills are situated within the lived experiences and frames of reference of students, they are more personally meaningful, have higher interest appeal, and are learned more easily and thoroughly (Gay, 2000). As a result, the academic achievement of ethnically diverse students will improve when they are taught through their own cultural and experiential filters (Au & Kawakami, 1994; Foster, 1995; Gay, 2000; Hollins, 1996; Kleinfeld, 1975; Ladson-Billings, 1994, 1995).",
"title": ""
},
{
"docid": "19b537f7356da81830c8f7908af83669",
"text": "Investigation of the hippocampus has historically focused on computations within the trisynaptic circuit. However, discovery of important anatomical and functional variability along its long axis has inspired recent proposals of long-axis functional specialization in both the animal and human literatures. Here, we review and evaluate these proposals. We suggest that various long-axis specializations arise out of differences between the anterior (aHPC) and posterior hippocampus (pHPC) in large-scale network connectivity, the organization of entorhinal grid cells, and subfield compositions that bias the aHPC and pHPC towards pattern completion and separation, respectively. The latter two differences give rise to a property, reflected in the expression of multiple other functional specializations, of coarse, global representations in anterior hippocampus and fine-grained, local representations in posterior hippocampus.",
"title": ""
},
{
"docid": "fc453b8e101a0eae542cc69881bbe7d4",
"text": "The statistical properties of Clarke's fading model with a finite number of sinusoids are analyzed, and an improved reference model is proposed for the simulation of Rayleigh fading channels. A novel statistical simulation model for Rician fading channels is examined. The new Rician fading simulation model employs a zero-mean stochastic sinusoid as the specular (line-of-sight) component, in contrast to existing Rician fading simulators that utilize a non-zero deterministic specular component. The statistical properties of the proposed Rician fading simulation model are analyzed in detail. It is shown that the probability density function of the Rician fading phase is not only independent of time but also uniformly distributed over [-pi, pi). This property is different from that of existing Rician fading simulators. The statistical properties of the new simulators are confirmed by extensive simulation results, showing good agreement with theoretical analysis in all cases. An explicit formula for the level-crossing rate is derived for general Rician fading when the specular component has non-zero Doppler frequency",
"title": ""
},
{
"docid": "93adb6d22531c0ec6335a7bec65f4039",
"text": "The term stroke-based rendering collectively describes techniques where images are generated from elements that are usually larger than a pixel. These techniques lend themselves well for rendering artistic styles such as stippling and hatching. This paper presents a novel approach for stroke-based rendering that exploits multi agent systems. RenderBots are individual agents each of which in general represents one stroke. They form a multi agent system and undergo a simulation to distribute themselves in the environment. The environment consists of a source image and possibly additional G-buffers. The final image is created when the simulation is finished by having each RenderBot execute its painting function. RenderBot classes differ in their physical behavior as well as their way of painting so that different styles can be created in a very flexible way.",
"title": ""
},
{
"docid": "a9a5846e370fabfc8716f06397857aae",
"text": "A QR code is a special type of barcode that can encode information like numbers, letters, and any other characters. The capacity of a given QR code depends on the version and error correction level, as also the data type which are encoded. A QR code framework for mobile phone applications by exploiting the spectral diversity afforded by the cyan (C), magenta (M), and yellow (Y) print colorant channels commonly used for color printing and the complementary red (R), green (G), and blue (B) channels, which captures the color images had been proposed. Specifically, this spectral diversity to realize a threefold increase in the data rate by encoding independent data the C, Y, and M channels and decoding the data from the complementary R, G, and B channels. In most cases ReedSolomon error correction codes will be used for generating error correction codeword‟s and also to increase the interference cancellation rate. Experimental results will show that the proposed framework successfully overcomes both single and burst errors and also providing a low bit error rate and a high decoding rate for each of the colorant channels when used with a corresponding error correction scheme. Finally proposed system was successfully synthesized using QUARTUS II EDA tools.",
"title": ""
}
] | scidocsrr |
ab61ccde29cca0905bc0758058266af8 | Performance of a Precoding MIMO System for Decentralized Multiuser Indoor Visible Light Communications | [
{
"docid": "4583555a91527244488b9658288f4dc2",
"text": "The use of space-division multiple access (SDMA) in the downlink of a multiuser multiple-input, multiple-output (MIMO) wireless communications network can provide a substantial gain in system throughput. The challenge in such multiuser systems is designing transmit vectors while considering the co-channel interference of other users. Typical optimization problems of interest include the capacity problem - maximizing the sum information rate subject to a power constraint-or the power control problem-minimizing transmitted power such that a certain quality-of-service metric for each user is met. Neither of these problems possess closed-form solutions for the general multiuser MIMO channel, but the imposition of certain constraints can lead to closed-form solutions. This paper presents two such constrained solutions. The first, referred to as \"block-diagonalization,\" is a generalization of channel inversion when there are multiple antennas at each receiver. It is easily adapted to optimize for either maximum transmission rate or minimum power and approaches the optimal solution at high SNR. The second, known as \"successive optimization,\" is an alternative method for solving the power minimization problem one user at a time, and it yields superior results in some (e.g., low SNR) situations. Both of these algorithms are limited to cases where the transmitter has more antennas than all receive antennas combined. In order to accommodate more general scenarios, we also propose a framework for coordinated transmitter-receiver processing that generalizes the two algorithms to cases involving more receive than transmit antennas. While the proposed algorithms are suboptimal, they lead to simpler transmitter and receiver structures and allow for a reasonable tradeoff between performance and complexity.",
"title": ""
}
] | [
{
"docid": "a5ac7aa3606ebb683d4d9de5dcd89856",
"text": "Advanced persistent threats (APTs) pose a significant risk to nearly every infrastructure. Due to the sophistication of these attacks, they are able to bypass existing security systems and largely infiltrate the target network. The prevention and detection of APT campaigns is also challenging, because of the fact that the attackers constantly change and evolve their advanced techniques and methods to stay undetected. In this paper we analyze 22 different APT reports and give an overview of the used techniques and methods. The analysis is focused on the three main phases of APT campaigns that allow to identify the relevant characteristics of such attacks. For each phase we describe the most commonly used techniques and methods. Through this analysis we could reveal different relevant characteristics of APT campaigns, for example that the usage of 0-day exploit is not common for APT attacks. Furthermore, the analysis shows that the dumping of credentials is a relevant step in the lateral movement phase for most APT campaigns. Based on the identified characteristics, we also propose concrete prevention and detection approaches that make it possible to identify crucial malicious activities that are performed during APT campaigns.",
"title": ""
},
{
"docid": "e8a01490bc3407a2f8e204408e34c5b3",
"text": "This paper presents the design and implementation of a Class EF2 inverter and Class EF2 rectifier for two -W wireless power transfer (WPT) systems, one operating at 6.78 MHz and the other at 27.12 MHz. It will be shown that the Class EF2 circuits can be designed to have beneficial features for WPT applications such as reduced second-harmonic component and lower total harmonic distortion, higher power-output capability, reduction in magnetic core requirements and operation at higher frequencies in rectification compared to other circuit topologies. A model will first be presented to analyze the circuits and to derive values of its components to achieve optimum switching operation. Additional analysis regarding harmonic content, magnetic core requirements and open-circuit protection will also be performed. The design and implementation process of the two Class-EF2-based WPT systems will be discussed and compared to an equivalent Class-E-based WPT system. Experimental results will be provided to confirm validity of the analysis. A dc-dc efficiency of 75% was achieved with Class-EF2-based systems.",
"title": ""
},
{
"docid": "d62c2e7ca3040900d04f83ef4f99de4f",
"text": "Manual classification of brain tumor is time devastating and bestows ambiguous results. Automatic image classification is emergent thriving research area in medical field. In the proposed methodology, features are extracted from raw images which are then fed to ANFIS (Artificial neural fuzzy inference system).ANFIS being neuro-fuzzy system harness power of both hence it proves to be a sophisticated framework for multiobject classification. A comprehensive feature set and fuzzy rules are selected to classify an abnormal image to the corresponding tumor type. This proposed technique is fast in execution, efficient in classification and easy in implementation.",
"title": ""
},
{
"docid": "46c4b4a68e0be453148779529f235e98",
"text": "Received Feb 14, 2017 Revised Apr 14, 2017 Accepted Apr 28, 2017 This paper proposes maximum boost control for 7-level z-source cascaded h-bridge inverter and their affiliation between voltage boost gain and modulation index. Z-source network avoids the usage of external dc-dc boost converter and improves output voltage with minimised harmonic content. Z-source network utilises distinctive LC impedance combination with 7-level cascaded inverter and it conquers the conventional voltage source inverter. The maximum boost controller furnishes voltage boost and maintain constant voltage stress across power switches, which provides better output voltage with variation of duty cycles. Single phase 7-level z-source cascaded inverter simulated using matlab/simulink. Keyword:",
"title": ""
},
{
"docid": "c337226d663e69ecde67ff6f35ba7654",
"text": "In this paper, we presented a new model for cyber crime investigation procedure which is as follows: readiness phase, consulting with profiler, cyber crime classification and investigation priority decision, damaged cyber crime scene investigation, analysis by crime profiler, suspects tracking, injurer cyber crime scene investigation, suspect summon, cyber crime logical reconstruction, writing report.",
"title": ""
},
{
"docid": "80ece123483d6de02c4e621bdb8eb0fc",
"text": "Resistive-switching memory (RRAM) based on transition metal oxides is a potential candidate for replacing Flash and dynamic random access memory in future generation nodes. Although very promising from the standpoints of scalability and technology, RRAM still has severe drawbacks in terms of understanding and modeling of the resistive-switching mechanism. This paper addresses the modeling of resistive switching in bipolar metal-oxide RRAMs. Reset and set processes are described in terms of voltage-driven ion migration within a conductive filament generated by electroforming. Ion migration is modeled by drift–diffusion equations with Arrhenius-activated diffusivity and mobility. The local temperature and field are derived from the self-consistent solution of carrier and heat conduction equations in a 3-D axis-symmetric geometry. The model accounts for set–reset characteristics, correctly describing the abrupt set and gradual reset transitions and allowing scaling projections for metal-oxide RRAM.",
"title": ""
},
{
"docid": "a60752274fdae6687c713538215d0269",
"text": "Some soluble phosphate salts, heavily used in agriculture as highly effective phosphorus (P) fertilizers, cause surface water eutrophication, while solid phosphates are less effective in supplying the nutrient P. In contrast, synthetic apatite nanoparticles could hypothetically supply sufficient P nutrients to crops but with less mobility in the environment and with less bioavailable P to algae in comparison to the soluble counterparts. Thus, a greenhouse experiment was conducted to assess the fertilizing effect of synthetic apatite nanoparticles on soybean (Glycine max). The particles, prepared using one-step wet chemical method, were spherical in shape with diameters of 15.8 ± 7.4 nm and the chemical composition was pure hydroxyapatite. The data show that application of the nanoparticles increased the growth rate and seed yield by 32.6% and 20.4%, respectively, compared to those of soybeans treated with a regular P fertilizer (Ca(H2PO4)2). Biomass productions were enhanced by 18.2% (above-ground) and 41.2% (below-ground). Using apatite nanoparticles as a new class of P fertilizer can potentially enhance agronomical yield and reduce risks of water eutrophication.",
"title": ""
},
{
"docid": "da4b86329c12b0747c2df55f5a6f6cdb",
"text": "As modern societies become more dependent on IT services, the potential impact both of adversarial cyberattacks and non-adversarial service management mistakes grows. This calls for better cyber situational awareness-decision-makers need to know what is going on. The main focus of this paper is to examine the information elements that need to be collected and included in a common operational picture in order for stakeholders to acquire cyber situational awareness. This problem is addressed through a survey conducted among the participants of a national information assurance exercise conducted in Sweden. Most participants were government officials and employees of commercial companies that operate critical infrastructure. The results give insight into information elements that are perceived as useful, that can be contributed to and required from other organizations, which roles and stakeholders would benefit from certain information, and how the organizations work with creating cyber common operational pictures today. Among findings, it is noteworthy that adversarial behavior is not perceived as interesting, and that the respondents in general focus solely on their own organization.",
"title": ""
},
{
"docid": "d56e3d58fdc0ca09fe7f708c7d12122e",
"text": "About nine billion people in the world are deaf and dumb. The communication between a deaf and hearing person poses to be a serious problem compared to communication between blind and normal visual people. This creates a very little room for them with communication being a fundamental aspect of human life. The blind people can talk freely by means of normal language whereas the deaf-dumb have their own manual-visual language known as sign language. Sign language is a non-verbal form of intercourse which is found amongst deaf communities in world. The languages do not have a common origin and hence difficult to interpret. The project aims to facilitate people by means of a glove based communication interpreter system. The glove is internally equipped with five flex sensors. For each specific gesture, the flex sensor produces a proportional change in resistance. The output from the sensor is analog values it is converted to digital. The processing of these hand gestures is in Arduino Duemilanove Board which is an advance version of the microcontroller. It compares the input signal with predefined voltage levels stored in memory. According to that required output displays on the LCD in the form of text & sound is produced which is stored is memory with the help of speaker. In such a way it is easy for deaf and dumb to communicate with normal people. This system can also be use for the woman security since we are sending a message to authority with the help of smart phone.",
"title": ""
},
{
"docid": "8abbd5e2ab4f419a4ca05277a8b1b6a5",
"text": "This paper presents an innovative broadband millimeter-wave single balanced diode mixer that makes use of a substrate integrated waveguide (SIW)-based 180 hybrid. It has low conversion loss of less than 10 dB, excellent linearity, and high port-to-port isolations over a wide frequency range of 20 to 26 GHz. The proposed mixer has advantages over previously reported millimeter-wave mixer structures judging from a series of aspects such as cost, ease of fabrication, planar construction, and broadband performance. Furthermore, a receiver front-end that integrates a high-performance SIW slot-array antenna and our proposed mixer is introduced. Based on our proposed receiver front-end structure, a K-band wireless communication system with M-ary quadrature amplitude modulation is developed and demonstrated for line-of-sight channels. Excellent overall error vector magnitude performance has been obtained.",
"title": ""
},
{
"docid": "9ed3b0144df3dfa88b9bfa61ee31f40a",
"text": "OBJECTIVE\nTo determine the frequency of early relapse after achieving good initial correction in children who were on clubfoot abduction brace.\n\n\nMETHODS\nThe cross-sectional study was conducted at the Jinnah Postgraduate Medical Centre, Karachi, and included parents of children of either gender in the age range of 6 months to 3years with idiopathic clubfoot deformities who had undergone Ponseti treatment between September 2012 and June 2013, and who were on maintenance brace when the data was collected from December 2013 to March 2014. Parents of patients with follow-up duration in brace less than six months and those with syndromic clubfoot deformity were excluded. The interviews were taken through a purposive designed questionnaire. SPSS 16 was used for data analysis.\n\n\nRESULTS\nThe study included parents of 120 patients. Of them, 95(79.2%) behaved with good compliance on Denis Browne Splint, 10(8.3%) were fair and 15(12.5%)showed poor compliance. Major reason for poor and non-compliance was unaffordability of time and cost for regular follow-up. Besides, 20(16.67%) had inconsistent use due to delay inre-procurement of Foot Abduction Braceonce the child had outgrown the shoe. Only 4(3.33%) talked of cultural barriers and conflict of interest between the parents. Early relapse was observed in 23(19.16%) patients and 6(5%) of them responded to additional treatment and were put back on brace treatment; 13(10.83%) had minor relapse with forefoot varus, without functional disability, and the remaining 4(3.33%) had major relapse requiring extensive surgery. Overall success was recorded in 116(96.67%) cases.\n\n\nCONCLUSIONS\nThe positioning of shoes on abduction brace bar, comfort in shoes, affordability, initial and subsequent delay in procurement of new shoes once the child's feet overgrew the shoe, were the four containable factors on the part of Ponseti practitioner.",
"title": ""
},
{
"docid": "b6da9901abb01572b631085f97fdd1d4",
"text": "Protection against high voltage-standing-wave-ratios (VSWR) is of great importance in many power amplifier applications. Despite excellent thermal and voltage breakdown properties even gallium nitride devices may need such measures. This work focuses on the timing aspect when using barium-strontium-titanate (BST) varactors to limit power dissipation and gate current. A power amplifier was designed and fabricated, implementing a varactor and a GaN-based voltage switch as varactor modulator for VSWR protection. The response time until the protection is effective was measured by switching the voltages at varactor, gate and drain of the transistor, respectively. It was found that it takes a minimum of 50 μs for the power amplifier to reach a safe condition. Pure gate pinch-off or drain voltage reduction solutions were slower and bias-network dependent. For a thick-film BST MIM varactor, optimized for speed and power, a switching time of 160 ns was achieved.",
"title": ""
},
{
"docid": "afa7d0e5c19fea77e1bcb4fce39fbc93",
"text": "Highly Autonomous Driving (HAD) systems rely on deep neural networks for the visual perception of the driving environment. Such networks are train on large manually annotated databases. In this work, a semi-parametric approach to one-shot learning is proposed, with the aim of bypassing the manual annotation step required for training perceptions systems used in autonomous driving. The proposed generative framework, coined Generative One-Shot Learning (GOL), takes as input single one-shot objects, or generic patterns, and a small set of so-called regularization samples used to drive the generative process. New synthetic data is generated as Pareto optimal solutions from one-shot objects using a set of generalization functions built into a generalization generator. GOL has been evaluated on environment perception challenges encountered in autonomous vision.",
"title": ""
},
{
"docid": "1e32662301070a085ce4d3244673c2cd",
"text": "Conventional automatic speech recognition (ASR) based on a hidden Markov model (HMM)/deep neural network (DNN) is a very complicated system consisting of various modules such as acoustic, lexicon, and language models. It also requires linguistic resources, such as a pronunciation dictionary, tokenization, and phonetic context-dependency trees. On the other hand, end-to-end ASR has become a popular alternative to greatly simplify the model-building process of conventional ASR systems by representing complicated modules with a single deep network architecture, and by replacing the use of linguistic resources with a data-driven learning method. There are two major types of end-to-end architectures for ASR; attention-based methods use an attention mechanism to perform alignment between acoustic frames and recognized symbols, and connectionist temporal classification (CTC) uses Markov assumptions to efficiently solve sequential problems by dynamic programming. This paper proposes hybrid CTC/attention end-to-end ASR, which effectively utilizes the advantages of both architectures in training and decoding. During training, we employ the multiobjective learning framework to improve robustness and achieve fast convergence. During decoding, we perform joint decoding by combining both attention-based and CTC scores in a one-pass beam search algorithm to further eliminate irregular alignments. Experiments with English (WSJ and CHiME-4) tasks demonstrate the effectiveness of the proposed multiobjective learning over both the CTC and attention-based encoder–decoder baselines. Moreover, the proposed method is applied to two large-scale ASR benchmarks (spontaneous Japanese and Mandarin Chinese), and exhibits performance that is comparable to conventional DNN/HMM ASR systems based on the advantages of both multiobjective learning and joint decoding without linguistic resources.",
"title": ""
},
{
"docid": "4ef9dbd33461abe61f0ebeee29b462b4",
"text": "A comparison between corporate fed microstrip antenna array (MSAA) and an electromagnetically coupled microstrip antenna array (EMCP-MSAA) at Ka-band is presented. A low loss feed network is proposed based on the analysis of different line widths used in the feed network. Gain improvement of 25% (1.5 dB) is achieved using the proposed feed network in 2×2 EMCP-MSAA. A 8×8 MSAA has been designed and fabricated at Ka-band. The measured bandwidth is 4.3% with gain of 24dB. Bandwidth enhancement is done by designing and fabricating EMCP-MSAA to give bandwidth of 17% for 8×8 array.",
"title": ""
},
{
"docid": "412278d78888fc4ee28c666133c9bd24",
"text": "A future Internet of Things (IoT) system will connect the physical world into cyberspace everywhere and everything via billions of smart objects. On the one hand, IoT devices are physically connected via communication networks. The service oriented architecture (SOA) can provide interoperability among heterogeneous IoT devices in physical networks. On the other hand, IoT devices are virtually connected via social networks. In this paper we propose adaptive and scalable trust management to support service composition applications in SOA-based IoT systems. We develop a technique based on distributed collaborative filtering to select feedback using similarity rating of friendship, social contact, and community of interest relationships as the filter. Further we develop a novel adaptive filtering technique to determine the best way to combine direct trust and indirect trust dynamically to minimize convergence time and trust estimation bias in the presence of malicious nodes performing opportunistic service and collusion attacks. For scalability, we consider a design by which a capacity-limited node only keeps trust information of a subset of nodes of interest and performs minimum computation to update trust. We demonstrate the effectiveness of our proposed trust management through service composition application scenarios with a comparative performance analysis against EigenTrust and PeerTrust.",
"title": ""
},
{
"docid": "d57bd5c6426ce818328096c26f06b901",
"text": "Introduction Reflexivity is a curious term with various meanings. Finding a definition of reflexivity that demonstrates what it means and how it is achieved is difficult (Colbourne and Sque 2004). Moreover, writings on reflexivity have not been transparent in terms of the difficulties, practicalities and methods of the process (Mauthner and Doucet 2003). Nevertheless, it is argued that an attempt be made to gain ‘some kind of intellectual handle’ on reflexivity in order to make use of it as a guiding standard (Freshwater and Rolfe 2001). The role of reflexivity in the many and varied qualitative methodologies is significant. It is therefore a concept of particular relevance to nursing as qualitative methodologies play a principal function in nursing enquiry. Reflexivity assumes a pivotal role in feminist research (King 1994). It is also paramount in participatory action research (Robertson 2000), ethnographies, and hermeneutic and post-structural approaches (Koch and Harrington 1998). Furthermore, it plays an integral part in medical case study research reflexivity epistemological critical feminist ▲ ▲ ▲ ▲ k e y w o rd s",
"title": ""
},
{
"docid": "73a656b220c8f91ad1b2e2b4dbd691a9",
"text": "Music recommendation systems are well explored and commonly used but are normally based on manually tagged parameters and simple similarity calculation. Our project proposes a recommendation system based on emotional computing, automatic classification and feature extraction, which recommends music based on the emotion expressed by the song.\n To achieve this goal a set of features is extracted from the song, including the MFCC (mel-frequency cepstral coefficients) following the works of McKinney et al. [6] and a machine learning system is trained on a set of 424 songs, which are categorized by emotion. The categorization of the song is performed manually by multiple persons to avoid error. The emotional categorization is performed using a modified version of the Tellegen-Watson-Clark emotion model [7], as proposed by Trohidis et al. [8]. The System is intended as desktop application that can reliably determine similarities between the main emotion in multiple pieces of music, allowing the user to choose music by emotion. We report our findings below.",
"title": ""
},
{
"docid": "a9a8baf6dfb2526d75b0d7e49bb9b138",
"text": "Many classification problems require decisions among a large number of competing classes. These tasks, however, are not handled well by general purpose learning methods and are usually addressed in an ad-hoc fashion. We suggest a general approach – a sequential learning model that utilizes classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidates set. Some theoretical and computational properties of the model are discussed and we argue that these are important in NLP-like domains. The advantages of the model are illustrated in an experiment in partof-speech tagging.",
"title": ""
},
{
"docid": "dba3434c600ed7ddbb944f0a3adb1ba0",
"text": "Although acoustic waves are the most versatile and widely used physical layer technology for underwater wireless communication networks (UWCNs), they are adversely affected by ambient noise, multipath propagation, and fading. The large propagation delays, low bandwidth, and high bit error rates of the underwater acoustic channel hinder communication as well. These operational limits call for complementary technologies or communication alternatives when the acoustic channel is severely degraded. Magnetic induction (MI) is a promising technique for UWCNs that is not affected by large propagation delays, multipath propagation, and fading. In this paper, the MI communication channel has been modeled. Its propagation characteristics have been compared to the electromagnetic and acoustic communication systems through theoretical analysis and numerical evaluations. The results prove the feasibility of MI communication in underwater environments. The MI waveguide technique is developed to reduce path loss. The communication range between source and destination is considerably extended to hundreds of meters in fresh water due to its superior bit error rate performance.",
"title": ""
}
] | scidocsrr |
01c29580d62cd06b56382bc173e828dc | Linked Data Indexing of Distributed Ledgers | [
{
"docid": "ad1cf5892f7737944ba23cd2e44a7150",
"text": "The ‘blockchain’ is the core mechanism for the Bitcoin digital payment system. It embraces a set of inter-related technologies: the blockchain itself as a distributed record of digital events, the distributed consensus method to agree whether a new block is legitimate, automated smart contracts, and the data structure associated with each block. We propose a permanent distributed record of intellectual effort and associated reputational reward, based on the blockchain that instantiates and democratises educational reputation beyond the academic community. We are undertaking initial trials of a private blockchain or storing educational records, drawing also on our previous research into reputation management for educational systems.",
"title": ""
},
{
"docid": "3a1cc60b1b6729e06f178ab62d19c59c",
"text": "The Web 2.0 wave brings, among other aspects, the Programmable Web:increasing numbers of Web sites provide machine-oriented APIs and Web services. However, most APIs are only described with text in HTML documents. The lack of machine-readable API descriptions affects the feasibility of tool support for developers who use these services. We propose a microformat called hRESTS (HTML for RESTful Services) for machine-readable descriptions of Web APIs, backed by a simple service model. The hRESTS microformat describes main aspects of services, such as operations, inputs and outputs. We also present two extensions of hRESTS:SA-REST, which captures the facets of public APIs important for mashup developers, and MicroWSMO, which provides support for semantic automation.",
"title": ""
}
] | [
{
"docid": "c551e19208e367cc5546a3d46f7534c8",
"text": "We propose a novel approach for solving the approximate nearest neighbor search problem in arbitrary metric spaces. The distinctive feature of our approach is that we can incrementally build a non-hierarchical distributed structure for given metric space data with a logarithmic complexity scaling on the size of the structure and adjustable accuracy probabilistic nearest neighbor queries. The structure is based on a small world graph with vertices corresponding to the stored elements, edges for links between them and the greedy algorithm as base algorithm for searching. Both search and addition algorithms require only local information from the structure. The performed simulation for data in the Euclidian space shows that the structure built using the proposed algorithm has navigable small world properties with logarithmic search complexity at fixed accuracy and has weak (power law) scalability with the dimensionality of the stored data.",
"title": ""
},
{
"docid": "4e7106a78dcf6995090669b9a25c9551",
"text": "In this paper partial discharges (PD) in disc-shaped cavities in polycarbonate are measured at variable frequency (0.01-100 Hz) of the applied voltage. The advantage of PD measurements at variable frequency is that more information about the insulation system may be extracted than from traditional PD measurements at a single frequency (usually 50/60 Hz). The PD activity in the cavity is seen to depend on the applied frequency. Moreover, the PD frequency dependence changes with the applied voltage amplitude, the cavity diameter, and the cavity location (insulated or electrode bounded). It is suggested that the PD frequency dependence is governed by the statistical time lag of PD and the surface charge decay in the cavity. This is the first of two papers addressing the frequency dependence of PD in a cavity. In the second paper a physical model of PD in a cavity at variable applied frequency is presented.",
"title": ""
},
{
"docid": "52a1f1de8db1a9aca14cb4df2395868b",
"text": "We propose a new approach to localizing handle-like grasp affordances in 3-D point clouds. The main idea is to identify a set of sufficient geometric conditions for the existence of a grasp affordance and to search the point cloud for neighborhoods that satisfy these conditions. Our goal is not to find all possible grasp affordances, but instead to develop a method of localizing important types of grasp affordances quickly and reliably. The strength of this method relative to other current approaches is that it is very practical: it can have good precision/recall for the types of affordances under consideration, it runs in real-time, and it is easy to adapt to different robots and operating scenarios. We validate with a set of experiments where the approach is used to enable the Rethink Baxter robot to localize and grasp unmodelled objects.",
"title": ""
},
{
"docid": "16a12f3c626e2e749b49e99f397f3791",
"text": "We study interactive situations in which players are boundedly rational. Each player, rather than optimizing given a belief about the other players' behavior, as in the theory of Nash equilibrium, uses the following choice procedure. She rst associates one consequence with each of her actions by sampling (literally or virtually) each of her actions once. Then she chooses the action that has the best consequence. We deene a notion of equilibrium for such situations and study its properties. (JEL C72) Economists' interest in game theory was prompted by dissatisfaction with the assumption underlying the notion of competitive equilibrium that each economic agent ignores other agents' actions when making choices. Game theory analyzes the interaction of agents who \\think strategically\", making their decisions rationally after forming beliefs about their opponents' moves, beliefs that are based on an analysis of the opponents' interests.",
"title": ""
},
{
"docid": "9f21af3bc0955dcd9a05898f943f54ad",
"text": "Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intraand inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In two of our three models, the results are asymptotically best-possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.",
"title": ""
},
{
"docid": "68706762443e3e340c4ac270f7fcf22e",
"text": "MOTIVATION\nAn EXCEL template has been developed for the calculation of enzyme kinetic parameters by non-linear regression techniques. The tool is accurate, inexpensive, as well as easy to use and modify.\n\n\nAVAILABILITY\nThe program is available from http://www.ebi.ac.uk/biocat/biocat.html\n\n\nCONTACT\nagustin. [email protected]",
"title": ""
},
{
"docid": "af961b3b977b37f69156c4d653b745e7",
"text": "The move to Internet news publishing is the latest in a series of technological shifts which have required journalists not merely to adapt their daily practice but which have also at least in the view of some – recast their role in society. For over a decade, proponents of the networked society as a new way of life have argued that responsibility for news selection and production will shift from publishers, editors and reporters to individual consumers, as in the scenario offered by Nicholas Negroponte:",
"title": ""
},
{
"docid": "7def0b8cfb68a8190184840c5c6e7e2f",
"text": "Fast and accurate localization of software defects continues to be a difficult problem since defects can emanate from a large variety of sources and can often be intricate in nature. In this paper, we show how version histories of a software project can be used to estimate a prior probability distribution for defect proneness associated with the files in a given version of the project. Subsequently, these priors are used in an IR (Information Retrieval) framework to determine the posterior probability of a file being the cause of a bug. We first present two models to estimate the priors, one from the defect histories and the other from the modification histories, with both types of histories as stored in the versioning tools. Referring to these as the base models, we then extend them by incorporating a temporal decay into the estimation of the priors. We show that by just including the base models, the mean average precision (MAP) for bug localization improves by as much as 30%. And when we also factor in the time decay in the estimates of the priors, the improvements in MAP can be as large as 80%.",
"title": ""
},
{
"docid": "d95fb46b3857b55602af2cf271300f5a",
"text": "This paper proposes a new active interphase transformer for 24-pulse diode rectifier. The proposed scheme injects a compensation current into the secondary winding of either of the two first-stage interphase transformers. For only one of the first-stage interphase transformers being active, the inverter conducted the injecting current is with a lower kVA rating [1.26% pu (Po)] compared to conventional active interphase transformers. Moreover, the proposal scheme draws near sinusoidal input currents and the simulated and the experimental total harmonic distortion of overall line currents are only 1.88% and 2.27% respectively. When the inverter malfunctions, the input line current still can keep in the conventional 24-pulse situation. A digital-signal-processor (DSP) based digital controller is employed to calculate the desired compensation current and deals with the trigger signals needed for the inverter. Moreover, a 6kW prototype is built for test. Both simulation and experimental results demonstrate the validity of the proposed scheme.",
"title": ""
},
{
"docid": "d3e18816fd1236a1b9988045d0ae5f6e",
"text": "This paper discussed a fast dynamic braking method of three phase induction motor. This braking method consists of two conventional braking methods i.e. direct current injection braking and capacitor self excitation braking. Those mathods were arranged in a such grading time to become a multistage dynamic braking. Simulation was done using MATLAB/Simulink software for design and predicting the behaviour. The results showed that the propossed method gave faster braking than the other two methods carried out separately.",
"title": ""
},
{
"docid": "668953b5f6fbfc440bb6f3a91ee7d06b",
"text": "Proof of Work (PoW) powered blockchains currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. Although the security provisions of Bitcoin have been thoroughly analysed, the security guarantees of variant (forked) PoW blockchains (which were instantiated with different parameters) have not received much attention in the literature. This opens the question whether existing security analysis of Bitcoin's PoW applies to other implementations which have been instantiated with different consensus and/or network parameters.\n In this paper, we introduce a novel quantitative framework to analyse the security and performance implications of various consensus and network parameters of PoW blockchains. Based on our framework, we devise optimal adversarial strategies for double-spending and selfish mining while taking into account real world constraints such as network propagation, different block sizes, block generation intervals, information propagation mechanism, and the impact of eclipse attacks. Our framework therefore allows us to capture existing PoW-based deployments as well as PoW blockchain variants that are instantiated with different parameters, and to objectively compare the tradeoffs between their performance and security provisions.",
"title": ""
},
{
"docid": "31c0dc8f0a839da9260bb9876f635702",
"text": "The application of a recently developed broadband beamformer to distinguish audio signals received from different directions is experimentally tested. The beamformer combines spatial and temporal subsampling using a nested array and multirate techniques which leads to the same region of support in the frequency domain for all subbands. This allows using the same beamformer for all subbands. The experimental set-up is presented and the recorded signals are analyzed. Results indicate that the proposed approach can be used to distinguish plane waves propagating with different direction of arrivals.",
"title": ""
},
{
"docid": "b83e784d3ec4afcf8f6ed49dbe90e157",
"text": "In this paper, the impact of an increased number of layers on the performance of axial flux permanent magnet synchronous machines (AFPMSMs) is studied. The studied parameters are the inductance, terminal voltages, PM losses, iron losses, the mean value of torque, and the ripple torque. It is shown that increasing the number of layers reduces the fundamental winding factor. In consequence, the rated torque for the same current reduces. However, the reduction of harmonics associated with a higher number of layers reduces the ripple torque, PM losses, and iron losses. Besides studying the performance of the AFPMSMs for the rated conditions, the study is broadened for the field weakening (FW) region. During the FW region, the flux of the PMs is weakened by an injection of a reversible d-axis current. This keeps the terminal voltage of the machine fixed at the rated value. The inductance plays an important role in the FW study. A complete study for the FW shows that the two layer winding has the optimum performance compared to machines with an other number of winding layers.",
"title": ""
},
{
"docid": "5d866d630f78bb81b5ce8d3dae2521ee",
"text": "In present-day high-performance electronic components, the generated heat loads result in unacceptably high junction temperatures and reduced component lifetimes. Thermoelectric modules can, in principle, enhance heat removal and reduce the temperatures of such electronic devices. However, state-of-the-art bulk thermoelectric modules have a maximum cooling flux qmax of only about 10 W cm(-2), while state-of-the art commercial thin-film modules have a qmax <100 W cm(-2). Such flux values are insufficient for thermal management of modern high-power devices. Here we show that cooling fluxes of 258 W cm(-2) can be achieved in thin-film Bi2Te3-based superlattice thermoelectric modules. These devices utilize a p-type Sb2Te3/Bi2Te3 superlattice and n-type δ-doped Bi2Te3-xSex, both of which are grown heteroepitaxially using metalorganic chemical vapour deposition. We anticipate that the demonstration of these high-cooling-flux modules will have far-reaching impacts in diverse applications, such as advanced computer processors, radio-frequency power devices, quantum cascade lasers and DNA micro-arrays.",
"title": ""
},
{
"docid": "3587732b8d855eb8a941edeb58c68fe3",
"text": "In this paper, we present a feature based approach for monocular scene reconstruction based on extended Kalman filters (EKF). Our method processes a sequence of images taken by a single camera mounted frontal on a mobile robot. Using different techniques, we are able to produce a precise reconstruction that is free from outliers and therefore can be used for reliable obstacle detection. In real-world field-tests we show that the presented approach is able to detect obstacles that are not seen by other sensors, such as laser-range-finder s. Furthermore, we show that visual obstacle detection combined with a laser-range-finder can increase the detection rate of obstacles considerably allowing the autonomous use of mobile robots in complex public environments.",
"title": ""
},
{
"docid": "0105247ab487c2d06f3ffa0d00d4b4f9",
"text": "Many distributed storage systems achieve high data access throughput via partitioning and replication, each system with its own advantages and tradeoffs. In order to achieve high scalability, however, today's systems generally reduce transactional support, disallowing single transactions from spanning multiple partitions. Calvin is a practical transaction scheduling and data replication layer that uses a deterministic ordering guarantee to significantly reduce the normally prohibitive contention costs associated with distributed transactions. Unlike previous deterministic database system prototypes, Calvin supports disk-based storage, scales near-linearly on a cluster of commodity machines, and has no single point of failure. By replicating transaction inputs rather than effects, Calvin is also able to support multiple consistency levels---including Paxos-based strong consistency across geographically distant replicas---at no cost to transactional throughput.",
"title": ""
},
{
"docid": "8f34145117004d2a66123a4b6363d853",
"text": "Our study examined the determinants of ERP knowledge transfer from implementation consultants (ICs) to key users (KUs), and vice versa. An integrated model was developed, positing that knowledge transfer was influenced by the knowledge-, source-, recipient-, and transfer context-related aspects. Data to test this model were collected from 85 ERP-implementation projects of firms that were mainly located in Zhejiang province, China. The results of the analysis demonstrated that all four aspects had a significant influence on ERP knowledge transfer. Furthermore, the results revealed the mediator role of the transfer activities and arduous relationship between ICs and KUs. The influence on knowledge transfer from the source’s willingness to transfer and the recipient’s willingness to accept knowledge was fully mediated by transfer activities, whereas the influence on knowledge transfer from the recipient’s ability to absorb knowledge was only partially mediated by transfer activities. The influence on knowledge transfer from the communication capability (including encoding and decoding competence) was fully mediated by arduous relationship. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4260077a3a48f3ed2a71208e2dd68924",
"text": "Algorithmic image-based diagnosis and prognosis of neurodegenerative diseases on longitudinal data has drawn great interest from computer vision researchers. The current state-of-the-art models for many image classification tasks are based on the Convolutional Neural Networks (CNN). However, a key challenge in applying CNN to biological problems is that the available labeled training samples are very limited. Another issue for CNN to be applied in computer aided diagnosis applications is that to achieve better diagnosis and prognosis accuracy, one usually has to deal with the longitudinal dataset, i.e., the dataset of images scanned at different time points. Here we argue that an enhanced CNN model with transfer learning for the joint analysis of tasks from multiple time points or regions of interests may have a potential to improve the accuracy of computer aided diagnosis. To reach this goal, we innovate a CNN based deep learning multi-task dictionary learning framework to address the above challenges. Firstly, we pretrain CNN on the ImageNet dataset and transfer the knowledge from the pre-trained model to the medical imaging progression representation, generating the features for different tasks. Then, we propose a novel unsupervised learning method, termed Multi-task Stochastic Coordinate Coding (MSCC), for learning different tasks by using shared and individual dictionaries and generating the sparse features required to predict the future cognitive clinical scores. We apply our new model in a publicly available neuroimaging cohort to predict clinical measures with two different feature sets and compare them with seven other state-of-theart methods. The experimental results show our proposed method achieved superior results.",
"title": ""
},
{
"docid": "081e0ad6b324e857cb6d6a5bc09bcbfd",
"text": "This paper proposes a new finger-vein recognition system that uses a binary robust invariant elementary feature from accelerated segment test feature points and an adaptive thresholding strategy. Subsequently, the proposed a multi-image quality assessments (MQA) are applied to conduct a second stage verification. As oppose to other studies, the region of interest is directly identified using a range of normalized feature point area, which reduces the complexity of pre-processing. This recognition structure allows an efficient feature points matching using a robust feature and rigorous verification using the MQA process. As a result, this method not only reduces the system computation time, comparisons against former relevant studies demonstrate the superiority of the proposed method.",
"title": ""
},
{
"docid": "572cdf84eebfe5bf28d137ce5c4179d4",
"text": "Stock market decision making is a very challenging and difficult task of financial data prediction. Prediction about stock market with high accuracy movement yield profit for investors of the stocks. Because of the complexity of stock market financial data, development of efficient models for prediction decision is very difficult, and it must be accurate. This study attempted to develop models for prediction of the stock market and to decide whether to buy/hold the stock using data mining and machine learning techniques. The classification techniques used in these models are naive bayes and random forest classification. Technical indicators are calculated from the stock prices based on time-line data and it is used as inputs of the proposed prediction models. 10 years of stock market data has been used for prediction. Based on the data set, these models are capable to generate buy/hold signal for stock market as a output. The main goal of this paper is to generate decision as per user’s requirement like amount to be invested, time duration for investment, minimum profit, maximum loss using machine learning and data analysis techniques.",
"title": ""
}
] | scidocsrr |
796d0369a1cbef976cd1d5a5d2c86987 | Actuator design for high force proprioceptive control in fast legged locomotion | [
{
"docid": "fa7da02d554957f92364d4b37219feba",
"text": "This paper shows mechanisms for artificial finger based on a planetary gear system (PGS). Using the PGS as a transmitter provides an under-actuated system for driving three joints of a finger with back-drivability that is crucial characteristics for fingers as an end-effector when it interacts with external environment. This paper also shows the artificial finger employed with the originally developed mechanism called “double planetary gear system” (DPGS). The DPGS provides not only back-drivable and under-actuated flexion-extension of the three joints of a finger, which is identical to the former, but also adduction-abduction of the MP joint. Both of the above finger mechanisms are inherently safe due to being back-drivable with no electric device or sensor in the finger part. They are also rigorously solvable in kinematics and kinetics as shown in this paper.",
"title": ""
},
{
"docid": "81b03da5e09cb1ac733c966b33d0acb1",
"text": "Abstrud In the last two years a third generation of torque-controlled light weight robots has been developed in DLR‘s robotics and mechatronics lab which is based on all the experiences that have been made with the first two generations. It aims at reaching the limits of what seems achievable with present day technologies not only with respect to light-weight, but also with respect to minimal power consumption and losses. One of the main gaps we tried to close in version III was the development of a new, robot-dedicated high energy motor designed with the best available techniques of concurrent engineering, and the renewed efforts to save weight in the links by using ultralight carbon fibres.",
"title": ""
}
] | [
{
"docid": "b9f774ccd37e0bf0e399dd2d986f258d",
"text": "Predicting the final state of a running process, the remaining time to completion or the next activity of a running process are important aspects of runtime process management. Runtime management requires the ability to identify processes that are at risk of not meeting certain criteria in order to offer case managers decision information for timely intervention. This in turn requires accurate prediction models for process outcomes and for the next process event, based on runtime information available at the prediction and decision point. In this paper, we describe an initial application of deep learning with recurrent neural networks to the problem of predicting the next process event. This is both a novel method in process prediction, which has previously relied on explicit process models in the form of Hidden Markov Models (HMM) or annotated transition systems, and also a novel application for deep learning methods.",
"title": ""
},
{
"docid": "a8477be508fab67456c5f6b61d3642b5",
"text": "Although three-phase permanent magnet (PM) motors are quite common in industry, multi-phase PM motors are used in special applications where high power and redundancy are required. Multi-phase PM motors offer higher torque/power density than conventional three-phase PM motors. In this paper, a novel multi-phase consequent pole PM (CPPM) synchronous motor is proposed. The constant power–speed range of the proposed motor is quite wide as opposed to conventional PM motors. The design and the detailed finite-element analysis of the proposed nine-phase CPPM motor and performance comparison with a nine-phase surface mounted PM motor are completed to illustrate the benefits of the proposed motor.",
"title": ""
},
{
"docid": "a95761b5a67a07d02547c542ddc7e677",
"text": "This paper examines the connection between the legal environment and financial development, and then traces this link through to long-run economic growth. Countries with legal and regulatory systems that (1) give a high priority to creditors receiving the full present value of their claims on corporations, (2) enforce contracts effectively, and (3) promote comprehensive and accurate financial reporting by corporations have better-developed financial intermediaries. The data also indicate that the exogenous component of financial intermediary development – the component of financial intermediary development defined by the legal and regulatory environment – is positively associated with economic growth. * Department of Economics, 114 Rouss Hall, University of Virginia, Charlottesville, VA 22903-3288; [email protected]. I thank Thorsten Beck, Maria Carkovic, Bill Easterly, Lant Pritchett, Andrei Shleifer, and seminar participants at the Board of Governors of the Federal Reserve System, the University of Virginia, and the World Bank for helpful comments.",
"title": ""
},
{
"docid": "6fb006066fa1a25ae348037aa1ee7be3",
"text": "Reducing redundancy in data representation leads to decreased data storage requirements and lower costs for data communication.",
"title": ""
},
{
"docid": "47df1bd26f99313cfcf82430cb98d442",
"text": "To manage supply chain efficiently, e-business organizations need to understand their sales effectively. Previous research has shown that product review plays an important role in influencing sales performance, especially review volume and rating. However, limited attention has been paid to understand how other factors moderate the effect of product review on online sales. This study aims to confirm the importance of review volume and rating on improving sales performance, and further examine the moderating roles of product category, answered questions, discount and review usefulness in such relationships. By analyzing 2939 records of data extracted from Amazon.com using a big data architecture, it is found that review volume and rating have stronger influence on sales rank for search product than for experience product. Also, review usefulness significantly moderates the effects of review volume and rating on product sales rank. In addition, the relationship between review volume and sales rank is significantly moderated by both answered questions and discount. However, answered questions and discount do not have significant moderation effect on the relationship between review rating and sales rank. The findings expand previous literature by confirming important interactions between customer review features and other factors, and the findings provide practical guidelines to manage e-businesses. This study also explains a big data architecture and illustrates the use of big data technologies in testing theoretical",
"title": ""
},
{
"docid": "2342c92f91c243474a53323a476ae3d9",
"text": "Gesture recognition has emerged recently as a promising application in our daily lives. Owing to low cost, prevalent availability, and structural simplicity, RFID shall become a popular technology for gesture recognition. However, the performance of existing RFID-based gesture recognition systems is constrained by unfavorable intrusiveness to users, requiring users to attach tags on their bodies. To overcome this, we propose GRfid, a novel device-free gesture recognition system based on phase information output by COTS RFID devices. Our work stems from the key insight that the RFID phase information is capable of capturing the spatial features of various gestures with low-cost commodity hardware. In GRfid, after data are collected by hardware, we process the data by a sequence of functional blocks, namely data preprocessing, gesture detection, profiles training, and gesture recognition, all of which are well-designed to achieve high performance in gesture recognition. We have implemented GRfid with a commercial RFID reader and multiple tags, and conducted extensive experiments in different scenarios to evaluate its performance. The results demonstrate that GRfid can achieve an average recognition accuracy of <inline-formula> <tex-math notation=\"LaTeX\">$96.5$</tex-math><alternatives><inline-graphic xlink:href=\"wu-ieq1-2549518.gif\"/> </alternatives></inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$92.8$</tex-math><alternatives> <inline-graphic xlink:href=\"wu-ieq2-2549518.gif\"/></alternatives></inline-formula> percent in the identical-position and diverse-positions scenario, respectively. Moreover, experiment results show that GRfid is robust against environmental interference and tag orientations.",
"title": ""
},
{
"docid": "02ea5b61b22d5af1b9362ca46ead0dea",
"text": "This paper describes a student project examining mechanisms with which to attack Bluetooth-enabled devices. The paper briefly describes the protocol architecture of Bluetooth and the Java interface that programmers can use to connect to Bluetooth communication services. Several types of attacks are described, along with a detailed example of two attack tools, Bloover II and BT Info.",
"title": ""
},
{
"docid": "ab132902ce21c35d4b5befb8ff2898b5",
"text": "Skip-Gram Negative Sampling (SGNS) word embedding model, well known by its implementation in “word2vec” software, is usually optimized by stochastic gradient descent. It can be shown that optimizing for SGNS objective can be viewed as an optimization problem of searching for a good matrix with the low-rank constraint. The most standard way to solve this type of problems is to apply Riemannian optimization framework to optimize the SGNS objective over the manifold of required low-rank matrices. In this paper, we propose an algorithm that optimizes SGNS objective using Riemannian optimization and demonstrates its superiority over popular competitors, such as the original method to train SGNS and SVD over SPPMI matrix.",
"title": ""
},
{
"docid": "c9be0a4079800f173cf9553b9a69581c",
"text": "A 500W classical three-way Doherty power amplifier (DPA) with LDMOS devices at 1.8GHz is presented. Optimized device ratio is selected to achieve maximum efficiency as well as linearity. With a simple passive input driving network implementation, the demonstrator exhibits more than 55% efficiency with 9.9PAR WCDMA signal from 1805MHz-1880MHz. It can be linearized at -60dBc level with 20MHz LTE signal at an average output power of 49dBm.",
"title": ""
},
{
"docid": "b7b664d1749b61f2f423d7080a240a60",
"text": "The research challenge addressed in this paper is to devise effective techniques for identifying task-based sessions, i.e. sets of possibly non contiguous queries issued by the user of a Web Search Engine for carrying out a given task. In order to evaluate and compare different approaches, we built, by means of a manual labeling process, a ground-truth where the queries of a given query log have been grouped in tasks. Our analysis of this ground-truth shows that users tend to perform more than one task at the same time, since about 75% of the submitted queries involve a multi-tasking activity. We formally define the Task-based Session Discovery Problem (TSDP) as the problem of best approximating the manually annotated tasks, and we propose several variants of well known clustering algorithms, as well as a novel efficient heuristic algorithm, specifically tuned for solving the TSDP. These algorithms also exploit the collaborative knowledge collected by Wiktionary and Wikipedia for detecting query pairs that are not similar from a lexical content point of view, but actually semantically related. The proposed algorithms have been evaluated on the above ground-truth, and are shown to perform better than state-of-the-art approaches, because they effectively take into account the multi-tasking behavior of users.",
"title": ""
},
{
"docid": "9fcda5fc485df8b69aa7bab806d95f84",
"text": "DoS attacks on sensor measurements used for industrial control can cause the controller of the process to use stale data. If the DoS attack is not timed properly, the use of stale data by the controller will have limited impact on the process; however, if the attacker is able to launch the DoS attack at the correct time, the use of stale data can cause the controller to drive the system to an unsafe state.\n Understanding the timing parameters of the physical processes does not only allow an attacker to construct a successful attack but also to maximize its impact (damage to the system). In this paper we use Tennessee Eastman challenge process to study an attacker that has to identify (in realtime) the optimal timing to launch a DoS attack. The choice of time to begin an attack is forward-looking, requiring the attacker to consider each opportunity against the possibility of a better opportunity in the future, and this lends itself to the theory of optimal stopping problems. In particular we study the applicability of the Best Choice Problem (also known as the Secretary Problem), quickest change detection, and statistical process outliers. Our analysis can be used to identify specific sensor measurements that need to be protected, and the time that security or safety teams required to respond to attacks, before they cause major damage.",
"title": ""
},
{
"docid": "9c5535f218f6228ba6b2a8e5fdf93371",
"text": "Recent analyses of organizational change suggest a growing concern with the tempo of change, understood as the characteristic rate, rhythm, or pattern of work or activity. Episodic change is contrasted with continuous change on the basis of implied metaphors of organizing, analytic frameworks, ideal organizations, intervention theories, and roles for change agents. Episodic change follows the sequence unfreeze-transition-refreeze, whereas continuous change follows the sequence freeze-rebalance-unfreeze. Conceptualizations of inertia are seen to underlie the choice to view change as episodic or continuous.",
"title": ""
},
{
"docid": "064aba7f2bd824408bd94167da5d7b3a",
"text": "Online comments submitted by readers of news articles can provide valuable feedback and critique, personal views and perspectives, and opportunities for discussion. The varying quality of these comments necessitates that publishers remove the low quality ones, but there is also a growing awareness that by identifying and highlighting high quality contributions this can promote the general quality of the community. In this paper we take a user-centered design approach towards developing a system, CommentIQ, which supports comment moderators in interactively identifying high quality comments using a combination of comment analytic scores as well as visualizations and flexible UI components. We evaluated this system with professional comment moderators working at local and national news outlets and provide insights into the utility and appropriateness of features for journalistic tasks, as well as how the system may enable or transform journalistic practices around online comments.",
"title": ""
},
{
"docid": "bcbbc8913330378af7c986549ab4bb30",
"text": "Anomaly detection involves identifying the events which do not conform to an expected pattern in data. A common approach to anomaly detection is to identify outliers in a latent space learned from data. For instance, PCA has been successfully used for anomaly detection. Variational autoencoder (VAE) is a recently-developed deep generative model which has established itself as a powerful method for learning representation from data in a nonlinear way. However, the VAE does not take the temporal dependence in data into account, so it limits its applicability to time series. In this paper we combine the echo-state network, which is a simple training method for recurrent networks, with the VAE, in order to learn representation from multivariate time series data. We present an echo-state conditional variational autoencoder (ES-CVAE) and demonstrate its useful behavior in the task of anomaly detection in multivariate time series data.",
"title": ""
},
{
"docid": "5fc3da9b59e9a2a7c26fa93445c68933",
"text": "A country's growth is strongly measured by quality of its education system. Education sector, across the globe has witnessed sea change in its functioning. Today it is recognized as an industry and like any other industry it is facing challenges, the major challenges of higher education being decrease in students' success rate and their leaving a course without completion. An early prediction of students' failure may help the management provide timely counseling as well coaching to increase success rate and student retention. We use different classification techniques to build performance prediction model based on students' social integration, academic integration, and various emotional skills which have not been considered so far. Two algorithms J48 (Implementation of C4.5) and Random Tree have been applied to the records of MCA students of colleges affiliated to Guru Gobind Singh Indraprastha University to predict third semester performance. Random Tree is found to be more accurate in predicting performance than J48 algorithm.",
"title": ""
},
{
"docid": "719783be7139d384d24202688f7fc555",
"text": "Big sensing data is prevalent in both industry and scientific research applications where the data is generated with high volume and velocity. Cloud computing provides a promising platform for big sensing data processing and storage as it provides a flexible stack of massive computing, storage, and software services in a scalable manner. Current big sensing data processing on Cloud have adopted some data compression techniques. However, due to the high volume and velocity of big sensing data, traditional data compression techniques lack sufficient efficiency and scalability for data processing. Based on specific on-Cloud data compression requirements, we propose a novel scalable data compression approach based on calculating similarity among the partitioned data chunks. Instead of compressing basic data units, the compression will be conducted over partitioned data chunks. To restore original data sets, some restoration functions and predictions will be designed. MapReduce is used for algorithm implementation to achieve extra scalability on Cloud. With real world meteorological big sensing data experiments on U-Cloud platform, we demonstrate that the proposed scalable compression approach based on data chunk similarity can significantly improve data compression efficiency with affordable data accuracy loss.",
"title": ""
},
{
"docid": "2410a4b40b833d1729fac37020ec13be",
"text": "Understanding how ecological conditions influence physiological responses is fundamental to forensic entomology. When determining the minimum postmortem interval with blow fly evidence in forensic investigations, using a reliable and accurate model of development is integral. Many published studies vary in results, source populations, and experimental designs. Accordingly, disentangling genetic causes of developmental variation from environmental causes is difficult. This study determined the minimum time of development and pupal sizes of three populations of Lucilia sericata Meigen (Diptera: Calliphoridae; from California, Michigan, and West Virginia) at two temperatures (20 degrees C and 33.5 degrees C). Development times differed significantly between strain and temperature. In addition, California pupae were the largest and fastest developing at 20 degrees C, but at 33.5 degrees C, though they still maintained their rank in size among the three populations, they were the slowest to develop. These results indicate a need to account for genetic differences in development, and genetic variation in environmental responses, when estimating a postmortem interval with entomological data.",
"title": ""
},
{
"docid": "98a820c806b392e18b38d075b91a4fe9",
"text": "This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.",
"title": ""
},
{
"docid": "a797ab99ed7983bd7372de56d34caca1",
"text": "The discovery of stem cells that can generate neural tissue has raised new possibilities for repairing the nervous system. A rush of papers proclaiming adult stem cell plasticity has fostered the notion that there is essentially one stem cell type that, with the right impetus, can create whatever progeny our heart, liver or other vital organ desires. But studies aimed at understanding the role of stem cells during development have led to a different view — that stem cells are restricted regionally and temporally, and thus not all stem cells are equivalent. Can these views be reconciled?",
"title": ""
},
{
"docid": "12be3f9c1f02ad3f26462ab841a80165",
"text": "Queries in patent prior art search are full patent applications and much longer than standard ad hoc search and web search topics. Standard information retrieval (IR) techniques are not entirely effective for patent prior art search because of ambiguous terms in these massive queries. Reducing patent queries by extracting key terms has been shown to be ineffective mainly because it is not clear what the focus of the query is. An optimal query reduction algorithm must thus seek to retain the useful terms for retrieval favouring recall of relevant patents, but remove terms which impair IR effectiveness. We propose a new query reduction technique decomposing a patent application into constituent text segments and computing the Language Modeling (LM) similarities by calculating the probability of generating each segment from the top ranked documents. We reduce a patent query by removing the least similar segments from the query, hypothesising that removal of these segments can increase the precision of retrieval, while still retaining the useful context to achieve high recall. Experiments on the patent prior art search collection CLEF-IP 2010 show that the proposed method outperforms standard pseudo-relevance feedback (PRF) and a naive method of query reduction based on removal of unit frequency terms (UFTs).",
"title": ""
}
] | scidocsrr |
8bf0224075997c84429972b9b7e70960 | Multi-Task Learning with Low Rank Attribute Embedding for Multi-Camera Person Re-Identification | [
{
"docid": "9e04e2d09e0b57a6af76ed522ede1154",
"text": "The field of surveillance and forensics research is currently shifting focus and is now showing an ever increasing interest in the task of people reidentification. This is the task of assigning the same identifier to all instances of a particular individual captured in a series of images or videos, even after the occurrence of significant gaps over time or space. People reidentification can be a useful tool for people analysis in security as a data association method for long-term tracking in surveillance. However, current identification techniques being utilized present many difficulties and shortcomings. For instance, they rely solely on the exploitation of visual cues such as color, texture, and the object’s shape. Despite the many advances in this field, reidentification is still an open problem. This survey aims to tackle all the issues and challenging aspects of people reidentification while simultaneously describing the previously proposed solutions for the encountered problems. This begins with the first attempts of holistic descriptors and progresses to the more recently adopted 2D and 3D model-based approaches. The survey also includes an exhaustive treatise of all the aspects of people reidentification, including available datasets, evaluation metrics, and benchmarking.",
"title": ""
},
{
"docid": "225204d66c371372debb3bb2a37c795b",
"text": "We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.",
"title": ""
}
] | [
{
"docid": "bf08d673b40109d6d6101947258684fd",
"text": "More and more medicinal mushrooms have been widely used as a miraculous herb for health promotion, especially by cancer patients. Here we report screening thirteen mushrooms for anti-cancer cell activities in eleven different cell lines. Of the herbal products tested, we found that the extract of Amauroderma rude exerted the highest activity in killing most of these cancer cell lines. Amauroderma rude is a fungus belonging to the Ganodermataceae family. The Amauroderma genus contains approximately 30 species widespread throughout the tropical areas. Since the biological function of Amauroderma rude is unknown, we examined its anti-cancer effect on breast carcinoma cell lines. We compared the anti-cancer activity of Amauroderma rude and Ganoderma lucidum, the most well-known medicinal mushrooms with anti-cancer activity and found that Amauroderma rude had significantly higher activity in killing cancer cells than Ganoderma lucidum. We then examined the effect of Amauroderma rude on breast cancer cells and found that at low concentrations, Amauroderma rude could inhibit cancer cell survival and induce apoptosis. Treated cancer cells also formed fewer and smaller colonies than the untreated cells. When nude mice bearing tumors were injected with Amauroderma rude extract, the tumors grew at a slower rate than the control. Examination of these tumors revealed extensive cell death, decreased proliferation rate as stained by Ki67, and increased apoptosis as stained by TUNEL. Suppression of c-myc expression appeared to be associated with these effects. Taken together, Amauroderma rude represented a powerful medicinal mushroom with anti-cancer activities.",
"title": ""
},
{
"docid": "1224987c5fdd228cc38bf1ee3aeb6f2d",
"text": "Many existing studies of social media focus on only one platform, but the reality of users' lived experiences is that most users incorporate multiple platforms into their communication practices in order to access the people and networks they desire to influence. In order to better understand how people make sharing decisions across multiple sites, we asked our participants (N=29) to categorize all modes of communication they used, with the goal of surfacing their mental models about managing sharing across platforms. Our interview data suggest that people simultaneously consider \"audience\" and \"content\" when sharing and these needs sometimes compete with one another; that they have the strong desire to both maintain boundaries between platforms as well as allowing content and audience to permeate across these boundaries; and that they strive to stabilize their own communication ecosystem yet need to respond to changes necessitated by the emergence of new tools, practices, and contacts. We unpack the implications of these tensions and suggest future design possibilities.",
"title": ""
},
{
"docid": "512fee2ebf2765335f07a45d8f648c03",
"text": "Dialogue Act recognition associate dialogue acts (i.e., semantic labels) to utterances in a conversation. The problem of associating semantic labels to utterances can be treated as a sequence labeling problem. In this work, we build a hierarchical recurrent neural network using bidirectional LSTM as a base unit and the conditional random field (CRF) as the top layer to classify each utterance into its corresponding dialogue act. The hierarchical network learns representations at multiple levels, i.e., word level, utterance level, and conversation level. The conversation level representations are input to the CRF layer, which takes into account not only all previous utterances but also their dialogue acts, thus modeling the dependency among both, labels and utterances, an important consideration of natural dialogue. We validate our approach on two different benchmark data sets, Switchboard and Meeting Recorder Dialogue Act, and show performance improvement over the state-of-the-art methods by 2.2% and 4.1% absolute points, respectively. It is worth noting that the inter-annotator agreement on Switchboard data set is 84%, and our method is able to achieve the accuracy of about 79% despite being trained on the noisy data.",
"title": ""
},
{
"docid": "1d273a18183c450c11ec6f3e4fa9a4e7",
"text": "Autonomous vehicles are an emerging application of automotive technology. They can recognize the scene, plan the path, and control the motion by themselves while interacting with drivers. Although they receive considerable attention, components of autonomous vehicles are not accessible to the public but instead are developed as proprietary assets. To facilitate the development of autonomous vehicles, this article introduces an open platform using commodity vehicles and sensors. Specifically, the authors present algorithms, software libraries, and datasets required for scene recognition, path planning, and vehicle control. This open platform allows researchers and developers to study the basis of autonomous vehicles, design new algorithms, and test their performance using the common interface.",
"title": ""
},
{
"docid": "d580f60d48331b37c55f1e9634b48826",
"text": "The fifth generation (5G) wireless network technology is to be standardized by 2020, where main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch perception type real-time communication empowered by applicable robotics and haptics equipment at the network edge. In this regard, we need drastic changes in network architecture including core and radio access network (RAN) for achieving end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey on the emerging technologies to achieve low latency communications considering three different solution domains: 1) RAN; 2) core network; and 3) caching. We also present a general overview of major 5G cellular network elements such as software defined network, network function virtualization, caching, and mobile edge computing capable of meeting latency and other 5G requirements.",
"title": ""
},
{
"docid": "f071a3d699ba4b3452043b6efb14b508",
"text": "BACKGROUND\nThe medical subdomain of a clinical note, such as cardiology or neurology, is useful content-derived metadata for developing machine learning downstream applications. To classify the medical subdomain of a note accurately, we have constructed a machine learning-based natural language processing (NLP) pipeline and developed medical subdomain classifiers based on the content of the note.\n\n\nMETHODS\nWe constructed the pipeline using the clinical NLP system, clinical Text Analysis and Knowledge Extraction System (cTAKES), the Unified Medical Language System (UMLS) Metathesaurus, Semantic Network, and learning algorithms to extract features from two datasets - clinical notes from Integrating Data for Analysis, Anonymization, and Sharing (iDASH) data repository (n = 431) and Massachusetts General Hospital (MGH) (n = 91,237), and built medical subdomain classifiers with different combinations of data representation methods and supervised learning algorithms. We evaluated the performance of classifiers and their portability across the two datasets.\n\n\nRESULTS\nThe convolutional recurrent neural network with neural word embeddings trained-medical subdomain classifier yielded the best performance measurement on iDASH and MGH datasets with area under receiver operating characteristic curve (AUC) of 0.975 and 0.991, and F1 scores of 0.845 and 0.870, respectively. Considering better clinical interpretability, linear support vector machine-trained medical subdomain classifier using hybrid bag-of-words and clinically relevant UMLS concepts as the feature representation, with term frequency-inverse document frequency (tf-idf)-weighting, outperformed other shallow learning classifiers on iDASH and MGH datasets with AUC of 0.957 and 0.964, and F1 scores of 0.932 and 0.934 respectively. We trained classifiers on one dataset, applied to the other dataset and yielded the threshold of F1 score of 0.7 in classifiers for half of the medical subdomains we studied.\n\n\nCONCLUSION\nOur study shows that a supervised learning-based NLP approach is useful to develop medical subdomain classifiers. The deep learning algorithm with distributed word representation yields better performance yet shallow learning algorithms with the word and concept representation achieves comparable performance with better clinical interpretability. Portable classifiers may also be used across datasets from different institutions.",
"title": ""
},
{
"docid": "91c3734125249659df4098ba02f2d5e5",
"text": "Good performance and efficiency, in terms of high quality of service and resource utilization for example, are important goals in a cloud environment. Through extensive measurements of an n-tier application benchmark (RUBBoS), we show that overall system performance is surprisingly sensitive to appropriate allocation of soft resources (e.g., server thread pool size). Inappropriate soft resource allocation can quickly degrade overall application performance significantly. Concretely, both under-allocation and over-allocation of thread pool can lead to bottlenecks in other resources because of non-trivial dependencies. We have observed some non-obvious phenomena due to these correlated bottlenecks. For instance, the number of threads in the Apache web server can limit the total useful throughput, causing the CPU utilization of the C-JDBC clustering middleware to decrease as the workload increases. We provide a practical iterative solution approach to this challenge through an algorithmic combination of operational queuing laws and measurement data. Our results show that soft resource allocation plays a central role in the performance scalability of complex systems such as n-tier applications in cloud environments.",
"title": ""
},
{
"docid": "edaa6ccb75658c9818e48538c6135097",
"text": "Software Defined Network (SDN) is the latest network architecture in which the data and control planes do not reside on the same networking element. The control of packet forwarding in this architecture is taken out and is carried out by a programmable software component, the controller, whereas the forwarding elements are only used as packet moving devices that are driven by the controller. SDN architecture also provides Open APIs from both control and data planes. In order to provide communication between the controller and the forwarding hardware among many available protocols, OpenFlow (OF), is generally regarded as a standardized protocol for SDN. Open APIs for communication between the controller and applications enable development of network management applications easy. Therefore, SDN makes it possible to program the network thus provide numerous benefits. As a result, various vendors have developed SDN architectures. This paper summarizes as well as compares most of the common SDN architectures available till date.",
"title": ""
},
{
"docid": "d2a9cd6bfbaff70302f2d6f455e87fcc",
"text": "A Deep-learning architecture is a representation learning method with multiple levels of abstraction. It finds out complex structure of nonlinear processing layer in large datasets for pattern recognition. From the earliest uses of deep learning, Convolution Neural Network (CNN) can be trained by simple mathematical method based gradient descent. One of the most promising improvement of CNN is the integration of intelligent heuristic algorithms for learning optimization. In this paper, we use the seven layer CNN, named ConvNet, for handwriting digit classification. The Particle Swarm Optimization algorithm (PSO) is adapted to evolve the internal parameters of processing layers.",
"title": ""
},
{
"docid": "e3299737a0fb3cd3c9433f462565b278",
"text": "BACKGROUND\nMore than two-thirds of pregnant women experience low-back pain and almost one-fifth experience pelvic pain. The two conditions may occur separately or together (low-back and pelvic pain) and typically increase with advancing pregnancy, interfering with work, daily activities and sleep.\n\n\nOBJECTIVES\nTo update the evidence assessing the effects of any intervention used to prevent and treat low-back pain, pelvic pain or both during pregnancy.\n\n\nSEARCH METHODS\nWe searched the Cochrane Pregnancy and Childbirth (to 19 January 2015), and the Cochrane Back Review Groups' (to 19 January 2015) Trials Registers, identified relevant studies and reviews and checked their reference lists.\n\n\nSELECTION CRITERIA\nRandomised controlled trials (RCTs) of any treatment, or combination of treatments, to prevent or reduce the incidence or severity of low-back pain, pelvic pain or both, related functional disability, sick leave and adverse effects during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy.\n\n\nMAIN RESULTS\nWe included 34 RCTs examining 5121 pregnant women, aged 16 to 45 years and, when reported, from 12 to 38 weeks' gestation. Fifteen RCTs examined women with low-back pain (participants = 1847); six examined pelvic pain (participants = 889); and 13 examined women with both low-back and pelvic pain (participants = 2385). Two studies also investigated low-back pain prevention and four, low-back and pelvic pain prevention. Diagnoses ranged from self-reported symptoms to clinicians' interpretation of specific tests. All interventions were added to usual prenatal care and, unless noted, were compared with usual prenatal care. The quality of the evidence ranged from moderate to low, raising concerns about the confidence we could put in the estimates of effect. For low-back painResults from meta-analyses provided low-quality evidence (study design limitations, inconsistency) that any land-based exercise significantly reduced pain (standardised mean difference (SMD) -0.64; 95% confidence interval (CI) -1.03 to -0.25; participants = 645; studies = seven) and functional disability (SMD -0.56; 95% CI -0.89 to -0.23; participants = 146; studies = two). Low-quality evidence (study design limitations, imprecision) also suggested no significant differences in the number of women reporting low-back pain between group exercise, added to information about managing pain, versus usual prenatal care (risk ratio (RR) 0.97; 95% CI 0.80 to 1.17; participants = 374; studies = two). For pelvic painResults from a meta-analysis provided low-quality evidence (study design limitations, imprecision) of no significant difference in the number of women reporting pelvic pain between group exercise, added to information about managing pain, and usual prenatal care (RR 0.97; 95% CI 0.77 to 1.23; participants = 374; studies = two). 
For low-back and pelvic painResults from meta-analyses provided moderate-quality evidence (study design limitations) that: an eight- to 12-week exercise program reduced the number of women who reported low-back and pelvic pain (RR 0.66; 95% CI 0.45 to 0.97; participants = 1176; studies = four); land-based exercise, in a variety of formats, significantly reduced low-back and pelvic pain-related sick leave (RR 0.76; 95% CI 0.62 to 0.94; participants = 1062; studies = two).The results from a number of individual studies, incorporating various other interventions, could not be pooled due to clinical heterogeneity. There was moderate-quality evidence (study design limitations or imprecision) from individual studies suggesting that osteomanipulative therapy significantly reduced low-back pain and functional disability, and acupuncture or craniosacral therapy improved pelvic pain more than usual prenatal care. Evidence from individual studies was largely of low quality (study design limitations, imprecision), and suggested that pain and functional disability, but not sick leave, were significantly reduced following a multi-modal intervention (manual therapy, exercise and education) for low-back and pelvic pain.When reported, adverse effects were minor and transient.\n\n\nAUTHORS' CONCLUSIONS\nThere is low-quality evidence that exercise (any exercise on land or in water), may reduce pregnancy-related low-back pain and moderate- to low-quality evidence suggesting that any exercise improves functional disability and reduces sick leave more than usual prenatal care. Evidence from single studies suggests that acupuncture or craniosacral therapy improves pregnancy-related pelvic pain, and osteomanipulative therapy or a multi-modal intervention (manual therapy, exercise and education) may also be of benefit.Clinical heterogeneity precluded pooling of results in many cases. Statistical heterogeneity was substantial in all but three meta-analyses, which did not improve following sensitivity analyses. Publication bias and selective reporting cannot be ruled out.Further evidence is very likely to have an important impact on our confidence in the estimates of effect and change the estimates. Studies would benefit from the introduction of an agreed classification system that can be used to categorise women according to their presenting symptoms, so that treatment can be tailored accordingly.",
"title": ""
},
{
"docid": "a490c396ff6d47e11f35d2f08776b7fc",
"text": "The present study examined the nature of social support exchanged within an online HIV/AIDS support group. Content analysis was conducted with reference to five types of social support (information support, tangible assistance, esteem support, network support, and emotional support) on 85 threads (1,138 messages). Our analysis revealed that many of the messages offered informational and emotional support, followed by esteem support and network support, with tangible assistance the least frequently offered. Results suggest that this online support group is a popular forum through which individuals living with HIV/AIDS can offer social support. Our findings have implications for health care professionals who support individuals living with HIV/AIDS.",
"title": ""
},
{
"docid": "dc1093c859a1f3ed32245d4a6809fd34",
"text": "Recommender systems have been researched extensively over the past decades. Whereas several algorithms have been developed and deployed in various application domains, recent research effort s are increasingly oriented towards the user experience of recommender systems. This research goes beyond accuracy of recommendation algorithms and focuses on various human factors that affect acceptance of recommendations, such as user satisfaction, trust, transparency and sense of control. In this paper, we present an interactive visualization framework that combines recommendation with visualization techniques to support human-recommender interaction. Then, we analyze existing interactive recommender systems along the dimensions of our framework, including our work. Based on our survey results, we present future research challenges and opportunities. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7a5edda3bc5b271b6c1305c6a13d50eb",
"text": "Feature-sensitive verification pursues effective analysis of the exponentially many variants of a program family. However, researchers lack examples of concrete bugs induced by variability, occurring in real large-scale systems. Such a collection of bugs is a requirement for goal-oriented research, serving to evaluate tool implementations of feature-sensitive analyses by testing them on real bugs. We present a qualitative study of 42 variability bugs collected from bug-fixing commits to the Linux kernel repository. We analyze each of the bugs, and record the results in a database. In addition, we provide self-contained simplified C99 versions of the bugs, facilitating understanding and tool evaluation. Our study provides insights into the nature and occurrence of variability bugs in a large C software system, and shows in what ways variability affects and increases the complexity of software bugs.",
"title": ""
},
{
"docid": "6ee1666761a78989d5b17bf0de21aa9a",
"text": "Point set registration is a key component in many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown nonrigid spatial transformation, large dimensionality of point set, noise, and outliers, make the point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and nonrigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the Gaussian mixture model (GMM) centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by reparameterization of GMM centroid locations with rigid parameters and derive a closed form solution of the maximization step of the EM algorithm in arbitrary dimensions. In the nonrigid case, we impose the coherence constraint by regularizing the displacement field and using the variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method computation complexity to linear. We test the CPD algorithm for both rigid and nonrigid transformations in the presence of noise, outliers, and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods.",
"title": ""
},
{
"docid": "1c1628a582befbefa8e32be4b8053b06",
"text": "Gradient coils for magnetic resonance imaging (MRI) require large currents (> 500 A) for the gradient field strength, as well as high voltage (> 1600 V) for fast slew rates. Additionally, extremely high fidelity, reproducing the command signal, is critical for image quality. A new driver topology recently proposed can provide the high power and operate at high switching frequency allowing high bandwidth control. The paper presents additional improvements to the new driver architecture, and more importantly, describes the digital control design and implementation, crucial to achieve the required performance level. The power stage and control have been build and tested with the experimental results showing that the performance achieved with the new digital control capability, more than fulfills the system requirements",
"title": ""
},
{
"docid": "786df8b6b1231119e79c21cbb98e7b91",
"text": "Electric Vehicle (EV) drivers have an urgent demand for fast battery refueling methods for long distance trip and emergency drive. A well-planned battery swapping station (BSS) network can be a promising solution to offer timely refueling services. However, an inappropriate battery recharging process in the BSS may not only violate the stabilization of the power grid by their large power consumption, but also increase the charging cost from the BSS operators' point of view. In this paper, we aim to obtain the optimal charging policy to minimize the charging cost while ensuring the quality of service (QoS) of the BSS. A novel queueing network model is proposed to capture the operation nature for an individual BSS. Based on practical assumptions, we formulate the charging schedule problem as a stochastic control problem and achieve the optimal charging policy by dynamic programming. Monte Carlo simulation is used to evaluate the performance of different policies for both stationary and non-stationary EV arrival cases. Numerical results show the importance of determining the number of total batteries and charging outlets held in the BSS. Our work gives insight for the future infrastructure planning and operational management of BSS network.",
"title": ""
},
{
"docid": "621ae81c61bbeb4804045b3a038980d2",
"text": "A multi-functional in-memory inference processor integrated circuit (IC) in a 65-nm CMOS process is presented. The prototype employs a deep in-memory architecture (DIMA), which enhances both energy efficiency and throughput over conventional digital architectures via simultaneous access of multiple rows of a standard 6T bitcell array (BCA) per precharge, and embedding column pitch-matched low-swing analog processing at the BCA periphery. In doing so, DIMA exploits the synergy between the dataflow of machine learning (ML) algorithms and the SRAM architecture to reduce the dominant energy cost due to data movement. The prototype IC incorporates a 16-kB SRAM array and supports four commonly used ML algorithms—the support vector machine, template matching, <inline-formula> <tex-math notation=\"LaTeX\">$k$ </tex-math></inline-formula>-nearest neighbor, and the matched filter. Silicon measured results demonstrate simultaneous gains (dot product mode) in energy efficiency of 10<inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> and in throughput of 5.3<inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> leading to a 53<inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> reduction in the energy-delay product with negligible (<inline-formula> <tex-math notation=\"LaTeX\">$\\le $ </tex-math></inline-formula>1%) degradation in the decision-making accuracy, compared with the conventional 8-b fixed-point single-function digital implementations.",
"title": ""
},
{
"docid": "09f6fff3ec44139a305a2e3e5bed2c91",
"text": "This paper presents a novel application to detect counterfeit identity documents forged by a scan-printing operation. Texture analysis approaches are proposed to extract validation features from security background that is usually printed in documents as IDs or banknotes. The main contribution of this work is the end-to-end mobile-server architecture, which provides a service for non-expert users and therefore can be used in several scenarios. The system also provides a crowdsourcing mode so labeled images can be gathered, generating databases for incremental training of the algorithms.",
"title": ""
},
{
"docid": "db5f5f0b7599f1e9b3ebe81139eab1e6",
"text": "In the manufacturing industry, supply chain management is playing an important role in providing profit to the enterprise. Information that is useful in improving existing products and development of new products can be obtained from databases and ontology. The theory of inventive problem solving (TRIZ) supports designers of innovative product design by searching a knowledge base. The existing TRIZ ontology supports innovative design of specific products (Flashlight) for a TRIZ ontology. The research reported in this paper aims at developing a metaontology for innovative product design that can be applied to multiple products in different domain areas. The authors applied the semantic TRIZ to a product (Smart Fan) as an interim stage toward a metaontology that can manage general products and other concepts. Modeling real-world (Smart Pen and Smart Machine) ontologies is undertaken as an evaluation of the metaontology. This may open up new possibilities to innovative product designs. Innovative Product Design using Metaontology with Semantic TRIZ",
"title": ""
},
{
"docid": "222b060b4235b0d31199a74fbc630a0d",
"text": "Online bookings of hotels have increased drastically throughout recent years. Studies in tourism and hospitality have investigated the relevance of hotel attributes influencing choice but did not yet explore them in an online booking setting. This paper presents findings about consumers’ stated preferences for decision criteria from an adaptive conjoint study among 346 respondents. The results show that recommendations of friends and online reviews are the most important factors that influence online hotel booking. Partitioning the importance values of the decision criteria reveals group-specific differences indicating the presence of market segments.",
"title": ""
}
] | scidocsrr |
9303f490a97755ab2c14e154dedd900c | Graph Analytics Through Fine-Grained Parallelism | [
{
"docid": "cbf278a630fbc3e4b5c363d7cb976aa4",
"text": "Iterative computations are pervasive among data analysis applications in the cloud, including Web search, online social network analysis, recommendation systems, and so on. These cloud applications typically involve data sets of massive scale. Fast convergence of the iterative computation on the massive data set is essential for these applications. In this paper, we explore the opportunity for accelerating iterative computations and propose a distributed computing framework, PrIter, which enables fast iterative computation by providing the support of prioritized iteration. Instead of performing computations on all data records without discrimination, PrIter prioritizes the computations that help convergence the most, so that the convergence speed of iterative process is significantly improved. We evaluate PrIter on a local cluster of machines as well as on Amazon EC2 Cloud. The results show that PrIter achieves up to 50x speedup over Hadoop for a series of iterative algorithms.",
"title": ""
},
{
"docid": "efcfb0aac56068374d861f24775c9cce",
"text": "Hekaton is a new database engine optimized for memory resident data and OLTP workloads. Hekaton is fully integrated into SQL Server; it is not a separate system. To take advantage of Hekaton, a user simply declares a table memory optimized. Hekaton tables are fully transactional and durable and accessed using T-SQL in the same way as regular SQL Server tables. A query can reference both Hekaton tables and regular tables and a transaction can update data in both types of tables. T-SQL stored procedures that reference only Hekaton tables can be compiled into machine code for further performance improvements. The engine is designed for high con-currency. To achieve this it uses only latch-free data structures and a new optimistic, multiversion concurrency control technique. This paper gives an overview of the design of the Hekaton engine and reports some experimental results.",
"title": ""
},
{
"docid": "0105247ab487c2d06f3ffa0d00d4b4f9",
"text": "Many distributed storage systems achieve high data access throughput via partitioning and replication, each system with its own advantages and tradeoffs. In order to achieve high scalability, however, today's systems generally reduce transactional support, disallowing single transactions from spanning multiple partitions. Calvin is a practical transaction scheduling and data replication layer that uses a deterministic ordering guarantee to significantly reduce the normally prohibitive contention costs associated with distributed transactions. Unlike previous deterministic database system prototypes, Calvin supports disk-based storage, scales near-linearly on a cluster of commodity machines, and has no single point of failure. By replicating transaction inputs rather than effects, Calvin is also able to support multiple consistency levels---including Paxos-based strong consistency across geographically distant replicas---at no cost to transactional throughput.",
"title": ""
}
] | [
{
"docid": "b3c947eb12abdc0abf7f3bc0de9e74fc",
"text": "This paper describes the development of two nine-storey elevators control system for a residential building. The control system adopts PLC as controller, and uses a parallel connection dispatching rule based on \"minimum waiting time\" to run two elevators in parallel mode. The paper gives the basic structure, control principle and realization method of the PLC control system in detail. It also presents the ladder diagram of the key aspects of the system. The system has simple peripheral circuit and the operation result showed that it enhanced the reliability and pe.rformance of the elevators.",
"title": ""
},
{
"docid": "349caca78b6d21b5f8853b41a8201429",
"text": "OBJECTIVE\nTo evaluate the effectiveness of a functional thumb orthosis on the dominant hand of patients with rheumatoid arthritis and boutonniere thumb.\n\n\nMETHODS\nForty patients with rheumatoid arthritis and boutonniere deformity of the thumb were randomly distributed into two groups. The intervention group used the orthosis daily and the control group used the orthosis only during the evaluation. Participants were evaluated at baseline as well as after 45 and 90 days. Assessments were preformed using the O'Connor Dexterity Test, Jamar dynamometer, pinch gauge, goniometry and the Health Assessment Questionnaire. A visual analogue scale was used to assess thumb pain in the metacarpophalangeal joint.\n\n\nRESULTS\nPatients in the intervention group experienced a statistically significant reduction in pain. The thumb orthosis did not disrupt grip and pinch strength, function, Health Assessment Questionnaire score or dexterity in either group.\n\n\nCONCLUSION\nThe use of thumb orthosis for type I and type II boutonniere deformities was effective in relieving pain.",
"title": ""
},
{
"docid": "f4d4e87dd292377115ff815cc56c001c",
"text": "We present the design and implementation of a real-time, distributed light field camera. Our system allows multiple viewers to navigate virtual cameras in a dynamically changing light field that is captured in real-time. Our light field camera consists of 64 commodity video cameras that are connected to off-the-shelf computers. We employ a distributed rendering algorithm that allows us to overcome the data bandwidth problems inherent in dynamic light fields. Our algorithm works by selectively transmitting only those portions of the video streams that contribute to the desired virtual views. This technique not only reduces the total bandwidth, but it also allows us to scale the number of cameras in our system without increasing network bandwidth. We demonstrate our system with a number of examples.",
"title": ""
},
{
"docid": "441e22ca7323b7490cbdf7f5e6e85a80",
"text": "Familial gigantiform cementoma (FGC) is a rare autosomal dominant, benign fibro-cemento-osseous lesion of the jaws that can cause severe facial deformity. True FGC with familial history is extremely rare and there has been no literature regarding the radiological follow-up of FGC. We report a case of recurrent FGC in an Asian female child who has been under our observation for 6 years since she was 15 months old. After repeated recurrences and subsequent surgeries, the growth of the tumor had seemed to plateau on recent follow-up CT images. The transition from an enhancing soft tissue lesion to a homogeneous bony lesion on CT may indicate decreased growth potential of FGC.",
"title": ""
},
{
"docid": "1a2f2e75691e538c867b6ce58591a6a5",
"text": "Despite the profusion of NIALM researches and products using complex algorithms, addressing the market for low cost, compact, real-time and effective NIALM smart meters is still a challenge. This paper talks about the design of a NIALM smart meter for home appliances, with the ability to self-detect and disaggregate most home appliances. In order to satisfy the compact, real-time, low price requirements and to solve the challenge in slow transient and multi-state appliances, two algorithms are used: the CUSUM to improve the event detection and the Genetic Algorithm (GA) for appliance disaggregation. Evaluation of these algorithms has been done according to public NIALM REDD data set [6]. They are now in first stage of architecture design using Labview FPGA methodology. KeywordsNIALM, CUSUM, Genetic Algorithm, K-mean, classification, smart meter, FPGA.",
"title": ""
},
{
"docid": "2f566d97cf0949ae54276525b805239e",
"text": "The paper analyzes some forms of linguistic ambiguity in English in a specific register, i.e. newspaper headlines. In particular, the focus of the research is on examples of lexical and syntactic ambiguity that result in sources of voluntary or involuntary humor. The study is based on a corpus of 135 verbally ambiguous headlines found on web sites presenting humorous bits of information. The linguistic phenomena that contribute to create this kind of semantic confusion in headlines will be analyzed and divided into the three main categories of lexical, syntactic, and phonological ambiguity, and examples from the corpus will be discussed for each category. The main results of the study were that, firstly, contrary to the findings of previous research on jokes, syntactically ambiguous headlines were found in good percentage in the corpus and that this might point to di¤erences in genre. Secondly, two new configurations for the processing of the disjunctor/connector order were found. In the first of these configurations the disjunctor appears before the connector, instead of being placed after or coinciding with the ambiguous element, while in the second one two ambiguous elements are present, each of which functions both as a connector and",
"title": ""
},
{
"docid": "8069999c95b31e8c847091f72b694af7",
"text": "Software defined radio (SDR) is a rapidly evolving technology which implements some functional modules of a radio system in software executing on a programmable processor. SDR provides a flexible mechanism to reconfigure the radio, enabling networked devices to easily adapt to user preferences and the operating environment. However, the very mechanisms that provide the ability to reconfigure the radio through software also give rise to serious security concerns such as unauthorized modification of the software, leading to radio malfunction and interference with other users' communications. Both the SDR device and the network need to be protected from such malicious radio reconfiguration.\n In this paper, we propose a new architecture to protect SDR devices from malicious reconfiguration. The proposed architecture is based on robust separation of the radio operation environment and user application environment through the use of virtualization. A secure radio middleware layer is used to intercept all attempts to reconfigure the radio, and a security policy monitor checks the target configuration against security policies that represent the interests of various parties. Therefore, secure reconfiguration can be ensured in the radio operation environment even if the operating system in the user application environment is compromised. We have prototyped the proposed secure SDR architecture using VMware and the GNU Radio toolkit, and demonstrate that the overheads incurred by the architecture are small and tolerable. Therefore, we believe that the proposed solution could be applied to address SDR security concerns in a wide range of both general-purpose and embedded computing systems.",
"title": ""
},
{
"docid": "d1d1b85b0675c59f01c61c6f144ee8a7",
"text": "We propose a novel adaptive test of goodness-of-fit, with computational cost linear in the number of samples. We learn the test features that best indicate the differences between observed samples and a reference model, by minimizing the false negative rate. These features are constructed via Stein’s method, meaning that it is not necessary to compute the normalising constant of the model. We analyse the asymptotic Bahadur efficiency of the new test, and prove that under a mean-shift alternative, our test always has greater relative efficiency than a previous linear-time kernel test, regardless of the choice of parameters for that test. In experiments, the performance of our method exceeds that of the earlier linear-time test, and matches or exceeds the power of a quadratic-time kernel test. In high dimensions and where model structure may be exploited, our goodness of fit test performs far better than a quadratic-time two-sample test based on the Maximum Mean Discrepancy, with samples drawn from the model.",
"title": ""
},
{
"docid": "05366166b02ebd29abeb2dcf67710981",
"text": "Wireless access to the Internet via PDAs (personal digital assistants) provides Web type services in the mobile world. What we are lacking are design guidelines for such PDA services. For Web publishing, however, there are many resources to look for guidelines. The guidelines can be classified according to which aspect of the Web media they are related: software/hardware, content and its organization, or aesthetics and layout. In order to be applicable to PDA services, these guidelines have to be modified. In this paper we analyze the main characteristics of PDAs and their influence to the guidelines.",
"title": ""
},
{
"docid": "428fea9d583921320c0377b483b1280e",
"text": "Purpose: The purpose of this paper is to perform a systematic review of articles that have used the unified theory of acceptance and use of technology (UTAUT). Design/methodology/approach: The results produced in this research are based on the literature analysis of 174 existing articles on the UTAUT model. This has been performed by collecting data including demographic details, methodological details, limitations, and significance of relationships between the constructs from the available articles based on the UTAUT. Findings: The findings were categorised by dividing the articles that used the UTAUT model into types of information systems used, research approach and methods employed, and tools and techniques implemented to analyse results. We also perform the weight analysis of variables and found that performance expectancy and behavioural intention qualified for the best predictor category. The research also analysed and presented the limitations of existing studies. Research limitations/implications: The search activities were centered on occurrences of keywords to avoid tracing a large number of publications where these keywords might have been used as casual words in the main text. However, we acknowledge that there may be a number of studies, which lack keywords in the title, but still focus upon UTAUT in some form. Originality/value: This is the first research of its type, which has extensively examined the literature on the UTAUT and provided the researchers with the accumulative knowledge about the model.",
"title": ""
},
{
"docid": "ef6678881f503c1cec330ddde3e30929",
"text": "Complex queries over high speed data streams often need to rely on approximations to keep up with their input. The research community has developed a rich literature on approximate streaming algorithms for this application. Many of these algorithms produce samples of the input stream, providing better properties than conventional random sampling. In this paper, we abstract the stream sampling process and design a new stream sample operator. We show how it can be used to implement a wide variety of algorithms that perform sampling and sampling-based aggregations. Also, we show how to implement the operator in Gigascope - a high speed stream database specialized for IP network monitoring applications. As an example study, we apply the operator within such an enhanced Gigascope to perform subset-sum sampling which is of great interest for IP network management. We evaluate this implemention on a live, high speed internet traffic data stream and find that (a) the operator is a flexible, versatile addition to Gigascope suitable for tuning and algorithm engineering, and (b) the operator imposes only a small evaluation overhead. This is the first operational implementation we know of, for a wide variety of stream sampling algorithms at line speed within a data stream management system.",
"title": ""
},
{
"docid": "e0fd648da901ed99ddbed3457bc83cfe",
"text": "This clinical trial assessed the ability of Gluma Dentin Bond to inhibit dentinal sensitivity in teeth prepared to receive complete cast restorations. Twenty patients provided 76 teeth for the study. Following tooth preparation, dentinal surfaces were coated with either sterile water (control) or two 30-second applications of Gluma Dentin Bond (test) on either intact or removed smear layers. Patients were recalled after 14 days for a test of sensitivity of the prepared dentin to compressed air, osmotic stimulus (saturated CaCl2 solution), and tactile stimulation via a scratch test under controlled loads. A significantly lower number of teeth responded to the test stimuli for both Gluma groups when compared to the controls (P less than .01). No difference was noted between teeth with smear layers intact or removed prior to treatment with Gluma.",
"title": ""
},
{
"docid": "ea304e700faa3d3cae4bff89cf01c397",
"text": "Ternary logic is a promising alternative to the conventional binary logic in VLSI design as it provides the advantages of reduced interconnects, higher operating speeds, and smaller chip area. This paper presents a pair of circuits for implementing a ternary half adder using carbon nanotube field-effect transistors. The proposed designs combine both futuristic ternary and conventional binary logic design approach. One of the proposed circuits for ternary to binary decoder simplifies further circuit implementation and provides excellent delay and power advantages in data path circuit such as adder. These circuits have been extensively simulated using HSPICE to obtain power, delay, and power delay product. The circuit performances are compared with alternative designs reported in recent literature. One of the proposed ternary adders has been demonstrated power, power delay product improvement up to 63% and 66% respectively, with lesser transistor count. So, the use of these half adders in complex arithmetic circuits will be advantageous.",
"title": ""
},
{
"docid": "d0a41ebc758439b91f96b44c40dd711b",
"text": "Chirp signals are very common in radar, communication, sonar, and etc. Little is known about chirp images, i.e., 2-D chirp signals. In fact, such images frequently appear in optics and medical science. Newton's rings fringe pattern is a classical example of the images, which is widely used in optical metrology. It is known that the fractional Fourier transform(FRFT) is a convenient method for processing chirp signals. Furthermore, it can be extended to 2-D fractional Fourier transform for processing 2-D chirp signals. It is interesting to observe the chirp images in the 2-D fractional Fourier transform domain and extract some physical parameters hidden in the images. Besides that, in the FRFT domain, it is easy to separate the 2-D chirp signal from other signals to obtain the desired image.",
"title": ""
},
{
"docid": "76081fd0b4e06c6ee5d7f1e5cef7fe84",
"text": "Systematic procedure is described for designing bandpass filters with wide bandwidths based on parallel coupled three-line microstrip structures. It is found that the tight gap sizes between the resonators of end stages and feed lines, required for wideband filters based on traditional coupled line design, can be greatly released. The relation between the circuit parameters of a three-line coupling section and an admittance inverter circuit is derived. A design graph for substrate with /spl epsiv//sub r/=10.2 is provided. Two filters of orders 3 and 5 with fractional bandwidths 40% and 50%, respectively, are fabricated and measured. Good agreement between prediction and measurement is obtained.",
"title": ""
},
{
"docid": "6490b984de3a9769cdae92208e7bb26d",
"text": "A new perspective on the topic of antibiotic resistance is beginning to emerge based on a broader evolutionary and ecological understanding rather than from the traditional boundaries of clinical research of antibiotic-resistant bacterial pathogens. Phylogenetic insights into the evolution and diversity of several antibiotic resistance genes suggest that at least some of these genes have a long evolutionary history of diversification that began well before the 'antibiotic era'. Besides, there is no indication that lateral gene transfer from antibiotic-producing bacteria has played any significant role in shaping the pool of antibiotic resistance genes in clinically relevant and commensal bacteria. Most likely, the primary antibiotic resistance gene pool originated and diversified within the environmental bacterial communities, from which the genes were mobilized and penetrated into taxonomically and ecologically distant bacterial populations, including pathogens. Dissemination and penetration of antibiotic resistance genes from antibiotic producers were less significant and essentially limited to other high G+C bacteria. Besides direct selection by antibiotics, there is a number of other factors that may contribute to dissemination and maintenance of antibiotic resistance genes in bacterial populations.",
"title": ""
},
{
"docid": "1e7d55b2d45b44ab93c39894c2ea0838",
"text": "Simulink Stateflow is widely used for the model-driven development of software. However, the increasing demand of rigorous verification for safety critical applications brings new challenge to the Simulink Stateflow because of the lack of formal semantics. In this paper, we present STU, a self-contained toolkit to bridge the Simulink Stateflow and a well-defined rigorous verification. The tool translates the Simulink Stateflow into the Uppaal timed automata for verification. Compared to existing work, more advanced and complex modeling features in Stateflow such as the event stack, conditional action and timer are supported. Then, with the strong verification power of Uppaal, we can not only find design defects that are missed by the Simulink Design Verifier, but also check more important temporal properties. The evaluation on artificial examples and real industrial applications demonstrates the effectiveness.",
"title": ""
},
{
"docid": "257ffbc75578916dc89a703598ac0447",
"text": "Implant surgery in mandibular anterior region may turn from an easy minor surgery into a complicated one for the surgeon, due to inadequate knowledge of the anatomy of the surgical area and/or ignorance toward the required surgical protocol. Hence, the purpose of this article is to present an overview on the: (a) Incidence of massive bleeding and its consequences after implant placement in mandibular anterior region. (b) Its etiology, the precautionary measures to be taken to avoid such an incidence in clinical practice and management of such a hemorrhage if at all happens. An inclusion criterion for selection of article was defined, and an electronic Medline search through different database using different keywords and manual search in journals and books was executed. Relevant articles were selected based upon inclusion criteria to form the valid protocols for implant surgery in the anterior mandible. Further, from the selected articles, 21 articles describing case reports were summarized separately in a table to alert the dental surgeons about the morbidity they could come across while operating in this region. If all the required adequate measures for diagnosis and treatment planning are taken and appropriate surgical protocol is followed, mandibular anterior region is no doubt a preferable area for implant placement.",
"title": ""
},
{
"docid": "940b907c28adeaddc2515f304b1d885e",
"text": "In this study, we intend to identify the evolutionary footprints of the South Iberian population focusing on the Berber and Arab influence, which has received little attention in the literature. Analysis of the Y-chromosome variation represents a convenient way to assess the genetic contribution of North African populations to the present-day South Iberian genetic pool and could help to reconstruct other demographic events that could have influenced on that region. A total of 26 Y-SNPs and 17 Y-STRs were genotyped in 144 samples from 26 different districts of South Iberia in order to assess the male genetic composition and the level of substructure of male lineages in this area. To obtain a more comprehensive picture of the genetic structure of the South Iberian region as a whole, our data were compared with published data on neighboring populations. Our analyses allow us to confirm the specific impact of the Arab and Berber expansion and dominion of the Peninsula. Nevertheless, our results suggest that this influence is not bigger in Andalusia than in other Iberian populations.",
"title": ""
},
{
"docid": "1328ced6939005175d3fbe2ef95fd067",
"text": "We present SNIPER, an algorithm for performing efficient multi-scale training in instance level visual recognition tasks. Instead of processing every pixel in an image pyramid, SNIPER processes context regions around ground-truth instances (referred to as chips) at the appropriate scale. For background sampling, these context-regions are generated using proposals extracted from a region proposal network trained with a short learning schedule. Hence, the number of chips generated per image during training adaptively changes based on the scene complexity. SNIPER only processes 30% more pixels compared to the commonly used single scale training at 800x1333 pixels on the COCO dataset. But, it also observes samples from extreme resolutions of the image pyramid, like 1400x2000 pixels. As SNIPER operates on resampled low resolution chips (512x512 pixels), it can have a batch size as large as 20 on a single GPU even with a ResNet-101 backbone. Therefore it can benefit from batch-normalization during training without the need for synchronizing batch-normalization statistics across GPUs. SNIPER brings training of instance level recognition tasks like object detection closer to the protocol for image classification and suggests that the commonly accepted guideline that it is important to train on high resolution images for instance level visual recognition tasks might not be correct. Our implementation based on Faster-RCNN with a ResNet-101 backbone obtains an mAP of 47.6% on the COCO dataset for bounding box detection and can process 5 images per second during inference with a single GPU. Code is available at https://github.com/mahyarnajibi/SNIPER/.",
"title": ""
}
] | scidocsrr |
e1a1b3ef0672815c3094694ae21d711a | Densely Connected Convolutional Neural Network for Multi-purpose Image Forensics under Anti-forensic Attacks | [
{
"docid": "8fda8068ce2cc06b3bcdf06b7e761ca0",
"text": "Image forensics has attracted wide attention during the past decade. However, most existing works aim at detecting a certain operation, which means that their proposed features usually depend on the investigated image operation and they consider only binary classification. This usually leads to misleading results if irrelevant features and/or classifiers are used. For instance, a JPEG decompressed image would be classified as an original or median filtered image if it was fed into a median filtering detector. Hence, it is important to develop forensic methods and universal features that can simultaneously identify multiple image operations. Based on extensive experiments and analysis, we find that any image operation, including existing anti-forensics operations, will inevitably modify a large number of pixel values in the original images. Thus, some common inherent statistics such as the correlations among adjacent pixels cannot be preserved well. To detect such modifications, we try to analyze the properties of local pixels within the image in the residual domain rather than the spatial domain considering the complexity of the image contents. Inspired by image steganalytic methods, we propose a very compact universal feature set and then design a multiclass classification scheme for identifying many common image operations. In our experiments, we tested the proposed features as well as several existing features on 11 typical image processing operations and four kinds of anti-forensic methods. The experimental results show that the proposed strategy significantly outperforms the existing forensic methods in terms of both effectiveness and universality.",
"title": ""
},
{
"docid": "ef0d7de77d25cc574fe361178138d310",
"text": "This paper proposes a new, conceptually simple and effective forensic method to address both the generality and the fine-grained tampering localization problems of image forensics. Corresponding to each kind of image operation, a rich GMM (Gaussian Mixture Model) is learned as the image statistical model for small image patches. Thereafter, the binary classification problem, whether a given image block has been previously processed, can be solved by comparing the average patch log-likelihood values calculated on overlapping image patches under different GMMs of original and processed images. With comparisons to a powerful steganalytic feature, experimental results demonstrate the efficiency of the proposed method, for multiple image operations, on whole images and small blocks.",
"title": ""
}
] | [
{
"docid": "1dcbd0c9fad30fcc3c0b6f7c79f5d04c",
"text": "Anvil is a tool for the annotation of audiovisual material containing multimodal dialogue. Annotation takes place on freely definable, multiple layers (tracks) by inserting time-anchored elements that hold a number of typed attribute-value pairs. Higher-level elements (suprasegmental) consist of a sequence of elements. Attributes contain symbols or cross-level links to arbitrary other elements. Anvil is highly generic (usable with different annotation schemes), platform-independent, XMLbased and fitted with an intuitive graphical user interface. For project integration, Anvil offers the import of speech transcription and export of text and table data for further statistical processing.",
"title": ""
},
{
"docid": "aa12fd5752d85d80ff33f620546cc288",
"text": "Sentiment Analysis(SA) is a combination of emotions, opinions and subjectivity of text. Today, social networking sites like Twitter are tremendously used in expressing the opinions about a particular entity in the form of tweets which are limited to 140 characters. Reviews and opinions play a very important role in understanding peoples satisfaction regarding a particular entity. Such opinions have high potential for knowledge discovery. The main target of SA is to find opinions from tweets, extract sentiments from them and then define their polarity, i.e, positive, negative or neutral. Most of the work in this domain has been done for English Language. In this paper, we discuss and propose sentiment analysis using Hindi language. We will discuss an unsupervised lexicon method for classification.",
"title": ""
},
{
"docid": "7ec9f6b40242a732282520f1a4808d49",
"text": "In this paper, a novel technique to enhance the bandwidth of substrate integrated waveguide cavity backed slot antenna is demonstrated. The feeding technique to the cavity backed antenna has been modified by introducing offset feeding of microstrip line along with microstrip to grounded coplanar waveguide transition which helps to excite TE120 mode in the cavity and also to get improvement in impedance matching to the slot antenna simultaneously. The proposed antenna is designed to resonate in X band (8-12 GHz) and shows a resonance at 10.2 GHz with a bandwidth of 4.2% and a gain of 5.6 dBi, 15.6 dB front to back ratio and -30 dB maximum cross polarization level.",
"title": ""
},
{
"docid": "2c328d1dd45733ad8063ea89a6b6df43",
"text": "We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvement. We study RPL in five challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently.",
"title": ""
},
{
"docid": "a57e7eae346ee2aa7bbcaf08b8ac3481",
"text": "A large number of problems in AI and other areas of computer science can be viewed as special cases of the constraint-satisfaction problem. Some examples are machine vision, belief maintenance, scheduling, temporal reasoning, graph problems, floor plan design, the planning of genetic experiments, and the satisfiability problem. A number of different approaches have been developed for solving these problems. Some of them use constraint propagation to simplify the original problem. Others use backtracking to directly search for possible solutions. Some are a combination of these two techniques. This article overviews many of these approaches in a tutorial fashion. Articles",
"title": ""
},
{
"docid": "f8ac5a0dbd0bf8228b8304c1576189b9",
"text": "The importance of cost planning for solid waste management (SWM) in industrialising regions (IR) is not well recognised. The approaches used to estimate costs of SWM can broadly be classified into three categories - the unit cost method, benchmarking techniques and developing cost models using sub-approaches such as cost and production function analysis. These methods have been developed into computer programmes with varying functionality and utility. IR mostly use the unit cost and benchmarking approach to estimate their SWM costs. The models for cost estimation, on the other hand, are used at times in industrialised countries, but not in IR. Taken together, these approaches could be viewed as precedents that can be modified appropriately to suit waste management systems in IR. The main challenges (or problems) one might face while attempting to do so are a lack of cost data, and a lack of quality for what data do exist. There are practical benefits to planners in IR where solid waste problems are critical and budgets are limited.",
"title": ""
},
{
"docid": "fabe804ea92785764b9e7e1b0b0fea9c",
"text": "Many emerging applications such as intruder detection and border protection drive the fast increasing development of device-free passive (DfP) localization techniques. In this paper, we present Pilot, a Channel State Information (CSI)-based DfP indoor localization system in WLAN. Pilot design is motivated by the observations that PHY layer CSI is capable of capturing the environment variance due to frequency diversity of wideband channel, such that the position where the entity located can be uniquely identified by monitoring the CSI feature pattern shift. Therefore, a ``passive'' radio map is constructed as prerequisite which include fingerprints for entity located in some crucial reference positions, as well as clear environment. Unlike device-based approaches that directly percepts the current state of entities, the first challenge for DfP localization is to detect their appearance in the area of interest. To this end, we design an essential anomaly detection block as the localization trigger relying on the CSI feature shift when entity emerges. Afterwards, a probabilistic algorithm is proposed to match the abnormal CSI to the fingerprint database to estimate the positions of potential existing entities. Finally, a data fusion block is developed to address the multiple entities localization challenge. We have implemented Pilot system with commercial IEEE 802.11n NICs and evaluated the performance in two typical indoor scenarios. It is shown that our Pilot system can greatly outperform the corresponding best RSS-based scheme in terms of anomaly detection and localization accuracy.",
"title": ""
},
{
"docid": "161643d403819a0b9815da64c9c472ae",
"text": "The Domain Name System (DNS) is an essential network infrastructure component since it supports the operation of the Web, Email, Voice over IP (VoIP) and other business- critical applications running over the network. Events that compromise the security of DNS can have a significant impact on the Internet since they can affect its availability and its intended operation. This paper describes algorithms used to monitor and detect certain types of attacks to the DNS infrastructure using flow data. Our methodology is based on algorithms that do not rely on known signature attack vectors. The effectiveness of our solution is illustrated with real and simulated traffic examples. In one example, we were able to detect a tunneling attack well before the appearance of public reports of it.",
"title": ""
},
{
"docid": "096b09f064643cbd2cd80f310981c5a6",
"text": "A Ku-band 200-W pulsed solid-state power amplifier has been presented and designed by using a hybrid radial-/rectangular-waveguide spatially power-combining technique. The hybrid radial-/rectangular-waveguide power-dividing/power-combining circuit employed in this design provides not only a high power-combining efficiency over a wide bandwidth but also efficient heat sinking for the active power devices. A simple design approach of the presented power-dividing/power-combining structure has been developed. The measured small-signal gain of the pulsed power amplifier is about 51.3 dB over the operating frequency range, while the measured maximum output power at 1-dB compression is 209 W at 13.9 GHz, with an active power-combining efficiency of about 91%. Furthermore, the active power-combining efficiency is greater than 82% from 13.75 to 14.5 GHz.",
"title": ""
},
{
"docid": "8a9680ae0d35a1c53773ccf7dcef4df7",
"text": "Support Vector Machines SVMs have proven to be highly e ective for learning many real world datasets but have failed to establish them selves as common machine learning tools This is partly due to the fact that they are not easy to implement and their standard imple mentation requires the use of optimization packages In this paper we present simple iterative algorithms for training support vector ma chines which are easy to implement and guaranteed to converge to the optimal solution Furthermore we provide a technique for automati cally nding the kernel parameter and best learning rate Extensive experiments with real datasets are provided showing that these al gorithms compare well with standard implementations of SVMs in terms of generalisation accuracy and computational cost while being signi cantly simpler to implement",
"title": ""
},
{
"docid": "8e65630f39f96c281e206bdacf7a1748",
"text": "Precise measurement of the local position of moveable targets in three dimensions is still considered to be a challenge. With the presented local position measurement technology, a novel system, consisting of small and lightweight measurement transponders and a number of fixed base stations, is introduced. The system is operating in the 5.8-GHz industrial-scientific-medical band and can handle up to 1000 measurements per second with accuracies down to a few centimeters. Mathematical evaluation is based on a mechanical equivalent circuit. Measurement results obtained with prototype boards demonstrate the feasibility of the proposed technology in a practical application at a race track.",
"title": ""
},
{
"docid": "df896e48cb4b5a364006b3a8e60a96ac",
"text": "This paper describes a monocular vision based parking-slot-markings recognition algorithm, which is used to automate the target position selection of automatic parking assist system. Peak-pair detection and clustering in Hough space recognize marking lines. Specially, one-dimensional filter in Hough space is designed to utilize a priori knowledge about the characteristics of marking lines in bird's eye view edge image. Modified distance between point and line-segment is used to distinguish guideline from recognized marking line-segments. Once the guideline is successfully recognized, T-shape template matching easily recognizes dividing marking line-segments. Experiments show that proposed algorithm successfully recognizes parking slots even when adjacent vehicles occlude parking-slot-markings severely",
"title": ""
},
{
"docid": "1997a007b2eb9a314c4e9320d22293b4",
"text": "Face detection constitutes a key visual information analysis task in Machine Learning. The rise of Big Data has resulted in the accumulation of a massive volume of visual data which requires proper and fast analysis. Deep Learning methods are powerful approaches towards this task as training with large amounts of data exhibiting high variability has been shown to significantly enhance their effectiveness, but often requires expensive computations and leads to models of high complexity. When the objective is to analyze visual content in massive datasets, the complexity of the model becomes crucial to the success of the model. In this paper, a lightweight deep Convolutional Neural Network (CNN) is introduced for the purpose of face detection, designed with a view to minimize training and testing time, and outperforms previously published deep convolutional networks in this task, in terms of both effectiveness and efficiency. To train this lightweight deep network without compromising its efficiency, a new training method of progressive positive and hard negative sample mining is introduced and shown to drastically improve training speed and accuracy. Additionally, a separate deep network was trained to detect individual facial features and a model that combines the outputs of the two networks was created and evaluated. Both methods are capable of detecting faces under severe occlusion and unconstrained pose variation and meet the difficulties of large scale real-world, real-time face detection, and are suitable for deployment even in mobile environments such as Unmanned Aerial Vehicles (UAVs).",
"title": ""
},
{
"docid": "5203f520e6992ae6eb2e8cb28f523f6a",
"text": "Integrons can insert and excise antibiotic resistance genes on plasmids in bacteria by site-specific recombination. Class 1 integrons code for an integrase, IntI1 (337 amino acids in length), and are generally borne on elements derived from Tn5090, such as that found in the central part of Tn21. A second class of integron is found on transposon Tn7 and its relatives. We have completed the sequence of the Tn7 integrase gene, intI2, which contains an internal stop codon. This codon was found to be conserved among intI2 genes on three other Tn7-like transposons harboring different cassettes. The predicted peptide sequence (IntI2*) is 325 amino acids long and is 46% identical to IntI1. In order to detect recombination activity, the internal stop codon at position 179 in the parental allele was changed to a triplet coding for glutamic acid. The sequences flanking the cassette arrays in the class 1 and 2 integrons are not closely related, but a common pool of mobile cassettes is used by the different integron classes; two of the three antibiotic resistance cassettes on Tn7 and its close relatives are also found in various class 1 integrons. We also observed a fourth excisable cassette downstream of those described previously in Tn7. The fourth cassette encodes a 165-amino-acid protein of unknown function with 6.5 contiguous repeats of a sequence coding for 7 amino acids. IntI2*179E promoted site-specific excision of each of the cassettes in Tn7 at different frequencies. The integrases from Tn21 and Tn7 showed limited cross-specificity in that IntI1 could excise all cassettes from both Tn21 and Tn7. However, we did not observe a corresponding excision of the aadA1 cassette from Tn21 by IntI2*179E.",
"title": ""
},
{
"docid": "a85e4925e82baf96f507494c91126361",
"text": "Contractile myocytes provide a test of the hypothesis that cells sense their mechanical as well as molecular microenvironment, altering expression, organization, and/or morphology accordingly. Here, myoblasts were cultured on collagen strips attached to glass or polymer gels of varied elasticity. Subsequent fusion into myotubes occurs independent of substrate flexibility. However, myosin/actin striations emerge later only on gels with stiffness typical of normal muscle (passive Young's modulus, E approximately 12 kPa). On glass and much softer or stiffer gels, including gels emulating stiff dystrophic muscle, cells do not striate. In addition, myotubes grown on top of a compliant bottom layer of glass-attached myotubes (but not softer fibroblasts) will striate, whereas the bottom cells will only assemble stress fibers and vinculin-rich adhesions. Unlike sarcomere formation, adhesion strength increases monotonically versus substrate stiffness with strongest adhesion on glass. These findings have major implications for in vivo introduction of stem cells into diseased or damaged striated muscle of altered mechanical composition.",
"title": ""
},
{
"docid": "fb43f7e740f4a2cc6c63e3cad9bc3fc7",
"text": "The prediction task in national language processing means to guess the missing letter, word, phrase, or sentence that likely follow in a given segment of a text. Since 1980s many systems with different methods were developed for different languages. In this paper an overview of the existing prediction methods that have been used for more than two decades are described and a general classification of the approaches is presented. The three main categories of the classification are statistical modeling, knowledge-based modeling, and heuristic modeling (adaptive).",
"title": ""
},
{
"docid": "b963250b3fd1cb874c6caa93796ca1e7",
"text": "Context awareness was introduced recently in several fields in quotidian human activities. Among context aware applications, health care systems are the most important ones. Such applications, in order to perceive the context, rely on sensors which may be physical or virtual. However, these applications lack of standardization in handling the context and the perceived sensors data. In this work, we propose a formal context aware application architecture model to deal with the context taking into account the scalability and interoperability as key features towards an abstraction of the context relatively to end user applications. As a proof of concept, we present also a case study and simulation explaining the operational aspect of this architecture in health care systems.",
"title": ""
},
{
"docid": "e8b486ce556a0193148ffd743661bce9",
"text": "This chapter presents the fundamentals and applications of the State Machine Replication (SMR) technique for implementing consistent fault-tolerant services. Our focus here is threefold. First we present some fundamentals about distributed computing and three “practical” SMR protocols for different fault models. Second, we discuss some recent work aiming to improve the performance, modularity and robustness of SMR protocols. Finally, we present some prominent applications for SMR and an example of the real code needed for implementing a dependable service using the BFT-SMART replication library.",
"title": ""
},
{
"docid": "d2b06786b6daa023dfd9f58ac99e8186",
"text": "A systematic method for deriving soft-switching three-port converters (TPCs), which can interface multiple energy, is proposed in this paper. Novel full-bridge (FB) TPCs featuring single-stage power conversion, reduced conduction loss, and low-voltage stress are derived. Two nonisolated bidirectional power ports and one isolated unidirectional load port are provided by integrating an interleaved bidirectional Buck/Boost converter and a bridgeless Boost rectifier via a high-frequency transformer. The switching bridges on the primary side are shared; hence, the number of active switches is reduced. Primary-side pulse width modulation and secondary-side phase shift control strategy are employed to provide two control freedoms. Voltage and power regulations over two of the three power ports are achieved. Furthermore, the current/voltage ripples on the primary-side power ports are reduced due to the interleaving operation. Zero-voltage switching and zero-current switching are realized for the active switches and diodes, respectively. A typical FB-TPC with voltage-doubler rectifier developed by the proposed method is analyzed in detail. Operation principles, control strategy, and characteristics of the FB-TPC are presented. Experiments have been carried out to demonstrate the feasibility and effectiveness of the proposed topology derivation method.",
"title": ""
},
{
"docid": "921f141ac96c707aa2abc0c4071053d5",
"text": "When a mesh of simplicial elements (triangles or tetrahedra) is used to form a piecewise linear approximation of a function, the accuracy of the approximation depends on the sizes and shapes of the elements. In finite element methods, the conditioning of the stiffness matrices also depends on the sizes and shapes of the elements. This paper explains the mathematical connections between mesh geometry, interpolation errors, and stiffness matrix conditioning. These relationships are expressed by error bounds and element quality measures that determine the fitness of a triangle or tetrahedron for interpolation or for achieving low condition numbers. Unfortunately, the quality measures for these two purposes do not agree with each other; for instance, small angles are bad for matrix conditioning but not for interpolation. Several of the upper and lower bounds on interpolation errors and element stiffness matrix conditioning given here are tighter than those that have appeared in the literature before, so the quality measures are likely to be unusually precise indicators of element fitness.",
"title": ""
}
] | scidocsrr |
6b86364641ab8e2bb17cb12913780e8c | Time for a paradigm change in meniscal repair: save the meniscus! | [
{
"docid": "c8c82af8fc9ca5e0adac5b8b6a14031d",
"text": "PURPOSE\nTo systematically review the results of arthroscopic transtibial pullout repair (ATPR) for posterior medial meniscus root tears.\n\n\nMETHODS\nA systematic electronic search of the PubMed database and the Cochrane Library was performed in September 2014 to identify studies that reported clinical, radiographic, or second-look arthroscopic outcomes of ATPR for posterior medial meniscus root tears. Included studies were abstracted regarding study characteristics, patient demographic characteristics, surgical technique, rehabilitation, and outcome measures. The methodologic quality of the included studies was assessed with the modified Coleman Methodology Score.\n\n\nRESULTS\nSeven studies with a total of 172 patients met the inclusion criteria. The mean patient age was 55.3 years, and 83% of patients were female patients. Preoperative and postoperative Lysholm scores were reported for all patients. After a mean follow-up period of 30.2 months, the Lysholm score increased from 52.4 preoperatively to 85.9 postoperatively. On conventional radiographs, 64 of 76 patients (84%) showed no progression of Kellgren-Lawrence grading. Magnetic resonance imaging showed no progression of cartilage degeneration in 84 of 103 patients (82%) and showed reduced medial meniscal extrusion in 34 of 61 patients (56%). On the basis of second-look arthroscopy and magnetic resonance imaging in 137 patients, the healing status was rated as complete in 62%, partial in 34%, and failed in 3%. Overall, the methodologic quality of the included studies was fair, with a mean modified Coleman Methodology Score of 63.\n\n\nCONCLUSIONS\nATPR significantly improves functional outcome scores and seems to prevent the progression of osteoarthritis in most patients, at least during a short-term follow-up. Complete healing of the repaired root and reduction of meniscal extrusion seem to be less predictable, being observed in only about 60% of patients. Conclusions about the progression of osteoarthritis and reduction of meniscal extrusion are limited by the small portion of patients undergoing specific evaluation (44% and 35% of the study group, respectively).\n\n\nLEVEL OF EVIDENCE\nLevel IV, systematic review of Level III and IV studies.",
"title": ""
}
] | [
{
"docid": "483881d2c4ab6b25b019bdf1ebd75913",
"text": "Copyright: © 2018 The Author(s) Abstract. In the last few years, leading-edge research from information systems, strategic management, and economics have separately informed our understanding of platforms and infrastructures in the digital age. Our motivation for undertaking this special issue rests in the conviction that it is significant to discuss platforms and infrastructures concomitantly, while enabling knowledge from diverse disciplines to cross-pollinate to address critical, pressing policy challenges and inform strategic thinking across both social and business spheres. In this editorial, we review key insights from the literature on digital infrastructures and platforms, present emerging research themes, highlight the contributions developed from each of the six articles in this special issue, and conclude with suggestions for further research.",
"title": ""
},
{
"docid": "d64b30b463245e7e3b1690a04f1748e2",
"text": "Grasping-force optimization of multifingered robotic hands can be formulated as a problem for minimizing an objective function subject to form-closure constraints and balance constraints of external force. This paper presents a novel recurrent neural network for real-time dextrous hand-grasping force optimization. The proposed neural network is shown to be globally convergent to the optimal grasping force. Compared with existing approaches to grasping-force optimization, the proposed neural-network approach has the advantages that the complexity for implementation is reduced, and the solution accuracy is increased, by avoiding the linearization of quadratic friction constraints. Simulation results show that the proposed neural network can achieve optimal grasping force in real time.",
"title": ""
},
{
"docid": "9847936462257d8f0d03473c9a78f27d",
"text": "In this paper, a vision-guided autonomous quadrotor in an air-ground multi-robot system has been proposed. This quadrotor is equipped with a monocular camera, IMUs and a flight computer, which enables autonomous flights. Two complementary pose/motion estimation methods, respectively marker-based and optical-flow-based, are developed by considering different altitudes in a flight. To achieve smooth take-off, stable tracking and safe landing with respect to a moving ground robot and desired trajectories, appropriate controllers are designed. Additionally, data synchronization and time delay compensation are applied to improve the system performance. Real-time experiments are conducted in both indoor and outdoor environments.",
"title": ""
},
{
"docid": "0349cf3ec02acb10afd94db3b2910ac5",
"text": "Reaction of WF6 with air-exposed 27and 250-nm-thick Ti films has been studied using Rutherford backscattering spectroscopy, scanning and high-resolution transmission electron microscopy, electron and x-ray diffraction, and x-ray photoelectron spectroscopy. We show that W nucleates and grows rapidly at localized sites on Ti during short WF 6 exposures~'6 s! at 445 °C at low partial pressurespWF6,0.2 Torr. Large amounts of F, up to '2.0310 atoms/cm corresponding to an average F/Ti ratio of 1.5 in a 27-nm-thick Ti layer, penetrate the Ti film, forming a solid solution and nonvolatile TiF3. The large stresses developed due to volume expansion during fluorination of the Ti layer result in local delamination at the W/Ti and the Ti/SiO 2 interfaces at low and high WF 6 exposures, respectively. WF 6 exposure atpWF6.0.35 results in the formation of a network of elongated microcracks in the W film which allow WF 6 to diffuse through and attack the underlying Ti, consuming the 27-nm-thick Ti film through the evolution of gaseous TiF 4. © 1999 American Institute of Physics. @S0021-8979 ~99!10303-7#",
"title": ""
},
{
"docid": "b906dc1c2fc89824fd25a455dcf1475b",
"text": "Compelling evidence indicates that the CRISPR-Cas system protects prokaryotes from viruses and other potential genome invaders. This adaptive prokaryotic immune system arises from the clustered regularly interspaced short palindromic repeats (CRISPRs) found in prokaryotic genomes, which harbor short invader-derived sequences, and the CRISPR-associated (Cas) protein-coding genes. Here, we have identified a CRISPR-Cas effector complex that is comprised of small invader-targeting RNAs from the CRISPR loci (termed prokaryotic silencing (psi)RNAs) and the RAMP module (or Cmr) Cas proteins. The psiRNA-Cmr protein complexes cleave complementary target RNAs at a fixed distance from the 3' end of the integral psiRNAs. In Pyrococcus furiosus, psiRNAs occur in two size forms that share a common 5' sequence tag but have distinct 3' ends that direct cleavage of a given target RNA at two distinct sites. Our results indicate that prokaryotes possess a unique RNA silencing system that functions by homology-dependent cleavage of invader RNAs.",
"title": ""
},
{
"docid": "d46415de07d618b5127602b614415c83",
"text": "In many cases, the topology of communcation systems can be abstracted and represented as graph. Graph theories and algorithms are useful in these situations. In this paper, we introduced an algorithm to enumerate all cycles in a graph. It can be applied on digraph or undirected graph. Multigraph can also be used on for this purpose. It can be used to enumerate given length cycles without enumerating all cycles. This algorithm is simple and easy to be implemented.",
"title": ""
},
{
"docid": "f3f441c2cf1224746c0bfbb6ce02706d",
"text": "This paper addresses the task of finegrained opinion extraction – the identification of opinion-related entities: the opinion expressions, the opinion holders, and the targets of the opinions, and the relations between opinion expressions and their targets and holders. Most existing approaches tackle the extraction of opinion entities and opinion relations in a pipelined manner, where the interdependencies among different extraction stages are not captured. We propose a joint inference model that leverages knowledge from predictors that optimize subtasks of opinion extraction, and seeks a globally optimal solution. Experimental results demonstrate that our joint inference approach significantly outperforms traditional pipeline methods and baselines that tackle subtasks in isolation for the problem of opinion extraction.",
"title": ""
},
{
"docid": "138cd401515c3367428f88d4ef5d5cc7",
"text": "BACKGROUND\nThe present study was designed to implement an interprofessional simulation-based education program for nursing students and evaluate the influence of this program on nursing students' attitudes toward interprofessional education and knowledge about operating room nursing.\n\n\nMETHODS\nNursing students were randomly assigned to either the interprofessional simulation-based education or traditional course group. A before-and-after study of nursing students' attitudes toward the program was conducted using the Readiness for Interprofessional Learning Scale. Responses to an open-ended question were categorized using thematic content analysis. Nursing students' knowledge about operating room nursing was measured.\n\n\nRESULTS\nNursing students from the interprofessional simulation-based education group showed statistically different responses to four of the nineteen questions in the Readiness for Interprofessional Learning Scale, reflecting a more positive attitude toward interprofessional learning. This was also supported by thematic content analysis of the open-ended responses. Furthermore, nursing students in the simulation-based education group had a significant improvement in knowledge about operating room nursing.\n\n\nCONCLUSIONS\nThe integrated course with interprofessional education and simulation provided a positive impact on undergraduate nursing students' perceptions toward interprofessional learning and knowledge about operating room nursing. Our study demonstrated that this course may be a valuable elective option for undergraduate nursing students in operating room nursing education.",
"title": ""
},
{
"docid": "69a9dddda7590fb4f7b44216c6fc5a83",
"text": "We have developed a fast, perceptual method for selecting color scales for data visualization that takes advantage of our sensitivity to luminance variations in human faces. To do so, we conducted experiments in which we mapped various color scales onto the intensitiy values of a digitized photograph of a face and asked observers to rate each image. We found a very strong correlation between the perceived naturalness of the images and the degree to which the underlying color scales increased monotonically in luminance. Color scales that did not include a monotonically-increasing luminance component produced no positive rating scores. Since color scales with monotonic luminance profiles are widely recommended for visualizing continuous scalar data, a purely visual technique for identifying such color scales could be very useful, especially in situations where color calibration is not integrated into the visualization environment, such as over the Internet.",
"title": ""
},
{
"docid": "4e7003b497dc59c373347d8814c8f83e",
"text": "The present experiment was designed to test whether specific recordable changes in the neuromuscular system could be associated with specific alterations in soft- and hard-tissue morphology in the craniofacial region. The effect of experimentally induced neuromuscular changes on the craniofacial skeleton and dentition of eight rhesus monkeys was studied. The neuromuscular changes were triggered by complete nasal airway obstruction and the need for an oral airway. Alterations were also triggered 2 years later by removal of the obstruction and the return to nasal breathing. Changes in neuromuscular recruitment patterns resulted in changed function and posture of the mandible, tongue, and upper lip. There was considerable variation among the animals. Statistically significant morphologic effects of the induced changes were documented in several of the measured variables after the 2-year experimental period. The anterior face height increased more in the experimental animals than in the control animals; the occlusal and mandibular plane angles measured to the sella-nasion line increased; and anterior crossbites and malposition of teeth occurred. During the postexperimental period some of these changes were reversed. Alterations in soft-tissue morphology were also observed during both experimental periods. There was considerable variation in morphologic response among the animals. It was concluded that the marked individual variations in skeletal morphology and dentition resulting from the procedures were due to the variation in nature and degree of neuromuscular and soft-tissue adaptations in response to the altered function. The recorded neuromuscular recruitment patterns could not be directly related to specific changes in morphology.",
"title": ""
},
{
"docid": "3755f56410365a498c3a1ff4b61e77de",
"text": "Both high switching frequency and high efficiency are critical in reducing power adapter size. The active clamp flyback (ACF) topology allows zero voltage soft switching (ZVS) under all line and load conditions, eliminates leakage inductance and snubber losses, and enables high frequency and high power density power conversion. Traditional ACF ZVS operation relies on the resonance between leakage inductance and a small primary-side clamping capacitor, which leads to increased rms current and high conduction loss. This also causes oscillatory output rectifier current and impedes the implementation of synchronous rectification. This paper proposes a secondary-side resonance scheme to shape the primary current waveform in a way that significantly improves synchronous rectifier operation and reduces primary rms current. The concept is verified with a ${\\mathbf{25}}\\hbox{--}{\\text{W/in}}^{3}$ high-density 45-W adapter prototype using a monolithic gallium nitride power IC. Over 93% full-load efficiency was demonstrated at the worst case 90-V ac input and maximum full-load efficiency was 94.5%.",
"title": ""
},
{
"docid": "c8722cd243c552811c767fc160020b75",
"text": "Touché proposes a novel Swept Frequency Capacitive Sensing technique that can not only detect a touch event, but also recognize complex configurations of the human hands and body. Such contextual information significantly enhances touch interaction in a broad range of applications, from conventional touchscreens to unique contexts and materials. For example, in our explorations we add touch and gesture sensitivity to the human body and liquids. We demonstrate the rich capabilities of Touché with five example setups from different application domains and conduct experimental studies that show gesture classification accuracies of 99% are achievable with our technology.",
"title": ""
},
{
"docid": "79020f32ea93c9e9789bb3546cde1016",
"text": "Within software engineering, requirements engineering starts from imprecise and vague user requirements descriptions and infers precise, formalized specifications. Techniques, such as interviewing by requirements engineers, are typically applied to identify the user's needs. We want to partially automate even this first step of requirements elicitation by methods of evolutionary computation. The idea is to enable users to specify their desired software by listing examples of behavioral descriptions. Users initially specify two lists of operation sequences, one with desired behaviors and one with forbidden behaviors. Then, we search for the appropriate formal software specification in the form of a deterministic finite automaton. We solve this problem known as grammatical inference with an active coevolutionary approach following Bongard and Lipson [2]. The coevolutionary process alternates between two phases: (A) additional training data is actively proposed by an evolutionary process and the user is interactively asked to label it; (B) appropriate automata are then evolved to solve this extended grammatical inference problem. Our approach leverages multi-objective evolution in both phases and outperforms the state-of-the-art technique [2] for input alphabet sizes of three and more, which are relevant to our problem domain of requirements specification.",
"title": ""
},
{
"docid": "6c9f3107fbf14f5bef1b8edae1b9d059",
"text": "Syntax definitions are pervasive in modern software systems, and serve as the basis for language processing tools like parsers and compilers. Mainstream parser generators pose restrictions on syntax definitions that follow from their implementation algorithm. They hamper evolution, maintainability, and compositionality of syntax definitions. The pureness and declarativity of syntax definitions is lost. We analyze how these problems arise for different aspects of syntax definitions, discuss their consequences for language engineers, and show how the pure and declarative nature of syntax definitions can be regained.",
"title": ""
},
{
"docid": "ae1705c0b7be3c218c1fcb42cc53ea9a",
"text": "We examine the relation between executive compensation and corporate fraud. Executives at fraud firms have significantly larger equity-based compensation and greater financial incentives to commit fraud than do executives at industryand sizematched control firms. Executives at fraud firms also earn significantly more total compensation by exercising significantly larger fractions of their vested options than the control executives during the fraud years. Operating and stock performance measures suggest executives who commit corporate fraud attempt to offset declines in performance that would otherwise occur. Our results imply that optimal governance measures depend on the strength of executives’ financial incentives.",
"title": ""
},
{
"docid": "40f8240220dad82a7a2da33932fb0e73",
"text": "The incidence of clinically evident Curling's ulcer among 109 potentially salvageable severely burned patients was reviewed. These patients, who had greater than a 40 per cent body surface area burn, received one of these three treatment regimens: antacids hourly until autografting was complete, antacids hourly during the early postburn period followed by nutritional supplementation with Vivonex until autografting was complete or no antacids during the early postburn period but subsequent nutritional supplementation with Vivonex until autografting was complete. Clinically evident Curling's ulcer occurred in three patients. This incidence approximates the lowest reported among severely burned patients treated prophylactically with acid-reducing regimens to minimize clinically evident Curling's ulcer. In addition to its protective effect on Curling's ulcer, Vivonex, when used in combination with a high protein, high caloric diet, meets the caloric needs of the severely burned patient. Probably, Vivonex, which has a pH range of 4.5 to 5.4 protects against clinically evident Curling's ulcer by a dilutional alkalinization of gastric secretion.",
"title": ""
},
{
"docid": "8628e1073017a7dc0fec1d22e46280db",
"text": "Narita for their comments. Some of the results and ideas in this paper are similar to those in a working paper that I wrote in 2009, \"Bursting Bubbles: Consequences and Cures.\"",
"title": ""
},
{
"docid": "9c452434ad1c25d0fbe71138b6c39c4b",
"text": "Dual control frameworks for systems subject to uncertainties aim at simultaneously learning the unknown parameters while controlling the system dynamics. We propose a robust dual model predictive control algorithm for systems with bounded uncertainty with application to soft landing control. The algorithm exploits a robust control invariant set to guarantee constraint enforcement in spite of the uncertainty, and a constrained estimation algorithm to guarantee admissible parameter estimates. The impact of the control input on parameter learning is accounted for by including in the cost function a reference input, which is designed online to provide persistent excitation. The reference input design problem is non-convex, and here is solved by a sequence of relaxed convex problems. The results of the proposed method in a soft-landing control application in transportation systems are shown.",
"title": ""
},
{
"docid": "11b11bf5be63452e28a30b4494c9a704",
"text": "Advertisement and Brand awareness plays an important role in brand building, brand recognition, brand loyalty and boost up the sales performance which is regarded as the foundation for brand development. To some degree advertisement and brand awareness can directly influence consumers’ buying behavior. The female consumers from IT industry have been taken as main consumers for the research purpose. The researcher seeks to inspect and investigate brand’s intention factors and consumer’s individual factors in influencing advertisement and its impact of brand awareness on fast moving consumer goods especially personal care products .The aim of the paper is to examine the advertising and its impact of brand awareness towards FMCG Products, on the other hand, to analyze the influence of advertising on personal care products among female consumers in IT industry and finally to study the impact of media on advertising & brand awareness. The prescribed survey were conducted in the form of questionnaire and found valid and reliable for this research. After evaluating some questions, better questionnaires were developed. Then the questionnaires were distributed among 200 female consumers with a response rate of 100%. We found that advertising has constantly a significant positive effect on brand awareness and consumers perceive the brand awareness with positive attitude. Findings depicts that advertising and brand awareness have strong positive influence and considerable relationship with purchase intention of the consumer. This research highlights that female consumers of personal care products in IT industry are more brand conscious and aware about their personal care products. Advertisement and brand awareness affects their purchase intention positively; also advertising media positively influences the brand awareness and purchase intention of the female consumers. The obtained data were then processed by Pearson correlation, multiple regression analysis and ANOVA. A Study On Advertising And Its Impact Of Brand Awareness On Fast Moving Consumer Goods With Reference To Personal Care Products In Chennai Paper ID IJIFR/ V2/ E9/ 068 Page No. 3325-3333 Subject Area Business Administration",
"title": ""
},
{
"docid": "a1d2e6238e0ee4abf10facba6e9c0ef0",
"text": "The recent successes of deep learning have led to a wave of interest from non-experts. Gaining an understanding of this technology, however, is difficult. While the theory is important, it is also helpful for novices to develop an intuitive feel for the effect of different hyperparameters and structural variations. We describe TensorFlow Playground, an interactive, open sourced visualization that allows users to experiment via direct manipulation rather than coding, enabling them to quickly build an intuition about neural nets.",
"title": ""
}
] | scidocsrr |
cb0369c903de5d406e56fe9ecff85597 | Effective Botnet Detection Through Neural Networks on Convolutional Features | [
{
"docid": "8f9b22630f9bc0b86b8e51776d47de6e",
"text": "HTTP is becoming the most preferred channel for command and control (C&C) communication of botnets. One of the main reasons is that it is very easy to hide the C&C traffic in the massive amount of browser generated Web traffic. However, detecting these HTTP-based C&C packets which constitute only a minuscule portion of the overall everyday HTTP traffic is a formidable task. In this paper, we present an anomaly detection based approach to detect HTTP-based C&C traffic using statistical features based on client generated HTTP request packets and DNS server generated response packets. We use three different unsupervised anomaly detection techniques to isolate suspicious communications that have a high probability of being part of a botnet's C&C communication. Results indicate that our method can achieve more than 90% detection rate while maintaining a reasonably low false positive rate.",
"title": ""
},
{
"docid": "48f8c5ac58e9133c82242de9aff34fc1",
"text": "In recent years, the botnet phenomenon is one of the most dangerous threat to Internet security, which supports a wide range of criminal activities, including distributed denial of service (DDoS) attacks, click fraud, phishing, malware distribution, spam emails, etc. An increasing number of botnets use Domain Generation Algorithms (DGAs) to avoid detection and exclusion by the traditional methods. By dynamically and frequently generating a large number of random domain names for candidate command and control (C&C) server, botnet can be still survive even when a C&C server domain is identified and taken down. This paper presents a novel method to detect DGA botnets using Collaborative Filtering and Density-Based Clustering. We propose a combination of clustering and classification algorithm that relies on the similarity in characteristic distribution of domain names to remove noise and group similar domains. Collaborative Filtering (CF) technique is applied to find out bots in each botnet, help finding out offline malwares infected-machine. We implemented our prototype system, carried out the analysis of a huge amount of DNS traffic log of Viettel Group and obtain positive results.",
"title": ""
}
] | [
{
"docid": "4c410bb0390cc4611da4df489c89fca0",
"text": "In this work, we propose a generalized product of experts (gPoE) framework for combining the predictions of multiple probabilistic models. We identify four desirable properties that are important for scalability, expressiveness and robustness, when learning and inferring with a combination of multiple models. Through analysis and experiments, we show that gPoE of Gaussian processes (GP) have these qualities, while no other existing combination schemes satisfy all of them at the same time. The resulting GP-gPoE is highly scalable as individual GP experts can be independently learned in parallel; very expressive as the way experts are combined depends on the input rather than fixed; the combined prediction is still a valid probabilistic model with natural interpretation; and finally robust to unreliable predictions from individual experts.",
"title": ""
},
{
"docid": "b5ab4c11feee31195fdbec034b4c99d9",
"text": "Abstract Traditionally, firewalls and access control have been the most important components used in order to secure servers, hosts and computer networks. Today, intrusion detection systems (IDSs) are gaining attention and the usage of these systems is increasing. This thesis covers commercial IDSs and the future direction of these systems. A model and taxonomy for IDSs and the technologies behind intrusion detection is presented. Today, many problems exist that cripple the usage of intrusion detection systems. The decreasing confidence in the alerts generated by IDSs is directly related to serious problems like false positives. By studying IDS technologies and analyzing interviews conducted with security departments at Swedish banks, this thesis identifies the major problems within IDSs today. The identified problems, together with recent IDS research reports published at the RAID 2002 symposium, are used to recommend the future direction of commercial intrusion detection systems. Intrusion Detection Systems – Technologies, Weaknesses and Trends",
"title": ""
},
{
"docid": "5505f3e227ebba96e34e022bc59fe57a",
"text": "Social media has quickly risen to prominence as a news source, yet lingering doubts remain about its ability to spread rumor and misinformation. Systematically studying this phenomenon, however, has been difficult due to the need to collect large-scale, unbiased data along with in-situ judgements of its accuracy. In this paper we present CREDBANK, a corpus designed to bridge this gap by systematically combining machine and human computation. Specifically, CREDBANK is a corpus of tweets, topics, events and associated human credibility judgements. It is based on the real-time tracking of more than 1 billion streaming tweets over a period of more than three months, computational summarizations of those tweets, and intelligent routings of the tweet streams to human annotators—within a few hours of those events unfolding on Twitter. In total CREDBANK comprises more than 60 million tweets grouped into 1049 real-world events, each annotated by 30 human annotators. As an example, with CREDBANK one can quickly calculate that roughly 24% of the events in the global tweet stream are not perceived as credible. We have made CREDBANK publicly available, and hope it will enable new research questions related to online information credibility in fields such as social science, data mining and health.",
"title": ""
},
{
"docid": "1ecb4bd0073c16fa4d07355c12496194",
"text": "This paper gives an overview of MOSFET mismatch effects that form a performance/yield limitation for many designs. After a general description of (mis)matching, a comparison over past and future process generations is presented. The application of the matching model in CAD and analog circuit design is discussed. Mismatch effects gain importance as critical dimensions and CMOS power supply voltages decrease.",
"title": ""
},
{
"docid": "77045e77d653bfa37dfbd1a80bb152da",
"text": "We propose a new technique for training deep neural networks (DNNs) as data-driven feature front-ends for large vocabulary continuous speech recognition (LVCSR) in low resource settings. To circumvent the lack of sufficient training data for acoustic modeling in these scenarios, we use transcribed multilingual data and semi-supervised training to build the proposed feature front-ends. In our experiments, the proposed features provide an absolute improvement of 16% in a low-resource LVCSR setting with only one hour of in-domain training data. While close to three-fourths of these gains come from DNN-based features, the remaining are from semi-supervised training.",
"title": ""
},
{
"docid": "5b4e2380172b90c536eb974268a930b6",
"text": "This paper addresses the problem of road scene segmentation in conventional RGB images by exploiting recent advances in semantic segmentation via convolutional neural networks (CNNs). Segmentation networks are very large and do not currently run at interactive frame rates. To make this technique applicable to robotics we propose several architecture refinements that provide the best trade-off between segmentation quality and runtime. This is achieved by a new mapping between classes and filters at the expansion side of the network. The network is trained end-to-end and yields precise road/lane predictions at the original input resolution in roughly 50ms. Compared to the state of the art, the network achieves top accuracies on the KITTI dataset for road and lane segmentation while providing a 20× speed-up. We demonstrate that the improved efficiency is not due to the road segmentation task. Also on segmentation datasets with larger scene complexity, the accuracy does not suffer from the large speed-up.",
"title": ""
},
{
"docid": "598a45d251ae032d97db0162a9de347f",
"text": "In this paper, a 2×2 broadside array of 3D printed half-wave dipole antennas is presented. The array design leverages direct digital manufacturing (DDM) technology to realize a shaped substrate structure that is used to control the array beamwidth. The non-planar substrate allows the element spacing to be changed without affecting the length of the feed network or the distance to the underlying ground plane. The 4-element array has a broadside gain that varies between 7.0–8.5 dBi depending on the out-of-plane angle of the substrate. Acrylonitrile Butadiene Styrene (ABS) is deposited using fused deposition modeling to form the array structure (relative permittivity of 2.7 and loss tangent of 0.008) and Dupont CB028 silver paste is used to form the conductive traces.",
"title": ""
},
{
"docid": "e12e2f0d2e190d269f426a2bfefd3545",
"text": "Mordeson, J.N., Fuzzy line graphs, Pattern Recognition Letters 14 (1993) 381 384. The notion of a fuzzy line graph of a fuzzy graph is introduced. We give a necessary and sufficient condition for a fuzzy graph to be isomorphic to its corresponding fuzzy line graph. We examine when an isomorphism between two fuzzy graphs follows from an isomorphism of their corresponding fuzzy line graphs. We give a necessary and sufficient condition for a fuzzy graph to be the fuzzy line graph of some fuzzy graph.",
"title": ""
},
{
"docid": "9516d06751aa51edb0b0a3e2b75e0bde",
"text": "This paper presents a pilot-based compensation algorithm for mitigation of frequency-selective I/Q imbalances in direct-conversion OFDM transmitters. By deploying a feedback loop from RF to baseband, together with a properly-designed pilot signal structure, the I/Q imbalance properties of the transmitter are efficiently estimated in a subcarrier-wise manner. Based on the obtained I/Q imbalance knowledge, the imbalance effects on the actual transmit waveform are then mitigated by baseband pre-distortion acting on the mirror-subcarrier signals. The compensation performance of the proposed structure is analyzed using extensive computer simulations, indicating that very high image rejection ratios can be achieved in practical system set-ups with reasonable pilot signal lengths.",
"title": ""
},
{
"docid": "852391aa93e00f9aebdbc65c2e030abf",
"text": "The iSTAR Micro Air Vehicle (MAV) is a unique 9-inch diameter ducted air vehicle weighing approximately 4 lb. The configuration consists of a ducted fan with control vanes at the duct exit plane. This VTOL aircraft not only hovers, but it can also fly at high forward speed by pitching over to a near horizontal attitude. The duct both increases propulsion efficiency and produces lift in horizontal flight, similar to a conventional planar wing. The vehicle is controlled using a rate based control system with piezo-electric gyroscopes. The Flight Control Computer (FCC) processes the pilot’s commands and the rate data from the gyroscopes to stabilize and control the vehicle. First flight of the iSTAR MAV was successfully accomplished in October 2000. Flight at high pitch angles and high speed took place in November 2000. This paper describes the vehicle, control system, and ground and flight-test results . Presented at the American Helicopter Society 57 Annual forum, Washington, DC, May 9-11, 2001. Copyright 2001 by the American Helicopter Society International, Inc. All rights reserved. Introduction The Micro Craft Inc. iSTAR is a Vertical Take-Off and Landing air vehicle (Figure 1) utilizing ducted fan technology to hover and fly at high forward speed. The duct both increases the propulsion efficiency and provides direct lift in forward flight similar to a conventional planar wing. However, there are many other benefits inherent in the iSTAR design. In terms of safety, the duct protects personnel from exposure to the propeller. The vehicle also has a very small footprint, essentially a circle equal to the diameter of the duct. This is beneficial for stowing, transporting, and in operations where space is critical, such as on board ships. The simplicity of the design is another major benefit. The absence of complex mechanical systems inherent in other VTOL designs (e.g., gearboxes, articulating blades, and counter-rotating propellers) benefits both reliability and cost. Figure 1: iSTAR Micro Air Vehicle The Micro Craft iSTAR VTOL aircraft is able to both hover and fly at high speed by pitching over towards a horizontal attitude (Figure 2). Although many aircraft in history have utilized ducted fans, most of these did not attempt to transition to high-speed forward flight. One of the few aircraft that did successfully transition was the Bell X-22 (Reference 1), first flown in 1965. The X-22, consisted of a fuselage and four ducted fans that rotated relative to the fuselage to transition the vehicle forward. The X-22 differed from the iSTAR in that its fuselage remained nearly level in forward flight, and the ducts rotated relative to the fuselage. Also planar tandem wings, not the ducts themselves, generated a large portion of the lift in forward flight. 1 Micro Craft Inc. is a division of Allied Aerospace Industry Incorporated (AAII) One of the first aircraft using an annular wing for direct lift was the French Coleoptère (Reference 1) built in the late 1950s. This vehicle successfully completed transition from hovering flight using an annular wing, however a ducted propeller was not used. Instead, a single jet engine was mounted inside the center-body for propulsion. Control was achieved by deflecting vanes inside the jet exhaust, with small external fins attached to the duct, and also with deployable strakes on the nose. 
Figure 2: Hover & flight at forward speed Less well-known are the General Dynamics ducted-fan Unmanned Air Vehicles, which were developed and flown starting in 1960 with the PEEK (Reference 1) aircraft. These vehicles, a precursor to the Micro Craft iSTAR, demonstrated stable hover and low speed flight in free-flight tests, and transition to forward flight in tethered ground tests. In 1999, Micro Craft acquired the patent, improved and miniaturized the design, and manufactured two 9-inch diameter flight test vehicles under DARPA funding (Reference 1). Working in conjunction with BAE systems (formerly Lockheed Sanders) and the Army/NASA Rotorcraft Division, these vehicles have recently completed a proof-ofconcept flight test program and have been demonstrated to DARPA and the US Army. Military applications of the iSTAR include intelligence, surveillance, target acquisition, and reconnaissance. Commercial applications include border patrol, bridge inspection, and police surveillance. Vehicle Description The iSTAR is composed of four major assemblies as shown in Figure 3: (1) the upper center-body, (2) the lower center body, (3) the duct, and (4) the landing ring. The majority of the vehicle’s structure is composed of Kevlar composite material resulting in a very strong and lightweight structure. Kevlar also lacks the brittleness common to other composite materials. Components that are not composite include the engine bulkhead (aluminum) and the landing ring (steel wire). The four major assemblies are described below. The upper center-body (UCB) is cylindrical in shape and contains the engine, engine controls, propeller, and payload. Three sets of hollow struts support the UCB and pass fuel and wiring to the duct. The propulsion Hover Low Speed High Speed system is a commercial-off-the-shelf (COTS) OS-32 SX single cylinder engine. This engine develops 1.2 hp and weighs approximately 250 grams (~0.5 lb.). Fuel consists of a mixture of alcohol, nitro-methane, and oil. The fixed-pitch propeller is attached directly to the engine shaft (without a gearbox). Starting the engine is accomplished by inserting a cylindrical shaft with an attached gear into the upper center-body and meshing it with a gear fit onto the propeller shaft (see Figure 4). The shaft is rotated using an off-board electric starter (Micro Craft is also investigating on-board starting systems). Figure 3: iSTAR configuration A micro video camera is mounted inside the nose cone, which is easily removable to accommodate modular payloads. The entire UCB can be removed in less than five minutes by removing eight screws securing the struts, and then disconnecting one fuel line and one electrical connector. Figure 4: Engine starting The lower center-body (LCB) is cylindrical in shape and is supported by eight stators. The sensor board is housed in the LCB, and contains three piezo-electric gyroscopes, three accelerometers, a voltage regulator, and amplifiers. The sensor signals are routed to the processor board in the duct via wires integrated into the stators. The duct is nine inches in diameter and contains a significant amount of volume for packaging. The fuel tank, flight control Computer (FCC), voltage regulator, batteries, servos, and receiver are all housed inside the duct. Fuel is contained in the leading edge of the duct. This tank is non-structural, and easily removable. It is attached to the duct with tape. Internal to the duct are eight fixed stators. 
The angle of the stators is set so that they produce an aerodynamic rolling moment countering the torque of the engine. Control vanes are attached to the trailing edge of the stators, providing roll, yaw, and pitch control. Four servos mounted inside the duct actuate the control vanes. Many different landing systems have been studied in the past. These trade studies have identified the landing ring as superior overall to other systems. The landing ring stabilizes the vehicle in close proximity to the ground by providing a restoring moment in dynamic situations. For example, if the vehicle were translating slowly and contacted the ground, the ring would pitch the vehicle upright. The ring also reduces blockage of the duct during landing and take-off by raising the vehicle above the ground. Blocking the duct can lead to reduced thrust and control power. Landing feet have also been considered because of their reduced weight. However, landing ‘feet’ lack the self-stabilizing characteristics of the ring in dynamic situations and tend to ‘catch’ on uneven surfaces. Electronics and Control System The Flight Control Computer (FCC) is housed in the duct (Figure 5). The computer processes the sensor output and pilot commands and generates pulse width modulated (PWM) signals to drive the servos. Pilot commands are generated using two conventional joysticks. The left joystick controls throttle position and heading. The right joystick controls pitch and yaw rate. The aircraft axis system is defined such that the longitudinal axis is coaxial with the engine shaft. Therefore, in hover the pitch attitude is 90 degrees and rolling the aircraft produces a heading change. Dedicated servos are used for pitch and yaw control. However, all control vanes are used for roll control (four quadrant roll control). The FCC provides the appropriate mixing for each servo. In each axis, the control system architecture consists of a conventional Proportional-Integral-Derivative (PID) controller with single-input and single-output. Initially, an attitude-based control system was desired, however Upper Center-body Fuel Tank Fixed Stator Control Vane Actuator Landing Ring Lower Center-body Duct Engine and Controls Prop/Fan Support struts due to the lack of acceleration information and the high gyroscope drift rates, accurate attitudes could not be calculated. For this reason, a rate system was ultimately implemented. Three Murata micro piezo-electric gyroscopes provide rates about all three axes. These gyroscopes are approximately 0.6”x0.3”x0.15” in size and weigh 1 gram each (Figure 6). Figure 5: Flight Control Computer Four COTS servos are located in the duct to actuate the control surfaces. Each servo weighs 28 grams and is 1.3”x1.3”x0.6” in size. Relative to typical UAV servos, they can generate high rates, but have low bandwidth. Bandwidth is defined by how high a frequency the servo can accurately follow an input signal. For all servos, the output lags behind the input and the signal degrades in magnitude as the frequency increases. At low frequency, the iSTAR MAV servo output signal lags by approximately 30°,",
"title": ""
},
{
"docid": "bc4d9587ba33464d74302045336ddc38",
"text": "Deep learning is a popular technique in modern online and offline services. Deep neural network based learning systems have made groundbreaking progress in model size, training and inference speed, and expressive power in recent years, but to tailor the model to specific problems and exploit data and problem structures is still an ongoing research topic. We look into two types of deep ‘‘multi-’’ objective learning problems: multi-view learning, referring to learning from data represented by multiple distinct feature sets, and multi-label learning, referring to learning from data instances belonging to multiple class labels that are not mutually exclusive. Research endeavors of both problems attempt to base on existing successful deep architectures and make changes of layers, regularization terms or even build hybrid systems to meet the problem constraints. In this report we first explain the original artificial neural network (ANN) with the backpropagation learning algorithm, and also its deep variants, e.g. deep belief network (DBN), convolutional neural network (CNN) and recurrent neural network (RNN). Next we present a survey of some multi-view and multi-label learning frameworks based on deep neural networks. At last we introduce some applications of deep multi-view and multi-label learning, including e-commerce item categorization, deep semantic hashing, dense image captioning, and our preliminary work on x-ray scattering image classification.",
"title": ""
},
{
"docid": "6dbe5a46a96857b58fc6c3d0ca7ded94",
"text": "High-school grades are often viewed as an unreliable criterion for college admissions, owing to differences in grading standards across high schools, while standardized tests are seen as methodologically rigorous, providing a more uniform and valid yardstick for assessing student ability and achievement. The present study challenges that conventional view. The study finds that high-school grade point average (HSGPA) is consistently the best predictor not only of freshman grades in college, the outcome indicator most often employed in predictive-validity studies, but of four-year college outcomes as well. A previous study, UC and the SAT (Geiser with Studley, 2003), demonstrated that HSGPA in college-preparatory courses was the best predictor of freshman grades for a sample of almost 80,000 students admitted to the University of California. Because freshman grades provide only a short-term indicator of college performance, the present study tracked four-year college outcomes, including cumulative college grades and graduation, for the same sample in order to examine the relative contribution of high-school record and standardized tests in predicting longerterm college performance. Key findings are: (1) HSGPA is consistently the strongest predictor of four-year college outcomes for all academic disciplines, campuses and freshman cohorts in the UC sample; (2) surprisingly, the predictive weight associated with HSGPA increases after the freshman year, accounting for a greater proportion of variance in cumulative fourth-year than first-year college grades; and (3) as an admissions criterion, HSGPA has less adverse impact than standardized tests on disadvantaged and underrepresented minority students. The paper concludes with a discussion of the implications of these findings for admissions policy and argues for greater emphasis on the high-school record, and a corresponding de-emphasis on standardized tests, in college admissions. * The study was supported by a grant from the Koret Foundation. Geiser and Santelices: VALIDITY OF HIGH-SCHOOL GRADES 2 CSHE Research & Occasional Paper Series Introduction and Policy Context This study examines the relative contribution of high-school grades and standardized admissions tests in predicting students’ long-term performance in college, including cumulative grade-point average and college graduation. The relative emphasis on grades vs. tests as admissions criteria has become increasingly visible as a policy issue at selective colleges and universities, particularly in states such as Texas and California, where affirmative action has been challenged or eliminated. Compared to high-school gradepoint average (HSGPA), scores on standardized admissions tests such as the SAT I are much more closely correlated with students’ socioeconomic background characteristics. As shown in Table 1, for example, among our study sample of almost 80,000 University of California (UC) freshmen, SAT I verbal and math scores exhibit a strong, positive relationship with measures of socioeconomic status (SES) such as family income, parents’ education and the academic ranking of a student’s high school, whereas HSGPA is only weakly associated with such measures. As a result, standardized admissions tests tend to have greater adverse impact than HSGPA on underrepresented minority students, who come disproportionately from disadvantaged backgrounds. 
The extent of the difference can be seen by rank-ordering students on both standardized tests and high-school grades and comparing the distributions. Rank-ordering students by test scores produces much sharper racial/ethnic stratification than when the same students are ranked by HSGPA, as shown in Table 2. It should be borne in mind the UC sample shown here represents a highly select group of students, drawn from the top 12.5% of California high-school graduates under the provisions of the state’s Master Plan for Higher Education. Overall, under-represented minority students account for about 17 percent of that group, although their percentage varies considerably across different HSGPA and SAT levels within the sample. When students are ranked by HSGPA, underrepresented minorities account for 28 percent of students in the bottom
Table 1. Correlation of Admissions Factors with SES (Family Income / Parents' Education / School API Decile): SAT I verbal 0.32 / 0.39 / 0.32; SAT I math 0.24 / 0.32 / 0.39; HSGPA 0.04 / 0.06 / 0.01. Source: UC Corporate Student System data on 79,785 first-time freshmen entering between Fall 1996 and Fall 1999.",
"title": ""
},
{
"docid": "ddd7aaa70841b172b4dc58263cc8a94e",
"text": "Fingerprint-spoofing attack often occurs when imposters gain access illegally by using artificial fingerprints, which are made of common fingerprint materials, such as silicon, latex, etc. Thus, to protect our privacy, many fingerprint liveness detection methods are put forward to discriminate fake or true fingerprint. Current work on liveness detection for fingerprint images is focused on the construction of complex handcrafted features, but these methods normally destroy or lose spatial information between pixels. Different from existing methods, convolutional neural network (CNN) can generate high-level semantic representations by learning and concatenating low-level edge and shape features from a large amount of labeled data. Thus, CNN is explored to solve the above problem and discriminate true fingerprints from fake ones in this paper. To reduce the redundant information and extract the most distinct features, ROI and PCA operations are performed for learned features of convolutional layer or pooling layer. After that, the extracted features are fed into SVM classifier. Experimental results based on the LivDet (2013) and the LivDet (2011) datasets, which are captured by using different fingerprint materials, indicate that the classification performance of our proposed method is both efficient and convenient compared with the other previous methods.",
"title": ""
},
{
"docid": "63b2c2634ec0d9507f0974203e5cc4e9",
"text": "In this paper we describe a deep network architecture that maps visual input to control actions for a robotic planar reaching task with 100% reliability in real-world trials. Our network is trained in simulation and fine-tuned with a limited number of real-world images. The policy search is guided by a kinematics-based controller (K-GPS), which works more effectively and efficiently than ε-Greedy. A critical insight in our system is the need to introduce a bottleneck in the network between the perception and control networks, and to initially train these networks independently.",
"title": ""
},
{
"docid": "82cb3db6b4738738a78fea332b075add",
"text": "This paper presents a semi-supervised learning framework for a customized semantic segmentation task using multiview image streams. A key challenge of the customized task lies in the limited accessibility of the labeled data due to the requirement of prohibitive manual annotation effort. We hypothesize that it is possible to leverage multiview image streams that are linked through the underlying 3D geometry, which can provide an additional supervisionary signal to train a segmentation model. We formulate a new cross-supervision method using a shape belief transfer—the segmentation belief in one image is used to predict that of the other image through epipolar geometry analogous to shape-from-silhouette. The shape belief transfer provides the upper and lower bounds of the segmentation for the unlabeled data where its gap approaches asymptotically to zero as the number of the labeled views increases. We integrate this theory to design a novel network that is agnostic to camera calibration, network model, and semantic category and bypasses the intermediate process of suboptimal 3D reconstruction. We validate this network by recognizing a customized semantic category per pixel from realworld visual data including non-human species and a subject of interest in social videos where attaining large-scale annotation data is infeasible.",
"title": ""
},
{
"docid": "a759ddc24cebbbf0ac71686b179962df",
"text": "Most proteins must fold into defined three-dimensional structures to gain functional activity. But in the cellular environment, newly synthesized proteins are at great risk of aberrant folding and aggregation, potentially forming toxic species. To avoid these dangers, cells invest in a complex network of molecular chaperones, which use ingenious mechanisms to prevent aggregation and promote efficient folding. Because protein molecules are highly dynamic, constant chaperone surveillance is required to ensure protein homeostasis (proteostasis). Recent advances suggest that an age-related decline in proteostasis capacity allows the manifestation of various protein-aggregation diseases, including Alzheimer's disease and Parkinson's disease. Interventions in these and numerous other pathological states may spring from a detailed understanding of the pathways underlying proteome maintenance.",
"title": ""
},
{
"docid": "47ef46ef69a23e393d8503154f110a81",
"text": "Question answering (Q&A) communities have been gaining popularity in the past few years. The success of such sites depends mainly on the contribution of a small number of expert users who provide a significant portion of the helpful answers, and so identifying users that have the potential of becoming strong contributers is an important task for owners of such communities.\n We present a study of the popular Q&A website StackOverflow (SO), in which users ask and answer questions about software development, algorithms, math and other technical topics. The dataset includes information on 3.5 million questions and 6.9 million answers created by 1.3 million users in the years 2008--2012. Participation in activities on the site (such as asking and answering questions) earns users reputation, which is an indicator of the value of that user to the site.\n We describe an analysis of the SO reputation system, and the participation patterns of high and low reputation users. The contributions of very high reputation users to the site indicate that they are the primary source of answers, and especially of high quality answers. Interestingly, we find that while the majority of questions on the site are asked by low reputation users, on average a high reputation user asks more questions than a user with low reputation. We consider a number of graph analysis methods for detecting influential and anomalous users in the underlying user interaction network, and find they are effective in detecting extreme behaviors such as those of spam users. Lastly, we show an application of our analysis: by considering user contributions over first months of activity on the site, we predict who will become influential long-term contributors.",
"title": ""
},
{
"docid": "729cd7bb3f7346143db6005a56a46279",
"text": "The feature extraction stage of speech recognition is important historically and is the subject of much current research, particularly to promote robustness to acoustic disturbances such as additive noise and reverberation. Biologically inspired and biologically related approaches are an important subset of feature extraction methods for ASR.",
"title": ""
},
{
"docid": "5467003778aa2c120c36ac023f0df704",
"text": "We consider the task of automated estimation of facial expression intensity. This involves estimation of multiple output variables (facial action units — AUs) that are structurally dependent. Their structure arises from statistically induced co-occurrence patterns of AU intensity levels. Modeling this structure is critical for improving the estimation performance; however, this performance is bounded by the quality of the input features extracted from face images. The goal of this paper is to model these structures and estimate complex feature representations simultaneously by combining conditional random field (CRF) encoded AU dependencies with deep learning. To this end, we propose a novel Copula CNN deep learning approach for modeling multivariate ordinal variables. Our model accounts for ordinal structure in output variables and their non-linear dependencies via copula functions modeled as cliques of a CRF. These are jointly optimized with deep CNN feature encoding layers using a newly introduced balanced batch iterative training algorithm. We demonstrate the effectiveness of our approach on the task of AU intensity estimation on two benchmark datasets. We show that joint learning of the deep features and the target output structure results in significant performance gains compared to existing deep structured models for analysis of facial expressions.",
"title": ""
},
{
"docid": "ab662b1dd07a7ae868f70784408e1ce1",
"text": "We use autoencoders to create low-dimensional embeddings of underlying patient phenotypes that we hypothesize are a governing factor in determining how different patients will react to different interventions. We compare the performance of autoencoders that take fixed length sequences of concatenated timesteps as input with a recurrent sequence-to-sequence autoencoder. We evaluate our methods on around 35,500 patients from the latest MIMIC III dataset from Beth Israel Deaconess Hospital.",
"title": ""
}
] | scidocsrr |
031f2a9df778ba103d06dd671c1edfda | A 10-Bit 0.5 V 100 KS/S SAR ADC with a New rail-to-rail Comparator for Energy Limited Applications | [
{
"docid": "082bf5d1d7285ce01de1f72abea5c505",
"text": "A novel switched-current successive approximation ADC is presented in this paper with high speed and low power consumption. The proposed ADC contains a new high-accuracy and power-e±cient switched-current S/H circuit and a speed-improved current comparator. Designed and simulated in a 0:18m CMOS process, this 8-bit ADC achieves 46.23 dB SNDR at 1.23 MS/s consuming 73:19 W under 1.2 V voltage supply, resulting in an ENOB of 7.38-bit and an FOM of 0.357 pJ/Conv.-step.",
"title": ""
},
{
"docid": "3f37793db0be4f874dd073972f40e1c7",
"text": "The matching properties of the threshold voltage, substrate factor and current factor of MOS transistors have been analysed and measured. Improvements of the existing theory are given, as well as extensions for long distance matching and rotation of devices. The matching results have been verified by measurements and calculations on a band-gap reference circuit.",
"title": ""
}
] | [
{
"docid": "f8c7f0fc1fb365d874766f6d1da2215c",
"text": "Different works have shown that the combination of multiple loss functions is beneficial when training deep neural networks for a variety of prediction tasks. Generally, such multi-loss approaches are implemented via a weighted multi-loss objective function in which each term encodes a different desired inference criterion. The importance of each term is often set using empirically tuned hyper-parameters. In this work, we analyze the importance of the relative weighting between the different terms of a multi-loss function and propose to leverage the model’s uncertainty with respect to each loss as an automatically learned weighting parameter. We consider the application of colon gland analysis from histopathology images for which various multi-loss functions have been proposed. We show improvements in classification and segmentation accuracy when using the proposed uncertainty driven multi-loss function.",
"title": ""
},
{
"docid": "a29ee41e8f46d1feebeb67886b657f70",
"text": "Feeling emotion is a critical characteristic to distinguish people from machines. Among all the multi-modal resources for emotion detection, textual datasets are those containing the least additional information in addition to semantics, and hence are adopted widely for testing the developed systems. However, most of the textual emotional datasets consist of emotion labels of only individual words, sentences or documents, which makes it challenging to discuss the contextual flow of emotions. In this paper, we introduce EmotionLines, the first dataset with emotions labeling on all utterances in each dialogue only based on their textual content. Dialogues in EmotionLines are collected from Friends TV scripts and private Facebook messenger dialogues. Then one of seven emotions, six Ekman’s basic emotions plus the neutral emotion, is labeled on each utterance by 5 Amazon MTurkers. A total of 29,245 utterances from 2,000 dialogues are labeled in EmotionLines. We also provide several strong baselines for emotion detection models on EmotionLines in this paper.",
"title": ""
},
{
"docid": "5fe1fa98c953d778ee27a104802e5f2b",
"text": "We describe two general approaches to creating document-level maps of science. To create a local map one defines and directly maps a sample of data, such as all literature published in a set of information science journals. To create a global map of a research field one maps ‘all of science’ and then locates a literature sample within that full context. We provide a deductive argument that global mapping should create more accurate partitions of a research field than local mapping, followed by practical reasons why this may not be so. The field of information science is then mapped at the document level using both local and global methods to provide a case illustration of the differences between the methods. Textual coherence is used to assess the accuracies of both maps. We find that document clusters in the global map have significantly higher coherence than those in the local map, and that the global map provides unique insights into the field of information science that cannot be discerned from the local map. Specifically, we show that information science and computer science have a large interface and that computer science is the more progressive discipline at that interface. We also show that research communities in temporally linked threads have a much higher coherence than isolated communities, and that this feature can be used to predict which threads will persist into a subsequent year. Methods that could increase the accuracy of both local and global maps in the future are also discussed.",
"title": ""
},
{
"docid": "bf1cbc78576e8631fa5a28f3f0f3c218",
"text": "A current-mode dc-dc converter with an on-chip current sensor is presented in this letter. The current sensor has significant improvement on the current-sensing speed. The sensing ratio of the current sensor has low sensitivity to the variation of the process, voltage, temperature and loading. The current sensor combines the sensed inductor current signal with the compensation ramp signal and the output of the error amplifier smoothly. The settling time of the current sensor is less than 10 ns. In the current-mode dc-dc converter application, the differential output of the current sensor can be directly sent to the pulse-width modulation comparator. With the proposed current sensor, the dc-dc converter could realize a low duty cycle with a high switching frequency. The dc-dc converter has been fabricated by CSMC 0.5-μm 5-V CMOS process with a die size of 2.25 mm 2. Experimental results show that the current-mode converter can achieve a duty cycle down to 0.11 with a switching frequency up to 4 MHz. The measured transient response time is less than 6 μs as the load current changes between 50 and 600 mA, rapidly.",
"title": ""
},
{
"docid": "c7ab6bc685029cc61a02f4596fef8818",
"text": "UPON Lite focuses on users, typically domain experts without ontology expertise, minimizing the role of ontology engineers.",
"title": ""
},
{
"docid": "7d42fd2db675eb5aa3573d3437a4d124",
"text": "This paper presents a new solution for filtering current harmonics in three-phase four-wire networks. The original four-branch star (FBS) filter topology presented in this paper is characterized by a particular layout of single-phase inductances and capacitors, without using any transformer or special electromagnetic device. Via this layout, a power filter, with two different and simultaneous resonance frequencies and sequences, is achieved-one frequency for positive-/negative-sequence and another one for zero-sequence components. This filter topology can work either as a passive filter, when only passive components are employed, or as a hybrid filter, when its behavior is improved by integrating a power converter into the filter structure. The paper analyzes the proposed topology, and derives fundamental concepts about the control of the resulting hybrid power filter. From this analysis, a specific implementation of a three-phase four-wire hybrid power filter is presented as an illustrative application of the filtering topology. An extensive evaluation using simulation and experimental results from a DSP-based laboratory prototype is conducted in order to verify and validate the good performance achieved by the proposed FBS passive/hybrid power filter.",
"title": ""
},
{
"docid": "b57859a76aea1fb5d4219068bde83283",
"text": "Software vulnerabilities are the root cause of a wide range of attacks. Existing vulnerability scanning tools are able to produce a set of suspects. However, they often suffer from a high false positive rate. Convicting a suspect and vindicating false positives are mostly a highly demanding manual process, requiring a certain level of understanding of the software. This limitation significantly thwarts the application of these tools by system administrators or regular users who are concerned about security but lack of understanding of, or even access to, the source code. It is often the case that even developers are reluctant to inspect/fix these numerous suspects unless they are convicted by evidence. In this paper, we propose a lightweight dynamic approach which generates evidence for various security vulnerabilities in software, with the goal of relieving the manual procedure. It is based on data lineage tracing, a technique that associates each execution point precisely with a set of relevant input values. These input values can be mutated by an offline analysis to generate exploits. We overcome the efficiency challenge by using Binary Decision Diagrams (BDD). Our tool successfully generates exploits for all the known vulnerabilities we studied. We also use it to uncover a number of new vulnerabilities, proved by evidence.",
"title": ""
},
{
"docid": "7603ee2e0519b727de6dc29e05b2049f",
"text": "To what extent do we share feelings with others? Neuroimaging investigations of the neural mechanisms involved in the perception of pain in others may cast light on one basic component of human empathy, the interpersonal sharing of affect. In this fMRI study, participants were shown a series of still photographs of hands and feet in situations that are likely to cause pain, and a matched set of control photographs without any painful events. They were asked to assess on-line the level of pain experienced by the person in the photographs. The results demonstrated that perceiving and assessing painful situations in others was associated with significant bilateral changes in activity in several regions notably, the anterior cingulate, the anterior insula, the cerebellum, and to a lesser extent the thalamus. These regions are known to play a significant role in pain processing. Finally, the activity in the anterior cingulate was strongly correlated with the participants' ratings of the others' pain, suggesting that the activity of this brain region is modulated according to subjects' reactivity to the pain of others. Our findings suggest that there is a partial cerebral commonality between perceiving pain in another individual and experiencing it oneself. This study adds to our understanding of the neurological mechanisms implicated in intersubjectivity and human empathy.",
"title": ""
},
{
"docid": "4608c8ca2cf58ca9388c25bb590a71df",
"text": "Life expectancy in most countries has been increasing continually over the several few decades thanks to significant improvements in medicine, public health, as well as personal and environmental hygiene. However, increased life expectancy combined with falling birth rates are expected to engender a large aging demographic in the near future that would impose significant burdens on the socio-economic structure of these countries. Therefore, it is essential to develop cost-effective, easy-to-use systems for the sake of elderly healthcare and well-being. Remote health monitoring, based on non-invasive and wearable sensors, actuators and modern communication and information technologies offers an efficient and cost-effective solution that allows the elderly to continue to live in their comfortable home environment instead of expensive healthcare facilities. These systems will also allow healthcare personnel to monitor important physiological signs of their patients in real time, assess health conditions and provide feedback from distant facilities. In this paper, we have presented and compared several low-cost and non-invasive health and activity monitoring systems that were reported in recent years. A survey on textile-based sensors that can potentially be used in wearable systems is also presented. Finally, compatibility of several communication technologies as well as future perspectives and research challenges in remote monitoring systems will be discussed.",
"title": ""
},
{
"docid": "7bb0ea76acaf4e23312ae62d0b6321db",
"text": "The European honey bee exploits floral resources efficiently and may therefore compete with solitary wild bees. Hence, conservationists and bee keepers are debating about the consequences of beekeeping for the conservation of wild bees in nature reserves. We observed flower-visiting bees on flowers of Calluna vulgaris in sites differing in the distance to the next honey-bee hive and in sites with hives present and absent in the Lüneburger Heath, Germany. Additionally, we counted wild bee ground nests in sites that differ in their distance to the next hive and wild bee stem nests and stem-nesting bee species in sites with hives present and absent. We did not observe fewer honey bees or higher wild bee flower visits in sites with different distances to the next hive (up to 1,229 m). However, wild bees visited fewer flowers and honey bee visits increased in sites containing honey-bee hives and in sites containing honey-bee hives we found fewer stem-nesting bee species. The reproductive success, measured as number of nests, was not affected by distance to honey-bee hives or their presence but by availability and characteristics of nesting resources. Our results suggest that beekeeping in the Lüneburg Heath can affect the conservation of stem-nesting bee species richness but not the overall reproduction either of stem-nesting or of ground-nesting bees. Future experiments need control sites with larger distances than 500 m to hives. Until more information is available, conservation efforts should forgo to enhance honey bee stocking rates but enhance the availability of nesting resources.",
"title": ""
},
{
"docid": "f0a7d1543bb056d7ea02c4f11a684d28",
"text": "The computer vision community has reached a point when it can start considering high-level reasoning tasks such as the \"communicative intents\" of images, or in what light an image portrays its subject. For example, an image might imply that a politician is competent, trustworthy, or energetic. We explore a variety of features for predicting these communicative intents. We study a number of facial expressions and body poses as cues for the implied nuances of the politician's personality. We also examine how the setting of an image (e.g. kitchen or hospital) influences the audience's perception of the portrayed politician. Finally, we improve the performance of an existing approach on this problem, by learning intermediate cues using convolutional neural networks. We show state of the art results on the Visual Persuasion dataset of Joo et al. [11].",
"title": ""
},
{
"docid": "9f6f00bf0872c54fbf2ec761bf73f944",
"text": "Nanoscience emerged in the late 1980s and is developed and applied in China since the middle of the 1990s. Although nanotechnologies have been less developed in agronomy than other disciplines, due to less investment, nanotechnologies have the potential to improve agricultural production. Here, we review more than 200 reports involving nanoscience in agriculture, livestock, and aquaculture. The major points are as follows: (1) nanotechnologies used for seeds and water improved plant germination, growth, yield, and quality. (2) Nanotechnologies could increase the storage period for vegetables and fruits. (3) For livestock and poultry breeding, nanotechnologies improved animals immunity, oxidation resistance, and production and decreased antibiotic use and manure odor. For instance, the average daily gain of pig increased by 9.9–15.3 %, the ratio of feedstuff to weight decreased by 7.5–10.3 %, and the diarrhea rate decreased by 55.6–66.7 %. (4) Nanotechnologies for water disinfection in fishpond increased water quality and increased yields and survivals of fish and prawn. (5) Nanotechnologies for pesticides increased pesticide performance threefold and reduced cost by 50 %. (6) Nano urea increased the agronomic efficiency of nitrogen fertilization by 44.5 % and the grain yield by 10.2 %, versus normal urea. (7) Nanotechnologies are widely used for rapid detection and diagnosis, notably for clinical examination, food safety testing, and animal epidemic surveillance. (8) Nanotechnologies may also have adverse effects that are so far not well known.",
"title": ""
},
{
"docid": "caa04ee7fb10167fea167a89b7228c9b",
"text": "Using dedicated hardware to do machine learning typically ends up in disaster because of cost, obsolescence, and poor software. The popularization of graphic processing units (GPUs), which are now available on every PC, provides an attractive alternative. We propose a generic 2-layer fully connected neural network GPU implementation which yields over 3/spl times/ speedup for both training and testing with respect to a 3 GHz P4 CPU.",
"title": ""
},
{
"docid": "e7bedfa690b456a7a93e5bdae8fff79c",
"text": "During the past several years, there have been a significant number of researches conducted in the area of semiconductor final test scheduling problems (SFTSP). As specific example of simultaneous multiple resources scheduling problem (SMRSP), intelligent manufacturing planning and scheduling based on meta-heuristic methods, such as Genetic Algorithm (GA), Simulated Annealing (SA), and Particle Swarm Optimization (PSO), have become the common tools for finding satisfactory solutions within reasonable computational times in real settings. However, limited researches were aiming at analyze the effects of interdependent relations during group decision-making activities. Moreover for complex and large problems, local constraints and objectives from each managerial entity, and their contributions towards the global objectives cannot be effectively represented in a single model. In this paper, we propose a novel Cooperative Estimation of Distribution Algorithm (CEDA) to overcome the challenges mentioned before. The CEDA is established based on divide-and-conquer strategy and a co-evolutionary framework. Considerable experiments have been conducted and the results confirmed that CEDA outperforms recent research results for scheduling problems in FMS (Flexible Manufacturing Systems).",
"title": ""
},
{
"docid": "9828a83e8b28b3b0d302a25da9120763",
"text": "For robotic manipulators that are redundant or with high degrees of freedom (dof ), an analytical solution to the inverse kinematics is very difficult or impossible. Pioneer 2 robotic arm (P2Arm) is a recently developed and widely used 5-dof manipulator. There is no effective solution to its inverse kinematics to date. This paper presents a first complete analytical solution to the inverse kinematics of the P2Arm, which makes it possible to control the arm to any reachable position in an unstructured environment. The strategies developed in this paper could also be useful for solving the inverse kinematics problem of other types of robotic arms.",
"title": ""
},
{
"docid": "9117bb0ed6ab5fb573f16b5a09798711",
"text": "When does knowledge transfer benefit performance? Combining field data from a global consulting firm with an agent-based model, we examine how efforts to supplement one’s knowledge from coworkers interact with individual, organizational, and environmental characteristics to impact organizational performance. We find that once cost and interpersonal exchange are included in the analysis, the impact of knowledge transfer is highly contingent. Depending on specific characteristics and circumstances, knowledge transfer can better, matter little to, or even harm performance. Three illustrative studies clarify puzzling past results and offer specific boundary conditions: (1) At the individual level, better organizational support for employee learning diminishes the benefit of knowledge transfer for organizational performance. (2) At the organization level, broader access to organizational memory makes global knowledge transfer less beneficial to performance. (3) When the organizational environment becomes more turbulent, the organizational performance benefits of knowledge transfer decrease. The findings imply that organizations may forgo investments in both organizational memory and knowledge exchange, that wide-ranging knowledge exchange may be unimportant or even harmful for performance, and that organizations operating in turbulent environments may find that investment in knowledge exchange undermines performance rather than enhances it. At a time when practitioners are urged to make investments in facilitating knowledge transfer and collaboration, appreciation of the complex relationship between knowledge transfer and performance will help in reaping benefits while avoiding liabilities.",
"title": ""
},
{
"docid": "864adf6f82a0d1af98339f92035b15fc",
"text": "Typically in neuroimaging we are looking to extract some pertinent information from imperfect, noisy images of the brain. This might be the inference of percent changes in blood flow in perfusion FMRI data, segmentation of subcortical structures from structural MRI, or inference of the probability of an anatomical connection between an area of cortex and a subthalamic nucleus using diffusion MRI. In this article we will describe how Bayesian techniques have made a significant impact in tackling problems such as these, particularly in regards to the analysis tools in the FMRIB Software Library (FSL). We shall see how Bayes provides a framework within which we can attempt to infer on models of neuroimaging data, while allowing us to incorporate our prior belief about the brain and the neuroimaging equipment in the form of biophysically informed or regularising priors. It allows us to extract probabilistic information from the data, and to probabilistically combine information from multiple modalities. Bayes can also be used to not only compare and select between models of different complexity, but also to infer on data using committees of models. Finally, we mention some analysis scenarios where Bayesian methods are impractical, and briefly discuss some practical approaches that we have taken in these cases.",
"title": ""
},
{
"docid": "a2082f1b4154cd11e94eff18a016e91e",
"text": "1 During the summer of 2005, I discovered that there was not a copy of my dissertation available from the library at McGill University. I was, however, able to obtain a copy of it on microfilm from another university that had initially obtained it on interlibrary loan. I am most grateful to Vicki Galbraith who typed this version from that copy, which except for some minor variations due to differences in type size and margins (plus this footnote, of course) is identical to that on the microfilm. ACKNOWLEDGEMENTS 1 The writer is grateful to Dr. J. T. McIlhone, Associate General Director in Charge of English Classes of the Montreal Catholic School Board, for his kind cooperation in making subjects available, and to the Principals and French teachers of each high school for their assistance and cooperation during the testing programs. advice on the statistical analysis. In addition, the writer would like to express his appreciation to Mr. K. Tunstall for his assistance in the difficult task of interviewing the parents of each student. Finally, the writer would like to express his gratitude to Janet W. Gardner for her invaluable assistance in all phases of the research program.",
"title": ""
},
{
"docid": "e483d914e00fa46a6be188fabd396165",
"text": "Assessing distance betweeen the true and the sample distribution is a key component of many state of the art generative models, such as Wasserstein Autoencoder (WAE). Inspired by prior work on Sliced-Wasserstein Autoencoders (SWAE) and kernel smoothing we construct a new generative model – Cramer-Wold AutoEncoder (CWAE). CWAE cost function, based on introduced Cramer-Wold distance between samples, has a simple closed-form in the case of normal prior. As a consequence, while simplifying the optimization procedure (no need of sampling necessary to evaluate the distance function in the training loop), CWAE performance matches quantitatively and qualitatively that of WAE-MMD (WAE using maximum mean discrepancy based distance function) and often improves upon SWAE.",
"title": ""
},
{
"docid": "737f75e39cbf1b5226985e866a44c106",
"text": "A security-enhanced agile software development process, SEAP, is introduced in the development of a mobile money transfer system at Ericsson Corp. A specific characteristic of SEAP is that it includes a security group consisting of four different competences, i.e., Security manager, security architect, security master and penetration tester. Another significant feature of SEAP is an integrated risk analysis process. In analyzing risks in the development of the mobile money transfer system, a general finding was that SEAP either solves risks that were previously postponed or solves a larger proportion of the risks in a timely manner. The previous software development process, i.e., The baseline process of the comparison outlined in this paper, required 2.7 employee hours spent for every risk identified in the analysis process compared to, on the average, 1.5 hours for the SEAP. The baseline development process left 50% of the risks unattended in the software version being developed, while SEAP reduced that figure to 22%. Furthermore, SEAP increased the proportion of risks that were corrected from 12.5% to 67.1%, i.e., More than a five times increment. This is important, since an early correction may avoid severe attacks in the future. The security competence in SEAP accounts for 5% of the personnel cost in the mobile money transfer system project. As a comparison, the corresponding figure, i.e., For security, was 1% in the previous development process.",
"title": ""
}
] | scidocsrr |
1bc8b083b81954925146ea8e9941badf | Experimental Investigation of Light-Gauge Steel Plate Shear Walls | [
{
"docid": "8f3b3611ee8a52753e026625f6ccd12e",
"text": "plate is ntation of by plastic plex, wall ection of procedure Abstract: A revised procedure for the design of steel plate shear walls is proposed. In this procedure the thickness of the infill found using equations that are derived from the plastic analysis of the strip model, which is an accepted model for the represe steel plate shear walls. Comparisons of experimentally obtained ultimate strengths of steel plate shear walls and those predicted analysis are given and reasonable agreement is observed. Fundamental plastic collapse mechanisms for several, more com configurations are also given. Additionally, an existing codified procedure for the design of steel plate walls is reviewed and a s this procedure which could lead to designs with less-than-expected ultimate strength is identified. It is shown that the proposed eliminates this possibility without changing the other valid sections of the current procedure.",
"title": ""
}
] | [
{
"docid": "bf8a24b974553d21849e9b066d78e6d4",
"text": "Dense video captioning aims to generate text descriptions for all events in an untrimmed video. This involves both detecting and describing events. Therefore, all previous methods on dense video captioning tackle this problem by building two models, i.e. an event proposal and a captioning model, for these two sub-problems. The models are either trained separately or in alternation. This prevents direct influence of the language description to the event proposal, which is important for generating accurate descriptions. To address this problem, we propose an end-to-end transformer model for dense video captioning. The encoder encodes the video into appropriate representations. The proposal decoder decodes from the encoding with different anchors to form video event proposals. The captioning decoder employs a masking network to restrict its attention to the proposal event over the encoding feature. This masking network converts the event proposal to a differentiable mask, which ensures the consistency between the proposal and captioning during training. In addition, our model employs a self-attention mechanism, which enables the use of efficient non-recurrent structure during encoding and leads to performance improvements. We demonstrate the effectiveness of this end-to-end model on ActivityNet Captions and YouCookII datasets, where we achieved 10.12 and 6.58 METEOR score, respectively.",
"title": ""
},
{
"docid": "05a76f64a6acbcf48b7ac36785009db3",
"text": "Mixed methods research is an approach that combines quantitative and qualitative research methods in the same research inquiry. Such work can help develop rich insights into various phenomena of interest that cannot be fully understood using only a quantitative or a qualitative method. Notwithstanding the benefits and repeated calls for such work, there is a dearth of mixed methods research in information systems. Building on the literature on recent methodological advances in mixed methods research, we develop a set of guidelines for conducting mixed methods research in IS. We particularly elaborate on three important aspects of conducting mixed methods research: (1) appropriateness of a mixed methods approach; (2) development of meta-inferences (i.e., substantive theory) from mixed methods research; and (3) assessment of the quality of meta-inferences (i.e., validation of mixed methods research). The applicability of these guidelines is illustrated using two published IS papers that used mixed methods.",
"title": ""
},
{
"docid": "9414f4f7164c69f67b4bf200da9f1358",
"text": "Experience replay is one of the most commonly used approaches to improve the sample efficiency of reinforcement learning algorithms. In this work, we propose an approach to select and replay sequences of transitions in order to accelerate the learning of a reinforcement learning agent in an off-policy setting. In addition to selecting appropriate sequences, we also artificially construct transition sequences using information gathered from previous agent-environment interactions. These sequences, when replayed, allow value function information to trickle down to larger sections of the state/state-action space, thereby making the most of the agent's experience. We demonstrate our approach on modified versions of standard reinforcement learning tasks such as the mountain car and puddle world problems and empirically show that it enables faster, and more accurate learning of value functions as compared to other forms of experience replay. Further, we briefly discuss some of the possible extensions to this work, as well as applications and situations where this approach could be particularly useful.",
"title": ""
},
{
"docid": "73c1f5b8e8df783c976427b64734f909",
"text": "XTS-AES is an advanced mode of AES for data protection of sector-based devices. Compared to other AES modes, it features two secret keys instead of one, and an additional tweak for each data block. These characteristics make the mode not only resistant against cryptoanalysis attacks, but also more challenging for side-channel attack. In this paper, we propose two attack methods on XTS-AES overcoming these challenges. In the first attack, we analyze side-channel leakage of the particular modular multiplication in XTS-AES mode. In the second one, we utilize the relationship between two consecutive block tweaks and propose a method to work around the masking of ciphertext by the tweak. These attacks are verified on an FPGA implementation of XTS-AES. The results show that XTS-AES is susceptible to side-channel power analysis attacks, and therefore dedicated protections are required for security of XTS-AES in storage devices.",
"title": ""
},
{
"docid": "9e451fe70d74511d2cc5a58b667da526",
"text": "Convolutional Neural Networks (CNNs) are propelling advances in a range of different computer vision tasks such as object detection and object segmentation. Their success has motivated research in applications of such models for medical image analysis. If CNN-based models are to be helpful in a medical context, they need to be precise, interpretable, and uncertainty in predictions must be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. We evaluate and enhance several architectures of Fully Convolutional Networks (FCNs) for semantic segmentation of colorectal polyps and provide a comparison between these models. Our highest performing model achieves a 76.06% mean IOU accuracy on the EndoScene dataset, a considerable improvement over the previous state-of-the-art.",
"title": ""
},
{
"docid": "2687cb8fc5cde18e53c580a50b33e328",
"text": "Social network sites (SNSs) are becoming an increasingly popular resource for both students and adults, who use them to connect with and maintain relationships with a variety of ties. For many, the primary function of these sites is to consume and distribute personal content about the self. Privacy concerns around sharing information in a public or semi-public space are amplified by SNSs’ structural characteristics, which may obfuscate the true audience of these disclosures due to their technical properties (e.g., persistence, searchability) and dynamics of use (e.g., invisible audiences, context collapse) (boyd, 2008b). Early work on the topic focused on the privacy pitfalls of Facebook and other SNSs (e.g., Acquisti & Gross, 2006; Barnes, 2006; Gross & Acquisti, 2005) and argued that individuals were (perhaps inadvertently) disclosing information that might be inappropriate for some audiences, such as future employers, or that might enable identity theft or other negative outcomes.",
"title": ""
},
{
"docid": "f6f22580071dc149a8dc544835123977",
"text": "This paper describes MITRE’s participation in the Paraphrase and Semantic Similarity in Twitter task (SemEval-2015 Task 1). This effort placed first in Semantic Similarity and second in Paraphrase Identification with scores of Pearson’s r of 61.9%, F1 of 66.7%, and maxF1 of 72.4%. We detail the approaches we explored including mixtures of string matching metrics, alignments using tweet-specific distributed word representations, recurrent neural networks for modeling similarity with those alignments, and distance measurements on pooled latent semantic features. Logistic regression is used to tie the systems together into the ensembles submitted for evaluation.",
"title": ""
},
{
"docid": "b713da979bc3d01153eaae8827779b7b",
"text": "Chronic lower leg pain results from various conditions, most commonly, medial tibial stress syndrome, stress fracture, chronic exertional compartment syndrome, nerve entrapment, and popliteal artery entrapment syndrome. Symptoms associated with these conditions often overlap, making a definitive diagnosis difficult. As a result, an algorithmic approach was created to aid in the evaluation of patients with complaints of lower leg pain and to assist in defining a diagnosis by providing recommended diagnostic studies for each condition. A comprehensive physical examination is imperative to confirm a diagnosis and should begin with an inquiry regarding the location and onset of the patient's pain and tenderness. Confirmation of the diagnosis requires performing the appropriate diagnostic studies, including radiographs, bone scans, magnetic resonance imaging, magnetic resonance angiography, compartmental pressure measurements, and arteriograms. Although most conditions causing lower leg pain are treated successfully with nonsurgical management, some syndromes, such as popliteal artery entrapment syndrome, may require surgical intervention. Regardless of the form of treatment, return to activity must be gradual and individualized for each patient to prevent future athletic injury.",
"title": ""
},
{
"docid": "1b990fd9a3506f821519faad113f59ee",
"text": "The primary focus of this study is to understand the current port operating condition and recommend short term measures to improve traffic condition in the port of Chennai. The cause of congestion is identified based on the data collected and observation made at port gates as well as at terminal gates in Chennai port. A simulation model for the existing road layout is developed in micro-simulation software VISSIM and is calibrated to reflect the prevailing condition inside the port. The data such as truck origin/destination, hourly inflow and outflow of trucks, speed, and stopping time at checking booths are used as input. Routing data is used to direct traffic to specific terminal or dock within the port. Several alternative scenarios are developed and simulated to get results of the key performance indicators. A comparative and detailed analysis of these indicators is used to evaluate recommendations to reduce congestion inside the port.",
"title": ""
},
{
"docid": "435da20d6285a8b57a35fb407b96c802",
"text": "This paper attempts to review examples of the use of storytelling and narrative in immersive virtual reality worlds. Particular attention is given to the way narrative is incorporated in artistic, cultural, and educational applications through the development of specific sensory and perceptual experiences that are based on characteristics inherent to virtual reality, such as immersion, interactivity, representation, and illusion. Narrative development is considered on three axes: form (visual representation), story (emotional involvement), and history (authenticated cultural content) and how these can come together.",
"title": ""
},
{
"docid": "ebbc0b7aea9fafa1258f337fab4d20e8",
"text": "This paper presents a new design of high frequency DC/AC inverter for home applications using fuel cells or photovoltaic array sources. A battery bank parallel to the DC link is provided to take care of the slow dynamic response of the source. The design is based on a push-pull DC/DC converter followed by a full-bridge PWM inverter topology. The nominal power rating is 10 kW. Actual design parameters, procedure and experimental results of a 1.5 kW prototype are provided. The objective of this paper is to explore the possibility of making renewable sources of energy utility interactive by means of low cost power electronic interface.",
"title": ""
},
{
"docid": "f4d6cd6f6cd453077e162b64ae485c62",
"text": "Effects of Music Therapy on Prosocial Behavior of Students with Autism and Developmental Disabilities by Catherine L. de Mers Dr. Matt Tincani, Examination Committee Chair Assistant Professor o f Special Education University o f Nevada, Las Vegas This researeh study employed a multiple baseline across participants design to investigate the effects o f music therapy intervention on hitting, screaming, and asking o f three children with autism and/or developmental disabilities. Behaviors were observed and recorded during 10-minute free-play sessions both during baseline and immediately after musie therapy sessions during intervention. Interobserver agreement and procedural fidelity data were collected. Music therapy sessions were modeled on literature pertaining to music therapy with children with autism. In addition, social validity surveys were eollected to answer research questions pertaining to the social validity of music therapy as an intervention. Findings indicate that music therapy produced moderate and gradual effects on hitting, screaming, and asking. Hitting and sereaming deereased following intervention, while asking increased. Intervention effects were maintained three weeks following",
"title": ""
},
{
"docid": "6fdd0c7d239417234cfc4706a82b5a0f",
"text": "We propose a method of generating teaching policies for use in intelligent tutoring systems (ITS) for concept learning tasks <xref ref-type=\"bibr\" rid=\"ref1\">[1]</xref> , e.g., teaching students the meanings of words by showing images that exemplify their meanings à la Rosetta Stone <xref ref-type=\"bibr\" rid=\"ref2\">[2]</xref> and Duo Lingo <xref ref-type=\"bibr\" rid=\"ref3\">[3]</xref> . The approach is grounded in control theory and capitalizes on recent work by <xref ref-type=\"bibr\" rid=\"ref4\">[4] </xref> , <xref ref-type=\"bibr\" rid=\"ref5\">[5]</xref> that frames the “teaching” problem as that of finding approximately optimal teaching policies for approximately optimal learners (AOTAOL). Our work expands on <xref ref-type=\"bibr\" rid=\"ref4\">[4]</xref> , <xref ref-type=\"bibr\" rid=\"ref5\">[5]</xref> in several ways: (1) We develop a novel student model in which the teacher's actions can <italic>partially </italic> eliminate hypotheses about the curriculum. (2) With our student model, inference can be conducted <italic> analytically</italic> rather than numerically, thus allowing computationally efficient planning to optimize learning. (3) We develop a reinforcement learning-based hierarchical control technique that allows the teaching policy to search through <italic>deeper</italic> learning trajectories. We demonstrate our approach in a novel ITS for foreign language learning similar to Rosetta Stone and show that the automatically generated AOTAOL teaching policy performs favorably compared to two hand-crafted teaching policies.",
"title": ""
},
{
"docid": "e8dd0edd4ae06d53b78662f9acca09c5",
"text": "A new methodology based on mixed linear models was developed for mapping QTLs with digenic epistasis and QTL×environment (QE) interactions. Reliable estimates of QTL main effects (additive and epistasis effects) can be obtained by the maximum-likelihood estimation method, while QE interaction effects (additive×environment interaction and epistasis×environment interaction) can be predicted by the-best-linear-unbiased-prediction (BLUP) method. Likelihood ratio and t statistics were combined for testing hypotheses about QTL effects and QE interactions. Monte Carlo simulations were conducted for evaluating the unbiasedness, accuracy, and power for parameter estimation in QTL mapping. The results indicated that the mixed-model approaches could provide unbiased estimates for both positions and effects of QTLs, as well as unbiased predicted values for QE interactions. Additionally, the mixed-model approaches also showed high accuracy and power in mapping QTLs with epistatic effects and QE interactions. Based on the models and the methodology, a computer software program (QTLMapper version 1.0) was developed, which is suitable for interval mapping of QTLs with additive, additive×additive epistasis, and their environment interactions.",
"title": ""
},
{
"docid": "83f88cbaed86220e0047b51c965a77ba",
"text": "There are two conflicting perspectives regarding the relationship between profanity and dishonesty. These two forms of norm-violating behavior share common causes and are often considered to be positively related. On the other hand, however, profanity is often used to express one's genuine feelings and could therefore be negatively related to dishonesty. In three studies, we explored the relationship between profanity and honesty. We examined profanity and honesty first with profanity behavior and lying on a scale in the lab (Study 1; N = 276), then with a linguistic analysis of real-life social interactions on Facebook (Study 2; N = 73,789), and finally with profanity and integrity indexes for the aggregate level of U.S. states (Study 3; N = 50 states). We found a consistent positive relationship between profanity and honesty; profanity was associated with less lying and deception at the individual level and with higher integrity at the society level.",
"title": ""
},
{
"docid": "4706f9e8d9892543aaeb441c45816b24",
"text": "The mood of a text and the intention of the writer can be reflected in the typeface. However, in designing a typeface, it is difficult to keep the style of various characters consistent, especially for languages with lots of morphological variations such as Chinese. In this paper, we propose a Typeface Completion Network (TCN) which takes one character as an input, and automatically completes the entire set of characters in the same style as the input characters. Unlike existing models proposed for image-to-image translation, TCN embeds a character image into two separate vectors representing typeface and content. Combined with a reconstruction loss from the latent space, and with other various losses, TCN overcomes the inherent difficulty in designing a typeface. Also, compared to previous image-to-image translation models, TCN generates high quality character images of the same typeface with a much smaller number of model parameters. We validate our proposed model on the Chinese and English character datasets, which is paired data, and the CelebA dataset, which is unpaired data. In these datasets, TCN outperforms recently proposed state-of-the-art models for image-to-image translation. The source code of our model is available at https://github.com/yongqyu/TCN.",
"title": ""
},
{
"docid": "2b314587816255285bf985a086719572",
"text": "Tomatoes are well-known vegetables, grown and eaten around the world due to their nutritional benefits. The aim of this research was to determine the chemical composition (dry matter, soluble solids, titritable acidity, vitamin C, lycopene), the taste index and maturity in three cherry tomato varieties (Sakura, Sunstream, Mathew) grown and collected from greenhouse at different stages of ripening. The output of the analyses showed that there were significant differences in the mean values among the analysed parameters according to the stage of ripening and variety. During ripening, the content of soluble solids increases on average two times in all analyzed varieties; the highest content of vitamin C and lycopene was determined in tomatoes of Sunstream variety in red stage. The highest total acidity expressed as g of citric acid 100 g was observed in pink stage (variety Sakura) or a breaker stage (varieties Sunstream and Mathew). The taste index of the variety Sakura was higher at all analyzed ripening stages in comparison with other varieties. This shows that ripening stages have a significant effect on tomato biochemical composition along with their variety.",
"title": ""
},
{
"docid": "eac86562382c4ec9455f1422b6f50e9f",
"text": "In this paper we look at how to sparsify a graph i.e. how to reduce the edgeset while keeping the nodes intact, so as to enable faster graph clustering without sacrificing quality. The main idea behind our approach is to preferentially retain the edges that are likely to be part of the same cluster. We propose to rank edges using a simple similarity-based heuristic that we efficiently compute by comparing the minhash signatures of the nodes incident to the edge. For each node, we select the top few edges to be retained in the sparsified graph. Extensive empirical results on several real networks and using four state-of-the-art graph clustering and community discovery algorithms reveal that our proposed approach realizes excellent speedups (often in the range 10-50), with little or no deterioration in the quality of the resulting clusters. In fact, for at least two of the four clustering algorithms, our sparsification consistently enables higher clustering accuracies.",
"title": ""
},
{
"docid": "93c9ffa6c83de5fece14eb351315fbed",
"text": "nature protocols | VOL.7 NO.11 | 2012 | 1983 IntroDuctIon In a typical histology study, it is necessary to make thin sections of blocks of frozen or fixed tissue for microscopy. This process has major limitations for obtaining a 3D picture of structural components and the distribution of cells within tissues. For example, in axon regeneration studies, after labeling the injured axons, it is common that the tissue of interest (e.g., spinal cord, optic nerve) is sectioned. Subsequently, when tissue sections are analyzed under the microscope, only short fragments of axons are observed within each section; hence, the 3D information of axonal structures is lost. Because of this confusion, these fragmented axonal profiles might be interpreted as regenerated axons even though they could be spared axons1. In addition, the growth trajectories and target regions of the regenerating axons cannot be identified by visualization of axonal fragments. Similar problems could occur in cancer and immunology studies when only small fractions of target cells are observed within large organs. To avoid these limitations and problems, tissues ideally should be imaged at high spatial resolution without sectioning. However, optical imaging of thick tissues is limited mostly because of scattering of imaging light through the thick tissues, which contain various cellular and extracellular structures with different refractive indices. The imaging light traveling through different structures scatters and loses its excitation and emission efficiency, resulting in a lower resolution and imaging depth2,3. Optical clearing of tissues by organic solvents, which make the biological tissue transparent by matching the refractory indexes of different tissue layers to the solvent, has become a prominent method for imaging thick tissues2,4. In cleared tissues, the imaging light does not scatter and travels unobstructed throughout the different tissue layers. For this purpose, the first tissue clearing method was developed about a century ago by Spalteholz, who used a mixture of benzyl alcohol and methyl salicylate to clear large organs such as the heart5,6. In general, the first step of tissue clearing is tissue dehydration, owing to the low refractive index of water compared with cellular structures containing proteins and lipids4. Subsequently, dehydrated tissue is impregnated with an optical clearing agent, such as glucose7, glycerol8, benzyl alcohol–benzyl benzoate (BABB, also known as Murray’s clear)4,9–13 or dibenzyl ether (DBE)13,14, which have approximately the same refractive index as the impregnated tissue. At the end of the clearing procedure, the cleared tissue hardens and turns transparent, and thus resembles glass.",
"title": ""
},
{
"docid": "6f22283e5142035d6f6f9d5e06ab1cd2",
"text": "We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing approaches based on CNN. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations.",
"title": ""
}
] | scidocsrr |
1cf79b316f5fa8001a961a72d59179b6 | Beyond the Prince : Race and Gender Role Portrayal in | [
{
"docid": "b4dcc5c36c86f9b1fef32839d3a1484d",
"text": "The popular Disney Princess line includes nine films (e.g., Snow White, Beauty and the Beast) and over 25,000 marketable products. Gender role depictions of the prince and princess characters were examined with a focus on their behavioral characteristics and climactic outcomes in the films. Results suggest that the prince and princess characters differ in their portrayal of traditionally masculine and feminine characteristics, these gender role portrayals are complex, and trends towards egalitarian gender roles are not linear over time. Content coding analyses demonstrate that all of the movies portray some stereotypical representations of gender, including the most recent film, The Princess and the Frog. Although both the male and female roles have changed over time in the Disney Princess line, the male characters exhibit more androgyny throughout and less change in their gender role portrayals.",
"title": ""
}
] | [
{
"docid": "c4183c8b08da8d502d84a650d804cac8",
"text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>",
"title": ""
},
{
"docid": "e066f0670583195b9ad2f3c888af1dd2",
"text": "Deep learning has received much attention as of the most powerful approaches for multimodal representation learning in recent years. An ideal model for multimodal data can reason about missing modalities using the available ones, and usually provides more information when multiple modalities are being considered. All the previous deep models contain separate modality-specific networks and find a shared representation on top of those networks. Therefore, they only consider high level interactions between modalities to find a joint representation for them. In this paper, we propose a multimodal deep learning framework (MDLCW) that exploits the cross weights between representation of modalities, and try to gradually learn interactions of the modalities in a deep network manner (from low to high level interactions). Moreover, we theoretically show that considering these interactions provide more intra-modality information, and introduce a multi-stage pre-training method that is based on the properties of multi-modal data. In the proposed framework, as opposed to the existing deep methods for multi-modal data, we try to reconstruct the representation of each modality at a given level, with representation of other modalities in the previous layer. Extensive experimental results show that the proposed model outperforms state-of-the-art information retrieval methods for both image and text queries on the PASCAL-sentence and SUN-Attribute databases.",
"title": ""
},
{
"docid": "932813bc4a6ccbb81c9a9698b96f3694",
"text": "The fast growing deep learning technologies have become the main solution of many machine learning problems for medical image analysis. Deep convolution neural networks (CNNs), as one of the most important branch of the deep learning family, have been widely investigated for various computer-aided diagnosis tasks including long-term problems and continuously emerging new problems. Image contour detection is a fundamental but challenging task that has been studied for more than four decades. Recently, we have witnessed the significantly improved performance of contour detection thanks to the development of CNNs. Beyond purusing performance in existing natural image benchmarks, contour detection plays a particularly important role in medical image analysis. Segmenting various objects from radiology images or pathology images requires accurate detection of contours. However, some problems, such as discontinuity and shape constraints, are insufficiently studied in CNNs. It is necessary to clarify the challenges to encourage further exploration. The performance of CNN based contour detection relies on the state-of-the-art CNN architectures. Careful investigation of their design principles and motivations is critical and beneficial to contour detection. In this paper, we first review recent development of medical image contour detection and point out the current confronting challenges and problems. We discuss the development of general CNNs and their applications in image contours (or edges) detection. We compare those methods in detail, clarify their strengthens and weaknesses. Then we review their recent applications in medical image analysis and point out limitations, with the goal to light some potential directions in medical image analysis. We expect the paper to cover comprehensive technical ingredients of advanced CNNs to enrich the study in the medical image domain. 1E-mail: [email protected] Preprint submitted to arXiv August 26, 2018 ar X iv :1 70 8. 07 28 1v 1 [ cs .C V ] 2 4 A ug 2 01 7",
"title": ""
},
{
"docid": "2b68a925b9056e150a67d794b993e7c7",
"text": "The rise and development of O2O e-commerce has brought new opportunities for the enterprise, and also proposed the new challenge to the traditional electronic commerce. The formation process of customer loyalty of O2O e-commerce environment is a complex psychological process. This paper will combine the characteristics of O2O e-commerce, customer's consumer psychology and consumer behavior characteristics to build customer loyalty formation mechanism model which based on the theory of reasoned action model. The related factors of the model including the customer perceived value, customer satisfaction, customer trust and customer switching costs. By exploring the factors affecting customer’ loyalty of O2O e-commerce can provide reference and basis for enterprises to develop e-commerce and better for O2O e-commerce enterprises to develop marketing strategy and enhance customer loyalty. At the end of this paper will also put forward some targeted suggestions for O2O e-commerce enterprises.",
"title": ""
},
{
"docid": "d4f47babcd5840a3f2b5614244835c94",
"text": "This paper presents new in-line pseudoelliptic bandpass filters with nonresonating nodes. Microwave bandpass filters based on dual- and triple-mode cavities are introduced. In each case, the transmission zeros (TZs) are individually generated and controlled by dedicated resonators. Dual- and triple-mode cavities are kept homogeneous and contain no coupling or tuning elements. A third-order filter with a TZ extracted at its center is designed by cascading two dual-mode cavities. A direct design technique of this filter is introduced and shown to produce accurate initial designs for narrow-band cases. A six-pole filter is designed by cascading two triple-mode cavities. Measured results are presented to demonstrate the validity of this novel approach.",
"title": ""
},
{
"docid": "78d33d767f9eb15ef79a6d016ffcfb3a",
"text": "Healthcare scientific applications, such as body area network, require of deploying hundreds of interconnected sensors to monitor the health status of a host. One of the biggest challenges is the streaming data collected by all those sensors, which needs to be processed in real time. Follow-up data analysis would normally involve moving the collected big data to a cloud data center for status reporting and record tracking purpose. Therefore, an efficient cloud platform with very elastic scaling capacity is needed to support such kind of real time streaming data applications. The current cloud platform either lacks of such a module to process streaming data, or scales in regard to coarse-grained compute nodes. In this paper, we propose a task-level adaptive MapReduce framework. This framework extends the generic MapReduce architecture by designing each Map and Reduce task as a consistent running loop daemon. The beauty of this new framework is the scaling capability being designed at the Map and Task level, rather than being scaled from the compute-node level. This strategy is capable of not only scaling up and down in real time, but also leading to effective use of compute resources in cloud data center. As a first step towards implementing this framework in real cloud, we developed a simulator that captures workload strength, and provisions the amount of Map and Reduce tasks just in need and in real time. To further enhance the framework, we applied two streaming data workload prediction methods, smoothing and Kalman filter, to estimate the unknown workload characteristics. We see 63.1% performance improvement by using the Kalman filter method to predict the workload. We also use real streaming data workload trace to test the framework. Experimental results show that this framework schedules the Map and Reduce tasks very efficiently, as the streaming data changes its arrival rate. © 2014 Elsevier B.V. All rights reserved. ∗ Corresponding author at: Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. Tel.: +1",
"title": ""
},
{
"docid": "f8e6f97f5c797d490e2490dad676f62a",
"text": "Both patients and clinicians may incorrectly diagnose vulvovaginitis symptoms. Patients often self-treat with over-the-counter antifungals or home remedies, although they are unable to distinguish among the possible causes of their symptoms. Telephone triage practices and time constraints on office visits may also hamper effective diagnosis. This review is a guide to distinguish potential causes of vulvovaginal symptoms. The first section describes both common and uncommon conditions associated with vulvovaginitis, including infectious vulvovaginitis, allergic contact dermatitis, systemic dermatoses, rare autoimmune diseases, and neuropathic vulvar pain syndromes. The focus is on the clinical presentation, specifically 1) the absence or presence and characteristics of vaginal discharge; 2) the nature of sensory symptoms (itch and/or pain, localized or generalized, provoked, intermittent, or chronic); and 3) the absence or presence of mucocutaneous changes, including the types of lesions observed and the affected tissue. Additionally, this review describes how such features of the clinical presentation can help identify various causes of vulvovaginitis.",
"title": ""
},
{
"docid": "9152c55c35305bcaf56bc586e87f1575",
"text": "Information practices that use personal, financial, and health-related information are governed by US laws and regulations to prevent unauthorized use and disclosure. To ensure compliance under the law, the security and privacy requirements of relevant software systems must properly be aligned with these regulations. However, these regulations describe stakeholder rules, called rights and obligations, in complex and sometimes ambiguous legal language. These \"rules\" are often precursors to software requirements that must undergo considerable refinement and analysis before they become implementable. To support the software engineering effort to derive security requirements from regulations, we present a methodology for directly extracting access rights and obligations from regulation texts. The methodology provides statement-level coverage for an entire regulatory document to consistently identify and infer six types of data access constraints, handle complex cross references, resolve ambiguities, and assign required priorities between access rights and obligations to avoid unlawful information disclosures. We present results from applying this methodology to the entire regulation text of the US Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule.",
"title": ""
},
{
"docid": "56fc185890f9bbf391e2617e0967e736",
"text": "Automated Facial Expression Recognition has remained a challenging and interesting problem in computer vision. The recognition of facial expressions is difficult problem for machine learning techniques, since people can vary significantly in the way they show their expressions. Deep learning is a new area of research within machine learning method which can classify images of human faces into emotion categories using Deep Neural Networks (DNN). Convolutional neural networks (CNN) have been widely used to overcome the difficulties in facial expression classification. In this paper, we present a new architecture network based on CNN for facial expressions recognition. We fine tuned our architecture with Visual Geometry Group model (VGG) to improve results. To evaluate our architecture we tested it with many largely public databases (CK+, MUG, and RAFD). Obtained results show that the CNN approach is very effective in image expression recognition on many public databases which achieve an improvements in facial expression analysis.",
"title": ""
},
{
"docid": "26b592326edeac03578d8b52ce33f2e2",
"text": "This paper proposes a model of information aesthetics in the context of information visualization. It addresses the need to acknowledge a recently emerging number of visualization projects that combine information visualization techniques with principles of creative design. The proposed model contributes to a better understanding of information aesthetics as a potentially independent research field within visualization that specifically focuses on the experience of aesthetics, dataset interpretation and interaction. The proposed model is based on analysing existing visualization techniques by their interpretative intent and data mapping inspiration. It reveals information aesthetics as the conceptual link between information visualization and visualization art, and includes the fields of social and ambient visualization. This model is unique in its focus on aesthetics as the artistic influence on the technical implementation and intended purpose of a visualization technique, rather than subjective aesthetic judgments of the visualization outcome. This research provides a framework for understanding aesthetics in visualization, and allows for new design guidelines and reviewing criteria.",
"title": ""
},
{
"docid": "89c3f876494506aceeb9b9ccf0da0ff1",
"text": "With the prevalence of accessible depth sensors, dynamic human body skeletons have attracted much attention as a robust modality for action recognition. Previous methods model skeletons based on RNN or CNN, which has limited expressive power for irregular joints. In this paper, we represent skeletons naturally on graphs and propose a generalized graph convolutional neural networks (GGCN) for skeleton-based action recognition, aiming to capture space-time variation via spectral graph theory. In particular, we construct a generalized graph over consecutive frames, where each joint is not only connected to its neighboring joints in the same frame strongly or weakly, but also linked with relevant joints in the previous and subsequent frames. The generalized graphs are then fed into GGCN along with the coordinate matrix of the skeleton sequence for feature learning, where we deploy high-order and fast Chebyshev approximation of spectral graph convolution in the network. Experiments show that we achieve the state-of-the-art performance on the widely used NTU RGB+D, UT-Kinect and SYSU 3D datasets.",
"title": ""
},
{
"docid": "fa240a48947a43b9130ee7f48c3ad463",
"text": "Content distribution on today's Internet operates primarily in two modes: server-based and peer-to-peer (P2P). To leverage the advantages of both modes while circumventing their key limitations, a third mode: peer-to-server/peer (P2SP) has emerged in recent years. Although P2SP can provide efficient hybrid server-P2P content distribution, P2SP generally works in a closed manner by only utilizing its private owned servers to accelerate its private organized peer swarms. Consequently, P2SP still has its limitations in both content abundance and server bandwidth. To this end, the fourth mode (or says a generalized mode of P2SP) has appeared as \"open-P2SP\" that integrates various third-party servers, contents, and data transfer protocols all over the Internet into a large, open, and federated P2SP platform. In this paper, based on a large-scale commercial open-P2SP system named \"QQXuanfeng\" , we investigate the key challenging problems, practical designs and real-world performances of open-P2SP. Such \"white-box\" study of open-P2SP provides solid experiences and helpful heuristics to the designers of similar systems.",
"title": ""
},
{
"docid": "82edffdadaee9ac0a5b11eb686e109a1",
"text": "This paper highlights different security threats and vulnerabilities that is being challenged in smart-grid utilizing Distributed Network Protocol (DNP3) as a real time communication protocol. Experimentally, we will demonstrate two scenarios of attacks, unsolicited message attack and data set injection. The experiments were run on a computer virtual environment and then simulated in DETER testbed platform. The use of intrusion detection system will be necessary to identify attackers targeting different part of the smart grid infrastructure. Therefore, mitigation techniques will be used to ensure a healthy check of the network and we will propose the use of host-based intrusion detection agent at each Intelligent Electronic Device (IED) for the purpose of detecting the intrusion and mitigating it. Performing attacks, attack detection, prevention and counter measures will be our primary goal to achieve in this research paper.",
"title": ""
},
{
"docid": "aa5d6e57350c2c1082091c62b6a941e8",
"text": "MEC is an emerging paradigm that provides computing, storage, and networking resources within the edge of the mobile RAN. MEC servers are deployed on a generic computing platform within the RAN, and allow for delay-sensitive and context-aware applications to be executed in close proximity to end users. This paradigm alleviates the backhaul and core network and is crucial for enabling low-latency, high-bandwidth, and agile mobile services. This article envisions a real-time, context-aware collaboration framework that lies at the edge of the RAN, comprising MEC servers and mobile devices, and amalgamates the heterogeneous resources at the edge. Specifically, we introduce and study three representative use cases ranging from mobile edge orchestration, collaborative caching and processing, and multi-layer interference cancellation. We demonstrate the promising benefits of the proposed approaches in facilitating the evolution to 5G networks. Finally, we discuss the key technical challenges and open research issues that need to be addressed in order to efficiently integrate MEC into the 5G ecosystem.",
"title": ""
},
{
"docid": "74aaf19d143d86b52c09e726a70a2ac0",
"text": "This paper presents simulation and experimental investigation results of steerable integrated lens antennas (ILAs) operating in the 60 GHz frequency band. The feed array of the ILAs is comprised by four switched aperture coupled microstrip antenna (ACMA) elements that allows steering between four different antenna main beam directions in one plane. The dielectric lenses of the designed ILAs are extended hemispherical quartz (ε = 3.8) lenses with the radiuses of 7.5 and 12.5 mm. The extension lengths of the lenses are selected through the electromagnetic optimization in order to achieve the maximum ILAs directivities and also the minimum directivity degradations of the outer antenna elements in the feed array (± 3 mm displacement) relatively to the inner ones (± 1 mm displacement). Simulated maximum directivities of the boresight beam of the designed ILAs are 19.8 dBi and 23.8 dBi that are sufficient for the steerable antennas for the millimeter-wave WLAN/WPAN communication systems. The feed ACMA array together with the waveguide to microstrip transition dedicated for experimental investigations is fabricated on high frequency and low cost Rogers 4003C substrate. Single Pole Double Through (SPDT) switches from Hittite are used in order to steer the ILA prototypes main beam directions. The experimental results of the fabricated electronically steerable quartz ILA prototypes prove the simulation results and show ±35° and ±22° angle sector coverage for the lenses with the 7.5 and 12.5 mm radiuses respectively.",
"title": ""
},
{
"docid": "508ce0c5126540ad7f46b8f375c50df8",
"text": "Sex differences in children’s toy preferences are thought by many to arise from gender socialization. However, evidence from patients with endocrine disorders suggests that biological factors during early development (e.g., levels of androgens) are influential. In this study, we found that vervet monkeys (Cercopithecus aethiops sabaeus) show sex differences in toy preferences similar to those documented previously in children. The percent of contact time with toys typically preferred by boys (a car and a ball) was greater in male vervets (n = 33) than in female vervets (n = 30) (P < .05), whereas the percent of contact time with toys typically preferred by girls (a doll and a pot) was greater in female vervets than in male vervets (P < .01). In contrast, contact time with toys preferred equally by boys and girls (a picture book and a stuffed dog) was comparable in male and female vervets. The results suggest that sexually differentiated object preferences arose early in human evolution, prior to the emergence of a distinct hominid lineage. This implies that sexually dimorphic preferences for features (e.g., color, shape, movement) may have evolved from differential selection pressures based on the different behavioral roles of males and females, and that evolved object feature preferences may contribute to present day sexually dimorphic toy preferences in children. D 2002 Elsevier Science Inc. All rights reserved.",
"title": ""
},
{
"docid": "f21b0f519f4bf46cb61b2dc2861014df",
"text": "Player experience is difficult to evaluate and report, especially using quantitative methodologies in addition to observations and interviews. One step towards tying quantitative physiological measures of player arousal to player experience reports are Biometric Storyboards (BioSt). They can visualise meaningful relationships between a player's physiological changes and game events. This paper evaluates the usefulness of BioSt to the game industry. We presented the Biometric Storyboards technique to six game developers and interviewed them about the advantages and disadvantages of this technique.",
"title": ""
},
{
"docid": "2cc1afe86873bb7d83e919d25fbd5954",
"text": "Cellular Automata (CA) have attracted growing attention in urban simulation because their capability in spatial modelling is not fully developed in GIS. This paper discusses how cellular automata (CA) can be extended and integrated with GIS to help planners to search for better urban forms for sustainable development. The cellular automata model is built within a grid-GIS system to facilitate easy access to GIS databases for constructing the constraints. The essence of the model is that constraint space is used to regulate cellular space. Local, regional and global constraints play important roles in a ecting modelling results. In addition, ‘grey’ cells are de ned to represent the degrees or percentages of urban land development during the iterations of modelling for more accurate results. The model can be easily controlled by the parameter k using a power transformation function for calculating the constraint scores. It can be used as a useful planning tool to test the e ects of di erent urban development scenarios. 1. Cellular automata and GIS for urban simulation Cellular automata (CA) were developed by Ulam in the 1940s and soon used by Von Neumann to investigate the logical nature of self-reproducible systems (White and Engelen 1993). A CA system usually consists of four elements—cells, states, neighbourhoods and rules. Cells are the smallest units which must manifest some adjacency or proximity. The state of a cell can change according to transition rules which are de ned in terms of neighbourhood functions. The notion of neighbourhood is central to the CA paradigm (Couclelis 1997), but the de nition of neighbourhood is rather relaxed. CA are cell-based methods that can model two-dimensional space. Because of this underlying feature, it does not take long for geographers to apply CA to simulate land use change, urban development and other changes of geographical phenomena. CA have become especially, useful as a tool for modelling urban spatial dynamics and encouraging results have been documented (Deadman et al. 1993, Batty and Xie 1994a, Batty and Xie 1997, White and Engelen 1997). The advantages are that the future trajectory of urban morphology can be shown virtually during the simulation processes. The rapid development of GIS helps to foster the application of CA in urban Internationa l Journal of Geographica l Information Science ISSN 1365-8816 print/ISSN 1362-3087 online © 2000 Taylor & Francis Ltd http://www.tandf.co.uk/journals/tf/13658816.html X. L i and A. G. Yeh 132 simulation. Some researches indicate that cell-based GIS may indeed serve as a useful tool for implementing cellular automata models for the purposes of geographical analysis (Itami 1994). Although current GIS are not designed for fast iterative computation, cellular automata can still be used by creating batch ® les that contain iterative command sequences. While linking cellular automata to GIS can overcome some of the limitations of current GIS (White and Engelen 1997), CA can bene® t from the useful information provided by GIS in de® ning transition rules. The data realism requirement of CA can be best satis® ed with the aid of GIS (Couclelis 1997). Space no longer needs to be uniform since the spatial di erence equations can be easily developed in the context of GIS (Batty and Xie 1994b). Most current GIS techniques have limitations in modelling changes in the landscape over time, but the integration of CA and GIS has demonstrated considerable potential (Itami 1988, Deadman et al. 1993). 
The limitations of contemporary GIS include its poor ability to handle dynamic spatial models, poor performance for many operations, and poor handling of the temporal dimension (Park and Wagner 1997 ). In coupling GIS with CA, CA can serves as an analytical engine to provide a ̄ exible framework for the programming and running of dynamic spatial models. 2. Constrained CA for the planning of sustainable urban development Interest in sustainable urban development has increased rapidly in recent years. Unfortunately, the concept of sustainable urban development is debatable because unique de® nitions and scopes do not exist (Haughton and Hunter 1994). However, this concept is very important to our society in dealing with its increasingly pressing resource and environmental problems. As more nations are implementing this concept in their development plans, it has created important impacts on national policies and urban planning. The concern over sustainable urban development will continue to grow, especially in the developing countries which are undergoing rapid urbanization. A useful way to clarify its ambiguity is to set up some working de® nitions. Some speci® c and narrow de® nitions do exist for special circumstances but there are no commonly accepted de® nitions. The working de® nitions can help to eliminate ambiguities and ® nd out solutions and better alternatives to existing development patterns. The conversion of agricultural land into urban land uses in the urbanization processes has become a serious issue for sustainable urban development in the developing countries. Take China as an example, it cannot a ord to lose a signi® cant amount of its valuable agricultural land because it has a huge growing population to feed. Unfortunately, in recent years, a large amount of such land have been unnecessarily lost and the forms of existing urban development cannot help to sustain its further development (Yeh and Li 1997, Yeh and Li 1998). The complete depletion of agricultural land resources would not be far away in some fast growing areas if such development trends continued. The main issue of sustainable urban development is to search for better urban forms that can help to sustain development, especially the minimization of unnecessary agricultural land loss. Four operational criteria for sustainable urban forms can be used: (1 ) not to convert too much agricultural land at the early stages of development; (2 ) to decide the amount of land consumption based on available land resources and population growth; (3 ) to guide urban development to sites which are less important for food production; and (4 ) to maintain compact development patterns. The objective of this research is to develop an operational CA model for Modelling sustainable urban development 133 sustainable urban development. A number of advantages have been identi® ed in the application of CA in urban simulation (Wolfram 1984, Itami 1988). Cellular automata are seen not only as a framework for dynamic spatial modelling but as a paradigm for thinking about complex spatial-temporal phenomena and an experimental laboratory for testing ideas (Itami 1994 ). Formally, standard cellular automata may be generalised as follows: St+1 = f (St, N ) (1 ) where S is a set of all possible states of the cellular automata, N is a neighbourhood of all cells providing input values for the function f, and f is a transition function that de® nes the change of the state from t to t+1. Standard cellular automata apply a b̀ottom-up’ approach. 
The approach argues that local rules can create complex patterns by running the models in iterations. It is central to the idea that cities should work from particular to general, and that they should seek to understand the small scale in order to understand the large (Batty and Xie 1994a). It is amazing to see that real urban systems can be modelled based on microscopic behaviour that may be the CA model’s most useful advantage . However, the t̀op-down’ critique nevertheless needs to be taken seriously. An example is that central governments have the power to control overall land development patterns and the amount of land consumption. With the implementations of sustainable elements into cellular automata, a new paradigm for thinking about urban planning emerges. It is possible to embed some constraints in the transition rules of cellular automata so that urban growth can be rationalised according to a set of pre-de® ned sustainable criteria. However, such experiments are very limited since many researchers just focus on the simulation of possible urban evolution and the understanding of growth mechanisms using CA techniques. The constrained cellular automata should be able to provide much better alternatives to actual development patterns. A good example is to produce a c̀ompact’ urban form using CA models. The need for sustainable cities is readily apparent in recent years. A particular issue is to seek the most suitable form for sustainable urban development. The growing spread of urban areas accelerating at an alarming rate in the last few decades re ̄ ects the dramatic pressure of human development on nature. The steady rise in urban areas and decline in agricultural land have led to the worsening of food production and other environmental problems. Urban development towards a compact form has been proposed as a means to alleviate the increasingly intensi® ed land use con ̄ icts. The morphology of a city is an important feature in the c̀ompact city theory’ (Jenks et al. 1996). Evidence indicates a strong link between urban form and sustainable development, although it is not simple and straightforward. Compact urban form can be a major means in guiding urban development to sustainability, especially in reducing the negative e ects of the present dispersed development in Western cities. However, one of the frequent problems in the compact city debate is the lack of proper tools to ensure successful implementation of the compact city because of its complexity (Burton et al. 1996). This study demonstrates that the constrained CA can be used to model compact cities and sustainable urban forms based on local, regional and global constraints. 3. Suitability and constraints for sustainable urban forms using CA In this constrained CA model, there are three important aspects of sustainable urban forms that need to be consideredÐ compact patterns, land q",
"title": ""
},
{
"docid": "320dbbbc643ff97e97d928130a51384d",
"text": "Deep evolutionary network structured representation (DENSER) is a novel evolutionary approach for the automatic generation of deep neural networks (DNNs) which combines the principles of genetic algorithms (GAs) with those of dynamic structured grammatical evolution (DSGE). The GA-level encodes the macro structure of evolution, i.e., the layers, learning, and/or data augmentation methods (among others); the DSGE-level specifies the parameters of each GA evolutionary unit and the valid range of the parameters. The use of a grammar makes DENSER a general purpose framework for generating DNNs: one just needs to adapt the grammar to be able to deal with different network and layer types, problems, or even to change the range of the parameters. DENSER is tested on the automatic generation of convolutional neural networks (CNNs) for the CIFAR-10 dataset, with the best performing networks reaching accuracies of up to 95.22%. Furthermore, we take the fittest networks evolved on the CIFAR-10, and apply them to classify MNIST, Fashion-MNIST, SVHN, Rectangles, and CIFAR-100. The results show that the DNNs discovered by DENSER during evolution generalise, are robust, and scale. The most impressive result is the 78.75% classification accuracy on the CIFAR-100 dataset, which, to the best of our knowledge, sets a new state-of-the-art on methods that seek to automatically design CNNs.",
"title": ""
},
{
"docid": "41c99f4746fc299ae886b6274f899c4b",
"text": "The disruptive power of blockchain technologies represents a great opportunity to re-imagine standard practices of providing radio access services by addressing critical areas such as deployment models that can benefit from brand new approaches. As a starting point for this debate, we look at the current limits of infrastructure sharing, and specifically at the Small-Cell-as-a-Service trend, asking ourselves how we could push it to its natural extreme: a scenario in which any individual home or business user can become a service provider for mobile network operators (MNOs), freed from all the scalability and legal constraints that are inherent to the current modus operandi. We propose the adoption of smart contracts to implement simple but effective Service Level Agreements (SLAs) between small cell providers and MNOs, and present an example contract template based on the Ethereum blockchain.",
"title": ""
}
] | scidocsrr |
550b38cef95967bdb7e4dbed990f0777 | Twitter-Based User Modeling for News Recommendations | [
{
"docid": "f8e20046f9ad2e4ef63339f7c611e815",
"text": "We propose and evaluate a probabilistic framework for estimating a Twitter user's city-level location based purely on the content of the user's tweets, even in the absence of any other geospatial cues. By augmenting the massive human-powered sensing capabilities of Twitter and related microblogging services with content-derived location information, this framework can overcome the sparsity of geo-enabled features in these services and enable new location-based personalized information services, the targeting of regional advertisements, and so on. Three of the key features of the proposed approach are: (i) its reliance purely on tweet content, meaning no need for user IP information, private login information, or external knowledge bases; (ii) a classification component for automatically identifying words in tweets with a strong local geo-scope; and (iii) a lattice-based neighborhood smoothing model for refining a user's location estimate. The system estimates k possible locations for each user in descending order of confidence. On average we find that the location estimates converge quickly (needing just 100s of tweets), placing 51% of Twitter users within 100 miles of their actual location.",
"title": ""
},
{
"docid": "2936f8e1f9a6dcf2ba4fdbaee73684e2",
"text": "Recently the world of the web has become more social and more real-time. Facebook and Twitter are perhaps the exemplars of a new generation of social, real-time web services and we believe these types of service provide a fertile ground for recommender systems research. In this paper we focus on one of the key features of the social web, namely the creation of relationships between users. Like recent research, we view this as an important recommendation problem -- for a given user, UT which other users might be recommended as followers/followees -- but unlike other researchers we attempt to harness the real-time web as the basis for profiling and recommendation. To this end we evaluate a range of different profiling and recommendation strategies, based on a large dataset of Twitter users and their tweets, to demonstrate the potential for effective and efficient followee recommendation.",
"title": ""
}
] | [
{
"docid": "9869bc5dfc8f20b50608f0d68f7e49ba",
"text": "Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of “objectness”.",
"title": ""
},
{
"docid": "1c89b9927bd5e81c53a9896cd3122b92",
"text": "The whole world is changed rapidly and using the current technologies Internet becomes an essential need for everyone. Web is used in every field. Most of the people use web for a common purpose like online shopping, chatting etc. During an online shopping large number of reviews/opinions are given by the users that reflect whether the product is good or bad. These reviews need to be explored, analyse and organized for better decision making. Opinion Mining is a natural language processing task that deals with finding orientation of opinion in a piece of text with respect to a topic. In this paper a document based opinion mining system is proposed that classify the documents as positive, negative and neutral. Negation is also handled in the proposed system. Experimental results using reviews of movies show the effectiveness of the system.",
"title": ""
},
{
"docid": "1e74b6331730fac83481aa431feecf46",
"text": "A widely cited 1993 Computer article described failures in a software-controlled radiation machine that massively overdosed six people in the late 1980s, resulting in serious injury and fatalities. How far have safety-critical systems come since then?",
"title": ""
},
{
"docid": "6080c2ede3a8fb37b9c162d0ce815b3f",
"text": "The successes of deep learning in recent years has been fueled by the development of innovative new neural network architectures. However, the design of a neural network architecture remains a difficult problem, requiring significant human expertise as well as computational resources. In this paper, we propose a method for transforming a discrete neural network architecture space into a continuous and differentiable form, which enables the use of standard gradient-based optimization techniques for this problem, and allows us to learn the architecture and the parameters simultaneously. We evaluate our methods on the Udacity steering angle prediction dataset, and show that our method can discover architectures with similar or better predictive accuracy but significantly fewer parameters and smaller computational cost.",
"title": ""
},
{
"docid": "a80e3d5ee1d158295378671fcc3ea4fb",
"text": "We review the task of Sentence Pair Scoring, popular in the literature in various forms — viewed as Answer Sentence Selection, Semantic Text Scoring, Next Utterance Ranking, Recognizing Textual Entailment, Paraphrasing or e.g. a component of Memory Networks. We argue that all such tasks are similar from the model perspective and propose new baselines by comparing the performance of common IR metrics and popular convolutional, recurrent and attentionbased neural models across many Sentence Pair Scoring tasks and datasets. We discuss the problem of evaluating randomized models, propose a statistically grounded methodology, and attempt to improve comparisons by releasing new datasets that are much harder than some of the currently used well explored benchmarks. We introduce a unified open source software framework with easily pluggable models and tasks, which enables us to experiment with multi-task reusability of trained sentence models.",
"title": ""
},
{
"docid": "cbd6e6c75cae86426c21a38bd523200f",
"text": "Schottky junctions have been realized by evaporating gold spots on top of sexithiophen (6T), which is deposited on TiO 2 or ZnO with e-beam and spray pyrolysis. Using Mott-Schottky analysis of 6T/TiO2 and 6T/ZnO devices acceptor densities of 4.5x10(16) and 3.7x10(16) cm(-3) are obtained, respectively. For 6T/TiO2 deposited with the e-beam evaporation a conductivity of 9x10(-8) S cm(-1) and a charge carrier mobility of 1.2x10(-5) cm2/V s is found. Impedance spectroscopy is used to model the sample response in detail in terms of resistances and capacitances. An equivalent circuit is derived from the impedance measurements. The high-frequency data are analyzed in terms of the space-charge capacitance. In these frequencies shallow acceptor states dominate the heterojunction time constant. The high-frequency RC time constant is 8 micros. Deep acceptor states are represented by a resistance and a CPE connected in series. The equivalent circuit is validated in the potential range (from -1.2 to 0.8 V) for 6T/ZnO obtained with spray pyrolysis.",
"title": ""
},
{
"docid": "95d5229599fcf91b7ea302aa5dafee2a",
"text": "The more the telecom services marketing paradigm evolves, the more important it becomes to retain high value customers. Traditional customer segmentation methods based on experience or ARPU (Average Revenue per User) consider neither customers’ future revenue nor the cost of servicing customers of different types. Therefore, it is very difficult to effectively identify high-value customers. In this paper, we propose a novel customer segmentation method based on customer lifecycle, which includes five decision models, i.e. current value, historic value, prediction of long-term value, credit and loyalty. Due to the difficulty of quantitative computation of long-term value, credit and loyalty, a decision tree method is used to extract important parameters related to long-term value, credit and loyalty. Then a judgments matrix formulated on the basis of characteristics of data and the experience of business experts is presented. Finally a simple and practical customer value evaluation system is built. This model is applied to telecom operators in a province in China and good accuracy is achieved. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "de5c439731485929416b0e729f7f79b2",
"text": "The feedback dynamics from mosquito to human and back to mosquito involve considerable time delays due to the incubation periods of the parasites. In this paper, taking explicit account of the incubation periods of parasites within the human and the mosquito, we first propose a delayed Ross-Macdonald model. Then we calculate the basic reproduction number R0 and carry out some sensitivity analysis of R0 on the incubation periods, that is, to study the effect of time delays on the basic reproduction number. It is shown that the basic reproduction number is a decreasing function of both time delays. Thus, prolonging the incubation periods in either humans or mosquitos (via medicine or control measures) could reduce the prevalence of infection.",
"title": ""
},
{
"docid": "6c9f3107fbf14f5bef1b8edae1b9d059",
"text": "Syntax definitions are pervasive in modern software systems, and serve as the basis for language processing tools like parsers and compilers. Mainstream parser generators pose restrictions on syntax definitions that follow from their implementation algorithm. They hamper evolution, maintainability, and compositionality of syntax definitions. The pureness and declarativity of syntax definitions is lost. We analyze how these problems arise for different aspects of syntax definitions, discuss their consequences for language engineers, and show how the pure and declarative nature of syntax definitions can be regained.",
"title": ""
},
{
"docid": "9d319b7bfdf43b05aa79f67c990ccb73",
"text": "Queries are the foundations of data intensive applications. In model-driven software engineering (MDE), model queries are core technologies of tools and transformations. As software models are rapidly increasing in size and complexity, traditional tools exhibit scalability issues that decrease productivity and increase costs [17]. While scalability is a hot topic in the database community and recent NoSQL efforts have partially addressed many shortcomings, this happened at the cost of sacrificing the ad-hoc query capabilities of SQL. Unfortunately, this is a critical problem for MDE applications due to their inherent workload complexity. In this paper, we aim to address both the scalability and ad-hoc querying challenges by adapting incremental graph search techniques – known from the EMF-IncQuery framework – to a distributed cloud infrastructure. We propose a novel architecture for distributed and incremental queries, and conduct experiments to demonstrate that IncQuery-D, our prototype system, can scale up from a single workstation to a cluster that can handle very large models and complex incremental queries efficiently.",
"title": ""
},
{
"docid": "ea01ef46670d4bb8244df0d6ab08a3d5",
"text": "In this paper, statics model of an underactuated wire-driven flexible robotic arm is introduced. The robotic arm is composed of a serpentine backbone and a set of controlling wires. It has decoupled bending rigidity and axial rigidity, which enables the robot large axial payload capacity. Statics model of the robotic arm is developed using the Newton-Euler method. Combined with the kinematics model, the robotic arm deformation as well as the wire motion needed to control the robotic arm can be obtained. The model is validated by experiments. Results show that, the proposed model can well predict the robotic arm bending curve. Also, the bending curve is not affected by the wire pre-tension. This enables the wire-driven robotic arm with potential applications in minimally invasive surgical operations.",
"title": ""
},
{
"docid": "fada1434ec6e060eee9a2431688f82f3",
"text": "Neural language models (NLMs) have been able to improve machine translation (MT) thanks to their ability to generalize well to long contexts. Despite recent successes of deep neural networks in speech and vision, the general practice in MT is to incorporate NLMs with only one or two hidden layers and there have not been clear results on whether having more layers helps. In this paper, we demonstrate that deep NLMs with three or four layers outperform those with fewer layers in terms of both the perplexity and the translation quality. We combine various techniques to successfully train deep NLMs that jointly condition on both the source and target contexts. When reranking nbest lists of a strong web-forum baseline, our deep models yield an average boost of 0.5 TER / 0.5 BLEU points compared to using a shallow NLM. Additionally, we adapt our models to a new sms-chat domain and obtain a similar gain of 1.0 TER / 0.5 BLEU points.",
"title": ""
},
{
"docid": "63c1747c8803802e9d4cbc7d6231fa1a",
"text": "Crowdfunding is an alternative model for project financing, whereby a large and dispersed audience participates through relatively small financial contributions, in exchange for physical, financial or social rewards. It is usually done via Internet-based platforms that act as a bridge between the crowd and the projects. Over the past few years, academics have explored this topic, both empirically and theoretically. However, the mixed findings and array of theories used have come to warrant a critical review of past works. To this end, we perform a systematic review of the literature on crowdfunding and seek to extract (1) the key management theories that have been applied in the context of crowdfunding and how these have been extended, and (2) the principal factors contributing to success for the different crowdfunding models, where success entails both fundraising and timely repayment. In the process, we offer a comprehensive definition of crowdfunding and identify avenues for future research based on the gaps and conflicting results in the literature.",
"title": ""
},
{
"docid": "9db388f2564a24f58d8ea185e5b514be",
"text": "Analyzing large volumes of log events without some kind of classification is undoable nowadays due to the large amount of events. Using AI to classify events make these log events usable again. With the use of the Keras Deep Learning API, which supports many Optimizing Stochastic Gradient Decent algorithms, better known as optimizers, this research project tried these algorithms in a Long Short-Term Memory (LSTM) network, which is a variant of the Recurrent Neural Networks. These algorithms have been applied to classify and update event data stored in Elastic-Search. The LSTM network consists of five layers where the output layer is a Dense layer using the Softmax function for evaluating the AI model and making the predictions. The Categorical Cross-Entropy is the algorithm used to calculate the loss. For the same AI model, different optimizers have been used to measure the accuracy and the loss. Adam was found as the best choice with an accuracy of 29,8%.",
"title": ""
},
{
"docid": "9f643ce1a8b6c8cb2a0f335d6e950a2d",
"text": "A girl of 4 years and 5 months of age was admitted as an outpatient to perform ultrasound because of painless vaginal bleeding for 2–3 days. First, on transabdominal US, the kidneys, bladder, and uterus appeared normal. A translabial perineal approach revealed a hyperaemic mass (Fig. 1a, b). One day after, MRI was performed to exclude a possible aggressive lesion. MRI revealed the mass to be in apparent continuity with the urethra (Fig. 1c). Inspection under anaesthesia revealed a prolapsed urethra (Fig. 2): a doughnut of red and purple tissue surrounded the urethra, obscuring the hymeneal orifice. The patient was treated by undergoing a resection of the prolapsed mucosa and reocclusion of the mucous membrane around a Foley catheter. Three months later we saw: normal external genitalia, normal external urethral meatus, and no mucosal ectropion. We did not see any pathological secretions.",
"title": ""
},
{
"docid": "c7d1fe6e9fa7acc54da8a8ab6030e48f",
"text": "An ongoing challenge in electrical engineering is the design of antennas whose size is small compared to the broadcast wavelength λ. One difficulty is that the radiation resistance of a small antenna is small compared to that of the typical transmission lines that feed the antenna, so that much of the power in the feed line is reflected off the antenna rather than radiated unless a matching network is used at the antenna terminals (with a large inductance for a small dipole antenna and a large capacitance for a small loop antenna). The radiation resistance of an antenna that emits dipole radiation is proportional to the square of the peak (electric or magnetic) dipole moment of the antenna. This dipole moment is roughly the product of the peak charge times the length of the antenna in the case of a linear (electric) antenna, and is the product of the peak current times the area of the antenna in the case of a loop (magnetic) antenna. Hence, it is hard to increase the radiation resistance of small linear or loop antennas by altering their shapes. One suggestion for a small antenna is the so-called “crossed-field” antenna [2]. Its proponents are not very explicit as to the design of this antenna, so this problem is based on a conjecture as to its motivation. It is well known that in the far zone of a dipole antenna the electric and magnetic fields have equal magnitudes (in Gaussian units), and their directions are at right angles to each other and to the direction of propagation of the radiation. Furthermore, the far zone electric and magnetic fields are in phase. The argument is, I believe, that it is desirable if these conditions could also be met in the near zone of the antenna. The proponents appear to argue that in the near zone the magnetic field B is in phase with the current in a simple, small antenna, while the electric field E is in phase with the charge, but the charge and current have a 90◦ phase difference. Hence, they imply, the electric and magnetic fields are 90◦ out of phase in the near zone, so that the radiation (which is proportional to E× B) is weak. The concept of the “crossed-field” antenna seems to be based on the use of two small antennas driven 90◦ out of phase. The expectation is that the electric field of one of the A center-fed linear dipole antenna of total length l λ has radiation resistance Rlinear = (l/λ) 197 Ω, while a circular loop antenna of diameter d λ has Rloop = (d/λ) 1948 Ω. For example, if l = d = 0.1λ then Rlinear = 2 Ω and Rloop = 0.2 Ω. That there is little advantage to so-called small fractal antennas is explored in [1]. A variant based on combining a small electric dipole antenna with a small magnetic dipole (loop) antenna has been proposed by [3].",
"title": ""
},
{
"docid": "1648a759d2487177af4b5d62407fd6cd",
"text": "This paper discusses the presence of steady-state limit cycles in digitally controlled pulse-width modulation (PWM) converters, and suggests conditions on the control law and the quantization resolution for their elimination. It then introduces single-phase and multi-phase controlled digital dither as a means of increasing the effective resolution of digital PWM (DPWM) modules, allowing for the use of low resolution DPWM units in high regulation accuracy applications. Bounds on the number of bits of dither that can be used in a particular converter are derived.",
"title": ""
},
{
"docid": "57c91bce931a23501f42772c103d15c1",
"text": "Faceted browsing is widely used in Web shops and product comparison sites. In these cases, a fixed ordered list of facets is often employed. This approach suffers from two main issues. First, one needs to invest a significant amount of time to devise an effective list. Second, with a fixed list of facets, it can happen that a facet becomes useless if all products that match the query are associated to that particular facet. In this work, we present a framework for dynamic facet ordering in e-commerce. Based on measures for specificity and dispersion of facet values, the fully automated algorithm ranks those properties and facets on top that lead to a quick drill-down for any possible target product. In contrast to existing solutions, the framework addresses e-commerce specific aspects, such as the possibility of multiple clicks, the grouping of facets by their corresponding properties, and the abundance of numeric facets. In a large-scale simulation and user study, our approach was, in general, favorably compared to a facet list created by domain experts, a greedy approach as baseline, and a state-of-the-art entropy-based solution.",
"title": ""
},
{
"docid": "6149a6aaa9c39a1e02ab8fbe64fcb62b",
"text": "The thoracic diaphragm is a dome-shaped septum, composed of muscle surrounding a central tendon, which separates the thoracic and abdominal cavities. The function of the diaphragm is to expand the chest cavity during inspiration and to promote occlusion of the gastroesophageal junction. This article provides an overview of the normal anatomy of the diaphragm.",
"title": ""
},
{
"docid": "3e8de1702f4fd5da19175c29ad2b27ad",
"text": "In this work we formulate the problem of image captioning as a multimodal translation task. Analogous to machine translation, we present a sequence-to-sequence recurrent neural networks (RNN) model for image caption generation. Different from most existing work where the whole image is represented by convolutional neural network (CNN) feature, we propose to represent the input image as a sequence of detected objects which feeds as the source sequence of the RNN model. In this way, the sequential representation of an image can be naturally translated to a sequence of words, as the target sequence of the RNN model. To represent the image in a sequential way, we extract the objects features in the image and arrange them in a order using convolutional neural networks. To further leverage the visual information from the encoded objects, a sequential attention layer is introduced to selectively attend to the objects that are related to generate corresponding words in the sentences. Extensive experiments are conducted to validate the proposed approach on popular benchmark dataset, i.e., MS COCO, and the proposed model surpasses the state-of-the-art methods in all metrics following the dataset splits of previous work. The proposed approach is also evaluated by the evaluation server of MS COCO captioning challenge, and achieves very competitive results, e.g., a CIDEr of 1.029 (c5) and 1.064 (c40).",
"title": ""
}
] | scidocsrr |
fbdb2968aa93632ac8cdbbebbe879522 | A Single-Phase Photovoltaic Inverter Topology With a Series-Connected Energy Buffer | [
{
"docid": "826612712b3a44da30e6fb7e2dba95bc",
"text": "Flyback converters show the characteristics of current source when operating in discontinuous conduction mode (DCM) and boundary conduction mode (BCM), which makes it widely used in photovoltaic grid-connected micro-inverter. In this paper, an active clamp interleaved flyback converter operating with combination of DCM and BCM is proposed in micro-inverter to achieve zero voltage switching (ZVS) for both of primary switches and fully recycle the energy in the leakage inductance. The proposed control method makes active-clamping part include only one clamp capacitor. In DCM area, only one flyback converter operates and turn-off of its auxiliary switch is suggested here to reduce resonant conduction losses, which improve the efficiency at light loads. Performance of the proposed circuit is validated by the simulation results and experimental results.",
"title": ""
}
] | [
{
"docid": "3bc9e621a0cfa7b8791ae3fb94eff738",
"text": "This paper deals with environment perception for automobile applications. Environment perception comprises measuring the surrounding field with onboard sensors such as cameras, radar, lidars, etc., and signal processing to extract relevant information for the planned safety or assistance function. Relevant information is primarily supplied using two well-known methods, namely, object based and grid based. In the introduction, we discuss the advantages and disadvantages of the two methods and subsequently present an approach that combines the two methods to achieve better results. The first part outlines how measurements from stereo sensors can be mapped onto an occupancy grid using an appropriate inverse sensor model. We employ the Dempster-Shafer theory to describe the occupancy grid, which has certain advantages over Bayes' theorem. Furthermore, we generate clusters of grid cells that potentially belong to separate obstacles in the field. These clusters serve as input for an object-tracking framework implemented with an interacting multiple-model estimator. Thereby, moving objects in the field can be identified, and this, in turn, helps update the occupancy grid more effectively. The first experimental results are illustrated, and the next possible research intentions are also discussed.",
"title": ""
},
{
"docid": "a7c79045bcbd9fac03015295324745e3",
"text": "Image saliency detection has recently witnessed rapid progress due to deep convolutional neural networks. However, none of the existing methods is able to identify object instances in the detected salient regions. In this paper, we present a salient instance segmentation method that produces a saliency mask with distinct object instance labels for an input image. Our method consists of three steps, estimating saliency map, detecting salient object contours and identifying salient object instances. For the first two steps, we propose a multiscale saliency refinement network, which generates high-quality salient region masks and salient object contours. Once integrated with multiscale combinatorial grouping and a MAP-based subset optimization framework, our method can generate very promising salient object instance segmentation results. To promote further research and evaluation of salient instance segmentation, we also construct a new database of 1000 images and their pixelwise salient instance annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks for salient region detection as well as on our new dataset for salient instance segmentation.",
"title": ""
},
{
"docid": "d813fcc217544b522a7835e79c1e21d9",
"text": "We present a framework to synthesize character movements based on high level parameters, such that the produced movements respect the manifold of human motion, trained on a large motion capture dataset. The learned motion manifold, which is represented by the hidden units of a convolutional autoencoder, represents motion data in sparse components which can be combined to produce a wide range of complex movements. To map from high level parameters to the motion manifold, we stack a deep feedforward neural network on top of the trained autoencoder. This network is trained to produce realistic motion sequences from parameters such as a curve over the terrain that the character should follow, or a target location for punching and kicking. The feedforward control network and the motion manifold are trained independently, allowing the user to easily switch between feedforward networks according to the desired interface, without re-training the motion manifold. Once motion is generated it can be edited by performing optimization in the space of the motion manifold. This allows for imposing kinematic constraints, or transforming the style of the motion, while ensuring the edited motion remains natural. As a result, the system can produce smooth, high quality motion sequences without any manual pre-processing of the training data.",
"title": ""
},
{
"docid": "cd23b0dfd98fb42513229070035e0aa9",
"text": "Sixteen residents in long-term care with advanced dementia (14 women; average age = 88) showed significantly more constructive engagement (defined as motor or verbal behaviors in response to an activity), less passive engagement (defined as passively observing an activity), and more pleasure while participating in Montessori-based programming than in regularly scheduled activities programming. Principles of Montessori-based programming, along with examples of such programming, are presented. Implications of the study and methods for expanding the use of Montessori-based dementia programming are discussed.",
"title": ""
},
{
"docid": "a0306096725c0d4b6bdd648bfa396f13",
"text": "Graph coloring—also known as vertex coloring—considers the problem of assigning colors to the nodes of a graph such that adjacent nodes do not share the same color. The optimization version of the problem concerns the minimization of the number of colors used. In this paper we deal with the problem of finding valid graphs colorings in a distributed way, that is, by means of an algorithm that only uses local information for deciding the color of the nodes. The algorithm proposed in this paper is inspired by the calling behavior of Japanese tree frogs. Male frogs use their calls to attract females. Interestingly, groups of males that are located near each other desynchronize their calls. This is because female frogs are only able to correctly localize male frogs when their calls are not too close in time. The proposed algorithm makes use of this desynchronization behavior for the assignment of different colors to neighboring nodes. We experimentally show that our algorithm is very competitive with the current state of the art, using different sets of problem instances and comparing to one of the most competitive algorithms from the literature.",
"title": ""
},
{
"docid": "c93a401b7ed3031ed6571bfbbf1078c8",
"text": "In this paper we propose a new footstep detection technique for data acquired using a triaxial geophone. The idea evolves from the investigation of geophone transduction principle. The technique exploits the randomness of neighbouring data vectors observed when the footstep is absent. We extend the same principle for triaxial signal denoising. Effectiveness of the proposed technique for transient detection and denoising are presented for real seismic data collected using a triaxial geophone.",
"title": ""
},
{
"docid": "93afb696fa395a7f7c2a4f3fc2ac690d",
"text": "We present a framework for recognizing isolated and continuous American Sign Language (ASL) sentences from three-dimensional data. The data are obtained by using physics-based three-dimensional tracking methods and then presented as input to Hidden Markov Models (HMMs) for recognition. To improve recognition performance, we model context-dependent HMMs and present a novel method of coupling three-dimensional computer vision methods and HMMs by temporally segmenting the data stream with vision methods. We then use the geometric properties of the segments to constrain the HMM framework for recognition. We show in experiments with a 53 sign vocabulary that three-dimensional features outperform two-dimensional features in recognition performance. Furthermore, we demonstrate that contextdependent modeling and the coupling of vision methods and HMMs improve the accuracy of continuous ASL recognition.",
"title": ""
},
{
"docid": "568317c1f18c476de5029d0a1e91438e",
"text": "Plant volatiles (PVs) are lipophilic molecules with high vapor pressure that serve various ecological roles. The synthesis of PVs involves the removal of hydrophilic moieties and oxidation/hydroxylation, reduction, methylation, and acylation reactions. Some PV biosynthetic enzymes produce multiple products from a single substrate or act on multiple substrates. Genes for PV biosynthesis evolve by duplication of genes that direct other aspects of plant metabolism; these duplicated genes then diverge from each other over time. Changes in the preferred substrate or resultant product of PV enzymes may occur through minimal changes of critical residues. Convergent evolution is often responsible for the ability of distally related species to synthesize the same volatile.",
"title": ""
},
{
"docid": "4a7ed4868ff279b4d83f969076fb91e9",
"text": "Information theoretic measures form a fundamental class of measures for comparing clusterings, and have recently received increasing interest. Neverthel ss, a number of questions concerning their properties and inter-relationships remain unresolv ed. In this paper, we perform an organized study of information theoretic measures for clustering com parison, including several existing popular measures in the literature, as well as some newly propos ed nes. We discuss and prove their important properties, such as the metric property and the no rmalization property. We then highlight to the clustering community the importance of correct ing information theoretic measures for chance, especially when the data size is small compared to th e number of clusters present therein. Of the available information theoretic based measures, we a dvocate the normalized information distance (NID) as a general measure of choice, for it possess e concurrently several important properties, such as being both a metric and a normalized meas ure, admitting an exact analytical adjusted-for-chance form, and using the nominal [0,1] range better than other normalized variants.",
"title": ""
},
{
"docid": "db9b374b230ded851846655fa88fd755",
"text": "Edges are important features in an image since they represent significant local intensity changes. They provide important clues to separate regions within an object or to identify changes in illumination. point noise. The real problem is how to enhance noisy remote sensing images and simultaneously extract the edges. Using the implemented Canny edge detector for features extraction and as an enhancement tool for remote sensing images, the result was robust with a very high enhancement level.",
"title": ""
},
{
"docid": "0ef3d7b26feba199df7d466d14740a57",
"text": "A parsing algorithm visualizer is a tool that visualizes the construction of a parser for a given context-free grammar and then illustrates the use of that parser to parse a given string. Parsing algorithm visualizers are used to teach the course on compiler construction which in invariably included in all undergraduate computer science curricula. This paper presents a new parsing algorithm visualizer that can visualize six parsing algorithms, viz. predictive parsing, simple LR parsing, canonical LR parsing, look-ahead LR parsing, Earley parsing and CYK parsing. The tool logically explains the process of parsing showing the calculations involved in each step. The output of the tool has been structured to maximize the learning outcomes and contains important constructs like FIRST and FOLLOW sets, item sets, parsing table, parse tree and leftmost or rightmost derivation depending on the algorithm being visualized. The tool has been used to teach the course on compiler construction at both undergraduate and graduate levels. An overall positive feedback was received from the students with 89% of them saying that the tool helped them in understanding the parsing algorithms. The tool is capable of visualizing multiple parsing algorithms and 88% students used it to compare the algorithms.",
"title": ""
},
{
"docid": "8c95392ab3cc23a7aa4f621f474d27ba",
"text": "Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques. Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open loop reference to guide the learning process when more control over the learned gait is needed. The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world. We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model and simulating latency. We learn robust controllers by randomizing the physical environments, adding perturbations and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping. After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.",
"title": ""
},
{
"docid": "fb23919bd638765ec07efda41e4c4cf6",
"text": "OBJECTIVE\nThe distinct trajectories of patients with autism spectrum disorders (ASDs) have not been extensively studied, particularly regarding clinical manifestations beyond the neurobehavioral criteria from the Diagnostic and Statistical Manual of Mental Disorders. The objective of this study was to investigate the patterns of co-occurrence of medical comorbidities in ASDs.\n\n\nMETHODS\nInternational Classification of Diseases, Ninth Revision codes from patients aged at least 15 years and a diagnosis of ASD were obtained from electronic medical records. These codes were aggregated by using phenotype-wide association studies categories and processed into 1350-dimensional vectors describing the counts of the most common categories in 6-month blocks between the ages of 0 to 15. Hierarchical clustering was used to identify subgroups with distinct courses.\n\n\nRESULTS\nFour subgroups were identified. The first was characterized by seizures (n = 120, subgroup prevalence 77.5%). The second (n = 197) was characterized by multisystem disorders including gastrointestinal disorders (prevalence 24.3%) and auditory disorders and infections (prevalence 87.8%), and the third was characterized by psychiatric disorders (n = 212, prevalence 33.0%). The last group (n = 4316) could not be further resolved. The prevalence of psychiatric disorders was uncorrelated with seizure activity (P = .17), but a significant correlation existed between gastrointestinal disorders and seizures (P < .001). The correlation results were replicated by using a second sample of 496 individuals from a different geographic region.\n\n\nCONCLUSIONS\nThree distinct patterns of medical trajectories were identified by unsupervised clustering of electronic health record diagnoses. These may point to distinct etiologies with different genetic and environmental contributions. Additional clinical and molecular characterizations will be required to further delineate these subgroups.",
"title": ""
},
{
"docid": "36e2a20efc0f11589de197975c1195cc",
"text": "The conventional sigma-delta (SigmaDelta) modulator structures used in telecommunication and audio applications usually cannot satisfy the requirements of signal processing applications for converting the wideband signals into digital samples accurately. In this paper, system design, analytical aspects and optimization methods of a third order incremental sigma-delta (SigmaDelta) modulator will be discussed and finally the designed modulator will be implemented by switched-capacitor circuits. The design of anti-aliasing filter has been integrated inside of modulator signal transfer function. It has been shown that the implemented 3rd order sigma-delta (SigmaDelta) modulator can be designed for the maximum SNR of 54 dB for minimum over- sampling ratio of 16. The modulator operating principles and its analysis in frequency domain and the topologies for its optimizing have been discussed elaborately. Simulation results on implemented modulator validate the system design and its main parameters such as stability and output dynamic range.",
"title": ""
},
{
"docid": "a0429b8c7f7ae11eab315b28384e312b",
"text": "Almost all cellular mobile communications including first generation analog systems, second generation digital systems, third generation WCDMA, and fourth generation OFDMA systems use Ultra High Frequency (UHF) band of radio spectrum with frequencies in the range of 300MHz-3GHz. This band of spectrum is becoming increasingly crowded due to spectacular growth in mobile data and other related services. The portion of the RF spectrum above 3GHz has largely been uxexploited for commercial mobile applications. In this paper, we reason why wireless community should start looking at 3–300GHz spectrum for mobile broadband applications. We discuss propagation and device technology challenges associated with this band as well as its unique advantages such as spectrum availability and small component sizes for mobile applications.",
"title": ""
},
{
"docid": "cc2e24cd04212647f1c29482aa12910d",
"text": "A number of surveillance scenarios require the detection and tracking of people. Although person detection and counting systems are commercially available today, there is need for further research to address the challenges of real world scenarios. The focus of this work is the segmentation of groups of people into individuals. One relevant application of this algorithm is people counting. Experiments document that the presented approach leads to robust people counts.",
"title": ""
},
{
"docid": "bc758b1dd8e3a75df2255bb880a716ef",
"text": "In recent years, convolutional neural networks (CNNs) based machine learning algorithms have been widely applied in computer vision applications. However, for large-scale CNNs, the computation-intensive, memory-intensive and resource-consuming features have brought many challenges to CNN implementations. This work proposes an end-to-end FPGA-based CNN accelerator with all the layers mapped on one chip so that different layers can work concurrently in a pipelined structure to increase the throughput. A methodology which can find the optimized parallelism strategy for each layer is proposed to achieve high throughput and high resource utilization. In addition, a batch-based computing method is implemented and applied on fully connected layers (FC layers) to increase the memory bandwidth utilization due to the memory-intensive feature. Further, by applying two different computing patterns on FC layers, the required on-chip buffers can be reduced significantly. As a case study, a state-of-the-art large-scale CNN, AlexNet, is implemented on Xilinx VC709. It can achieve a peak performance of 565.94 GOP/s and 391 FPS under 156MHz clock frequency which outperforms previous approaches.",
"title": ""
},
{
"docid": "3867ff9ac24349b17e50ec2a34e84da4",
"text": "Each generation that enters the workforce brings with it its own unique perspectives and values, shaped by the times of their life, about work and the work environment; thus posing atypical human resources management challenges. Following the completion of an extensive quantitative study conducted in Cyprus, and by adopting a qualitative methodology, the researchers aim to further explore the occupational similarities and differences of the two prevailing generations, X and Y, currently active in the workplace. Moreover, the study investigates the effects of the perceptual generational differences on managing the diverse hospitality workplace. Industry implications, recommendations for stakeholders as well as directions for further scholarly research are discussed.",
"title": ""
},
{
"docid": "dadcea041dcc49d7d837cb8c938830f3",
"text": "Software Defined Networking (SDN) has been proposed as a drastic shift in the networking paradigm, by decoupling network control from the data plane and making the switching infrastructure truly programmable. The key enabler of SDN, OpenFlow, has seen widespread deployment on production networks and its adoption is constantly increasing. Although openness and programmability are primary features of OpenFlow, security is of core importance for real-world deployment. In this work, we perform a security analysis of OpenFlow using STRIDE and attack tree modeling methods, and we evaluate our approach on an emulated network testbed. The evaluation assumes an attacker model with access to the network data plane. Finally, we propose appropriate counter-measures that can potentially mitigate the security issues associated with OpenFlow networks. Our analysis and evaluation approach are not exhaustive, but are intended to be adaptable and extensible to new versions and deployment contexts of OpenFlow.",
"title": ""
},
{
"docid": "b1ec900ac755cb8af4e78a926702a626",
"text": "As social media has become more integrated into peoples’ daily lives, its users have begun turning to it in times of distress. People use Twitter, Facebook, YouTube, and other social media platforms to broadcast their needs, propagate rumors and news, and stay abreast of evolving crisis situations. Disaster relief organizations have begun to craft their efforts around pulling data about where aid is needed from social media and broadcasting their own needs and perceptions of the situation. They have begun deploying new software platforms to better analyze incoming data from social media, as well as to deploy new technologies to specifically harvest messages from disaster situations.",
"title": ""
}
] | scidocsrr |
172baf4e94ca4e8817d656c1bb5d6732 | Scene Flow Estimation: A Survey | [
{
"docid": "38fccb4ef1b53ccc8464beaf74db2b4b",
"text": "The novel concept of total generalized variation of a function u is introduced and some of its essential properties are proved. Differently from the bounded variation semi-norm, the new concept involves higher order derivatives of u. Numerical examples illustrate the high quality of this functional as a regularization term for mathematical imaging problems. In particular this functional selectively regularizes on different regularity levels and does not lead to a staircasing effect.",
"title": ""
}
] | [
{
"docid": "4151dc76a41c79339edf1976944707a0",
"text": "The realisation of domain-speci®c languages (DSLs) diers in fundamental ways from that of traditional programming languages. We describe eight recurring patterns that we have identi®ed as being used for DSL design and implementation. Existing languages can be extended, restricted, partially used, or become hosts for DSLs. Simple DSLs can be implemented by lexical processing. In addition, DSLs can be used to create front-ends to existing systems or to express complicated data structures. Finally, DSLs can be combined using process pipelines. The patterns described form a pattern language that can be used as a building block for a systematic view of the software development process involving DSLs. Ó 2001 Elsevier Science Inc. All rights reserved.",
"title": ""
},
{
"docid": "39d1271ce88b840b8d75806faf9463ad",
"text": "Dynamically Reconfigurable Systems (DRS), implemented using Field-Programmable Gate Arrays (FPGAs), allow hardware logic to be partially reconfigured while the rest of a design continues to operate. By mapping multiple reconfigurable hardware modules to the same physical region of an FPGA, such systems are able to time-multiplex their circuits at run time and can adapt to changing execution requirements. This architectural flexibility introduces challenges for verifying system functionality. New simulation approaches need to extend traditional simulation techniques to assist designers in testing and debugging the time-varying behavior of DRS. Another significant challenge is the effective use of tools so as to reduce the number of design iterations. This thesis focuses on simulation-based functional verification of modular reconfigurable DRS designs. We propose a methodology and provide tools to assist designers in verifying DRS designs while part of the design is undergoing reconfiguration. This thesis analyzes the challenges in verifying DRS designs with respect to the user design and the physical implementation of such systems. We propose using a simulationonly layer to emulate the behavior of target FPGAs and accurately model the characteristic features of reconfiguration. The simulation-only layer maintains verification productivity by abstracting away the physical details of the FPGA fabric. Furthermore, since the design does not need to be modified for simulation purposes, the design as implemented instead of some variation of it is verified. We provide two possible implementations of the simulation-only layer. Extended ReChannel is a SystemC library that can be used to model DRS at a high level. ReSim is a library to support RTL simulation of a DRS reconfiguring both its logic and state. Through a number of case studies, we demonstrate that with insignificant overheads, our approach seamlessly integrates with the existing, mainstream DRS design flow and with wellestablished verification methodologies such as top-down modeling and coverage-driven verification. The case studies also serve as a guide in the use of our libraries to identify bugs that are related to Dynamic Partial Reconfiguration. Our results demonstrate that using the simulation-only layer is an effective approach to the simulation-based functional verification of DRS designs.",
"title": ""
},
{
"docid": "51ae09462b4def4ff6d9994c6532cb7c",
"text": "Issue No. 2, Fall 2002 www.spacejournal.org Page 1 of 29 A Prediction Model that Combines Rain Attenuation and Other Propagation Impairments Along EarthSatellite Paths Asoka Dissanayake, Jeremy Allnutt, Fatim Haidara Abstract The rapid growth of satellite services using higher frequency bands such as the Ka-band has highlighted a need for estimating the combined effect of different propagation impairments. Many projected Ka-band services will use very small terminals and, for some, rain effects may only form a relatively small part of the total propagation link margin. It is therefore necessary to identify and predict the overall impact of every significant attenuating effect along any given path. A procedure for predicting the combined effect of rain attenuation and several other propagation impairments along earth-satellite paths is presented. Where accurate model exist for some phenomena, these have been incorporated into the prediction procedure. New models were developed, however, for rain attenuation, cloud attenuation, and low-angle fading to provide more overall accuracy, particularly at very low elevation angles (<10°). In the absence of a detailed knowledge of the occurrence probabilities of different impairments, an empirical approach is taken in estimating their combined effects. An evaluation of the procedure is made using slant-path attenuation data that have been collected with simultaneous beacon and radiometer measurements which allow a near complete account of different impairments. Results indicate that the rain attenuation element of the model provides the best average accuracy globally between 10 and 30 GHz and that the combined procedure gives prediction accuracies comparable to uncertainties associated with the year-to-year variability of path attenuation.",
"title": ""
},
{
"docid": "1b4ece2fe2c92fa1f3c5c8d61739cbb7",
"text": "Generating high-resolution, photo-realistic images has been a long-standing goal in machine learning. Recently, Nguyen et al. [37] showed one interesting way to synthesize novel images by performing gradient ascent in the latent space of a generator network to maximize the activations of one or multiple neurons in a separate classifier network. In this paper we extend this method by introducing an additional prior on the latent code, improving both sample quality and sample diversity, leading to a state-of-the-art generative model that produces high quality images at higher resolutions (227 × 227) than previous generative models, and does so for all 1000 ImageNet categories. In addition, we provide a unified probabilistic interpretation of related activation maximization methods and call the general class of models Plug and Play Generative Networks. PPGNs are composed of 1) a generator network G that is capable of drawing a wide range of image types and 2) a replaceable condition network C that tells the generator what to draw. We demonstrate the generation of images conditioned on a class (when C is an ImageNet or MIT Places classification network) and also conditioned on a caption (when C is an image captioning network). Our method also improves the state of the art of Multifaceted Feature Visualization [40], which generates the set of synthetic inputs that activate a neuron in order to better understand how deep neural networks operate. Finally, we show that our model performs reasonably well at the task of image inpainting. While image models are used in this paper, the approach is modality-agnostic and can be applied to many types of data.",
"title": ""
},
{
"docid": "2a34800bc275f062f820c0eb4597d297",
"text": "Construction sites are dynamic and complicated systems. The movement and interaction of people, goods and energy make construction safety management extremely difficult. Due to the ever-increasing amount of information, traditional construction safety management has operated under difficult circumstances. As an effective way to collect, identify and process information, sensor-based technology is deemed to provide new generation of methods for advancing construction safety management. It makes the real-time construction safety management with high efficiency and accuracy a reality and provides a solid foundation for facilitating its modernization, and informatization. Nowadays, various sensor-based technologies have been adopted for construction safety management, including locating sensor-based technology, vision-based sensing and wireless sensor networks. This paper provides a systematic and comprehensive review of previous studies in this field to acknowledge useful findings, identify the research gaps and point out future research directions.",
"title": ""
},
{
"docid": "dacf2f44c3f8fc0931dceda7e4cb9bef",
"text": "Brain-computer interaction has already moved from assistive care to applications such as gaming. Improvements in usability, hardware, signal processing, and system integration should yield applications in other nonmedical areas.",
"title": ""
},
{
"docid": "9aaae1995134469ffddea73baa7b911d",
"text": "We present probabilistic neural programs, a framework for program induction that permits flexible specification of both a computational model and inference algorithm while simultaneously enabling the use of deep neural networks. Probabilistic neural programs combine a computation graph for specifying a neural network with an operator for weighted nondeterministic choice. Thus, a program describes both a collection of decisions as well as the neural network architecture used to make each one. We evaluate our approach on a challenging diagram question answering task where probabilistic neural programs correctly execute nearly twice as many programs as a baseline model.",
"title": ""
},
{
"docid": "71cf493e0026fe057b1100c5ad1118ad",
"text": "We explore story generation: creative systems that can build coherent and fluent passages of text about a topic. We collect a large dataset of 300K human-written stories paired with writing prompts from an online forum. Our dataset enables hierarchical story generation, where the model first generates a premise, and then transforms it into a passage of text. We gain further improvements with a novel form of model fusion that improves the relevance of the story to the prompt, and adding a new gated multi-scale self-attention mechanism to model long-range context. Experiments show large improvements over strong baselines on both automated and human evaluations. Human judges prefer stories generated by our approach to those from a strong non-hierarchical model by a factor of two to one.",
"title": ""
},
{
"docid": "a77ff69755aadddfd70ea11fe23a1aac",
"text": "PURPOSE\nTo evaluate the potential of third-harmonic generation (THG) microscopy combined with second-harmonic generation (SHG) and two-photon excited fluorescence (2PEF) microscopies for visualizing the microstructure of the human cornea and trabecular meshwork based on their intrinsic nonlinear properties.\n\n\nMETHODS\nFresh human corneal buttons and corneoscleral discs from an eye bank were observed under a multiphoton microscope incorporating a titanium-sapphire laser and an optical parametric oscillator for the excitation, and equipped with detection channels in the forward and backward directions.\n\n\nRESULTS\nOriginal contrast mechanisms of THG signals in cornea with physiological relevance were elucidated. THG microscopy with circular incident polarization detected microscopic anisotropy and revealed the stacking and distribution of stromal collagen lamellae. THG imaging with linear incident polarization also revealed cellular and anchoring structures with micrometer resolution. In edematous tissue, a strong THG signal around cells indicated the local presence of water. Additionally, SHG signals reflected the distribution of fibrillar collagen, and 2PEF imaging revealed the elastic component of the trabecular meshwork and the fluorescence of metabolically active cells.\n\n\nCONCLUSIONS\nThe combined imaging modalities of THG, SHG, and 2PEF provide key information about the physiological state and microstructure of the anterior segment over its entire thickness with remarkable contrast and specificity. This imaging method should prove particularly useful for assessing glaucoma and corneal physiopathologies.",
"title": ""
},
{
"docid": "3085d2de614b6816d7a66cb62823824e",
"text": "Plastic debris is known to undergo fragmentation at sea, which leads to the formation of microscopic particles of plastic; the so called 'microplastics'. Due to their buoyant and persistent properties, these microplastics have the potential to become widely dispersed in the marine environment through hydrodynamic processes and ocean currents. In this study, the occurrence and distribution of microplastics was investigated in Belgian marine sediments from different locations (coastal harbours, beaches and sublittoral areas). Particles were found in large numbers in all samples, showing the wide distribution of microplastics in Belgian coastal waters. The highest concentrations were found in the harbours where total microplastic concentrations of up to 390 particles kg(-1) dry sediment were observed, which is 15-50 times higher than reported maximum concentrations of other, similar study areas. The depth profile of sediment cores suggested that microplastic concentrations on the beaches reflect the global plastic production increase.",
"title": ""
},
{
"docid": "7843fb4bbf2e94a30c18b359076899ab",
"text": "In the area of magnetic resonance imaging (MRI), an extensive range of non-linear reconstruction algorithms has been proposed which can be used with general Fourier subsampling patterns. However, the design of these subsampling patterns has typically been considered in isolation from the reconstruction rule and the anatomy under consideration. In this paper, we propose a learning-based framework for optimizing MRI subsampling patterns for a specific reconstruction rule and anatomy, considering both the noiseless and noisy settings. Our learning algorithm has access to a representative set of training signals, and searches for a sampling pattern that performs well on average for the signals in this set. We present a novel parameter-free greedy mask selection method and show it to be effective for a variety of reconstruction rules and performance metrics. Moreover, we also support our numerical findings by providing a rigorous justification of our framework via statistical learning theory.",
"title": ""
},
{
"docid": "6008664fe24620a90684ba5b6143d22b",
"text": "Brain Computer Interface (BCI) research advanced for more than forty years, providing a rich variety of sophisticated data analysis methods. Yet, most BCI studies have been restricted to the laboratory with controlled and undisturbed environment. BCI research was aiming at developing tools for communication and control. Recently, BCI research has broadened to explore novel applications for improved man-machine interaction. In the present study, we investigated the option to employ neurotechnology in an industrial environment for the psychophysiological optimization of working conditions in such settings. Our findings suggest that it is possible to use BCI-related analysis techniques to qualify responses of an operator by assessing the depth of cognitive processing on the basis of neuronal correlates of behaviourally relevant measures. This could lead to assistive technologies helping to avoid accidents in working environments by designing a collaborative workspace in which the environment takes into account the actual cognitive mental state of the operator.",
"title": ""
},
{
"docid": "9ae0f9643f095b3d1dd832a831ef1a86",
"text": "The Epstein-Barr virus (EBV) is associated with a broad spectrum of diseases, mainly because of its genomic characteristics, which result in different latency patterns in immune cells and infective mechanisms. The patient described in this report is a previously healthy young man who presented to the emergency department with clinical features consistent with meningitis and genital ulcers, which raised concern that the herpes simplex virus was the causative agent. However, the polymerase chain reaction of cerebral spinal fluid was positive for EBV. The authors highlight the importance of this infection among the differential diagnosis of central nervous system involvement and genital ulceration.",
"title": ""
},
{
"docid": "61e75fb597438712098c2b6d4b948558",
"text": "Impact of occupational stress on employee performance has been recognized as an important area of concern for organizations. Negative stress affects the physical and mental health of the employees that in turn affects their performance on job. Research into the relationship between stress and job performance has been neglected in the occupational stress literature (Jex, 1998). It is therefore significant to understand different Occupational Stress Inducers (OSI) on one hand and their impact on different aspects of job performance on the other. This article reviews the available literature to understand the phenomenon so as to develop appropriate stress management strategies to not only save the employees from variety of health problems but to improve their performance and the performance of the organization. 35 Occupational Stress Inducers (OSI) were identified through a comprehensive review of articles and reports published in the literature of management and allied disciplines between 1990 and 2014. A conceptual model is proposed towards the end to study the impact of stress on employee job performance. The possible data analysis techniques are also suggested providing direction for future research.",
"title": ""
},
{
"docid": "5a397012744d958bb1a69b435c73e666",
"text": "We introduce a method to generate whole body motion of a humanoid robot such that the resulted total linear/angular momenta become specified values. First, we derive a linear equation which gives the total momentum of a robot from its physical parameters, the base link speed and the joint speeds. Constraints between the legs and the environment are also considered. The whole body motion is calculated from a given momentum reference by using a pseudo-inverse of the inertia matrix. As examples, we generated the kicking and walking motions and tested on the actual humanoid robot HRP-2. This method, the Resolved Momentum Control, gives us a unified framework to generate various maneuver of humanoid robots.",
"title": ""
},
{
"docid": "ba56c75498bfd733eb29ea5601c53181",
"text": "The designations employed and the presentation of material in this information product do not imply the expression of any opinion whatsoever on the part of the Food and Agriculture Organization of the United Nations (FAO) concerning the legal or development status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. The mention of specific companies or products of manufacturers, whether or not these have been patented, does not imply that these have been endorsed or recommended by FAO in preference to others of a similar nature that are not mentioned. The views expressed in this information product are those of the author(s) and do not necessarily reflect the views of FAO.",
"title": ""
},
{
"docid": "1381442b92b9e702033df9bc233842eb",
"text": "Many real-time tasks, such as human-computer interaction, require fast and efficient facial gender classification. Although deep CNN nets have been very effective for a multitude of classification tasks, their high space and time demands make them impractical for personal computers and mobile devices without a powerful GPU. In this paper, we develop a 16-layer, yet lightweight, neural network which boosts efficiency while maintaining high accuracy. Our net is pruned from the VGG-16 model starting from the last convolutional (conv) layer where we find neuron activations are highly uncorrelated given the gender. Through Fisher's Linear Discriminant Analysis (LDA), we show that this high decorrelation makes it safe to discard directly last conv layer neurons with high within-class variance and low between-class variance. Combined with either Support Vector Machines (SVM) or Bayesian classification, the reduced CNNs are capable of achieving comparable (or even higher) accuracies on the LFW and CelebA datasets than the original net with fully connected layers. On LFW, only four Conv5_3 neurons are able to maintain a comparably high recognition accuracy, which results in a reduction of total network size by a factor of 70X with a 11 fold speedup. Comparisons with a state-of-the-art pruning method (as well as two smaller nets) in terms of accuracy loss and convolutional layers pruning rate are also provided.",
"title": ""
},
{
"docid": "c70f3b57354d9010167dd3be5ad6e1b6",
"text": "We present a photo-realistic training and evaluation simulator (Sim4CV) (http://www.sim4cv.org) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full featured physics based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates both several state-of-the-art tracking algorithms with a benchmark evaluation tool and a deep neural network architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.",
"title": ""
},
{
"docid": "f18a0ae573711eb97b9b4150d53182f3",
"text": "The Electrocardiogram (ECG) is commonly used to detect arrhythmias. Traditionally, a single ECG observation is used for diagnosis, making it difficult to detect irregular arrhythmias. Recent technology developments, however, have made it cost-effective to collect large amounts of raw ECG data over time. This promises to improve diagnosis accuracy, but the large data volume presents new challenges for cardiologists. This paper introduces ECGLens, an interactive system for arrhythmia detection and analysis using large-scale ECG data. Our system integrates an automatic heartbeat classification algorithm based on convolutional neural network, an outlier detection algorithm, and a set of rich interaction techniques. We also introduce A-glyph, a novel glyph designed to improve the readability and comparison of ECG signals. We report results from a comprehensive user study showing that A-glyph improves the efficiency in arrhythmia detection, and demonstrate the effectiveness of ECGLens in arrhythmia detection through two expert interviews.",
"title": ""
},
{
"docid": "11d418decc0d06a3af74be77d4c71e5e",
"text": "Automatic generation control (AGC) regulates mechanical power generation in response to load changes through local measurements. Its main objective is to maintain system frequency and keep energy balanced within each control area in order to maintain the scheduled net interchanges between control areas. The scheduled interchanges as well as some other factors of AGC are determined at a slower time scale by considering a centralized economic dispatch (ED) problem among different generators. However, how to make AGC more economically efficient is less studied. In this paper, we study the connections between AGC and ED by reverse engineering AGC from an optimization view, and then we propose a distributed approach to slightly modify the conventional AGC to improve its economic efficiency by incorporating ED into the AGC automatically and dynamically.",
"title": ""
}
] | scidocsrr |
5fb852832e7238ab239940ac26efeef3 | PRIME: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory | [
{
"docid": "ef52c7d4c56ff47c8e18b42e0a757655",
"text": "Microprocessors and memory systems suffer from a growing gap in performance. We introduce Active Pages, a computation model which addresses this gap by shifting data-intensive computations to the memory system. An Active Page consists of a page of data and a set of associated functions which can operate upon that data. We describe an implementation of Active Pages on RADram (Reconfigurable Architecture DRAM), a memory system based upon the integration of DRAM and reconfigurable logic. Results from the SimpleScalar simulator [BA97] demonstrate up to 1000X speedups on several applications using the RADram system versus conventional memory systems. We also explore the sensitivity of our results to implementations in other memory technologies.",
"title": ""
}
] | [
{
"docid": "e7de23a164446a208df5fde7a2a1a2f9",
"text": "Building facade detection is an important problem in comput er vision, with applications in mobile robotics and semanti c scene understanding. In particular, mobile platform localizati on and guidance in urban environments can be enabled with acc urate models of the various building facades in a scene. Toward that end, w e present a system for detection, segmentation, and paramet er estimation of building facades in stereo imagery. The propo sed method incorporates multilevel appearance and dispari ty features in a binary discriminative model, and generates a set of cand id te planes by sampling and clustering points from the imag e with Random Sample Consensus (RANSAC), using local normal estim ates derived from Principal Component Analysis (PCA) to inf rm the planar models. These two models are incorporated into a t w -layer Markov Random Field (MRF): an appearanceand disp ar tybased discriminative classifier at the mid-level, and a geom etric model to segment the building pixels into facades at th e highlevel. By using object-specific stereo features, our discri minative classifier is able to achieve substantially higher accuracy than standard boosting or modeling with only appearance-based f eatures. Furthermore, the results of our MRF classification indicate a strong improvement in accuracy for the binary building dete ction problem and the labeled planar surface models provide a good approximation to the ground truth planes.",
"title": ""
},
{
"docid": "1cbdf99545998789219e3f662a601d1b",
"text": "In this paper, we propose a knowledge-guided pose grammar network to tackle the problem of 3D human pose estimation. Our model directly takes 2D poses as inputs and learns the generalized 2D-3D mapping function, which renders high applicability. The proposed network consists of a base network which efficiently captures pose-aligned features and a hierarchy of Bidirectional RNNs on top of it to explicitly incorporate a set of knowledge (e.g., kinematics, symmetry, coordination) and thus enforce high-level constraints over human poses. In learning, we develop a pose-guided sample simulator to augment training samples in virtual camera views, which further improves the generalization ability of our model. We validate our method on public 3D human pose benchmarks and propose a new evaluation protocol working on cross-view setting to verify the generalization ability of different methods. We empirically observe that most state-ofthe-arts face difficulty under such setting while our method obtains superior performance.",
"title": ""
},
{
"docid": "0241cef84d46b942ee32fc7345874b90",
"text": "A total of eight appendices (Appendix 1 through Appendix 8) and an associated reference for these appendices have been placed here. In addition, there is currently a search engine located at to assist users in identifying BPR techniques and tools.",
"title": ""
},
{
"docid": "2e16758c0f55cd44b88c18b8948ec1cb",
"text": "We introduce a new approach to intrinsic image decomposition, the task of decomposing a single image into albedo and shading components. Our strategy, which we term direct intrinsics, is to learn a convolutional neural network (CNN) that directly predicts output albedo and shading channels from an input RGB image patch. Direct intrinsics is a departure from classical techniques for intrinsic image decomposition, which typically rely on physically-motivated priors and graph-based inference algorithms. The large-scale synthetic ground-truth of the MPI Sintel dataset plays the key role in training direct intrinsics. We demonstrate results on both the synthetic images of Sintel and the real images of the classic MIT intrinsic image dataset. On Sintel, direct intrinsics, using only RGB input, outperforms all prior work, including methods that rely on RGB+Depth input. Direct intrinsics also generalizes across modalities, our Sintel-trained CNN produces quite reasonable decompositions on the real images of the MIT dataset. Our results indicate that the marriage of CNNs with synthetic training data may be a powerful new technique for tackling classic problems in computer vision.",
"title": ""
},
{
"docid": "1b4814271b850d4632cf40006feda183",
"text": "Mass shootings are a particular problem in the United States, with one mass shooting occurring approximately every 12.5 days. Recently a \"contagion\" effect has been suggested wherein the occurrence of one mass shooting increases the likelihood of another mass shooting occurring in the near future. Although contagion is a convenient metaphor used to describe the temporal spread of a behavior, it does not explain how the behavior spreads. Generalized imitation is proposed as a better model to explain how one person's behavior can influence another person to engage in similar behavior. Here we provide an overview of generalized imitation and discuss how the way in which the media report a mass shooting can increase the likelihood of another shooting event. Also, we propose media reporting guidelines to minimize imitation and further decrease the likelihood of a mass shooting.",
"title": ""
},
{
"docid": "b3214224f699aaabab3c9336d1b88705",
"text": "This work is concerned with the field of static program analysis —in particular with analyses aimed to guarantee certain security properties of programs, like confidentiality and integrity. Our approach uses socalled dependence graphs to capture the program behavior as well as the information flow between the individual program points. Using this technique, we can guarantee for example that a program does not reveal any information about a secret password. In particular we focus on techniques that improve the dependence graph computation —the basis for many advanced security analyses. We incorporated the presented algorithms and improvements into our analysis tool Joana and published its source code as open source. Several collaborations with other researchers and publications using Joana demonstrate the relevance of these improvements for practical research. This work consists essentially of three parts. Part 1 deals with improvements in the computation of the dependence graph, Part 2 introduces a new approach to the analysis of incomplete programs and Part 3 shows current use cases of Joana on concrete examples. In the first part we describe the algorithms used to compute a dependence graph, with special attention to the problems and challenges that arise when analyzing object-oriented languages such as Java. For example we present an analysis that improves the precision of detected control flow by incorporating the effects of exceptions. The main improvement concerns the way side effects —caused by communication over methods boundaries— are modelled. Dependence graphs capture side effects —memory locations read or changed by a method— in the form of additional nodes called parameter nodes. We show that the structure and computation of these nodes have a huge impact on both the precision and scalability of the entire analysis. The so-called parameter model describes the algorithms used to compute these nodes. We explain the weakness of the old parameter model based on object-trees and present our improvements in form of a new model using object-graphs. The new graph structure merges redundant information of multiple nodes into a single node and thus reduces the number of overall parameter nodes",
"title": ""
},
{
"docid": "be50f3605fbe84500667d095184e491b",
"text": "Realistic rendering techniques of outdoor Augmented Reality (AR) has been an attractive topic since the last two decades considering the sizeable amount of publications in computer graphics. Realistic virtual objects in outdoor rendering AR systems require sophisticated effects such as: shadows, daylight and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are related to non real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposed a much newer, unique technique to achieve realistic real-time outdoor rendering, while taking into account the interaction between sky colours and objects in AR systems with respect to shadows in any specific location, date and time. This approach involves three main phases, which cover different outdoor AR rendering requirements. Firstly, sky colour was generated with respect to the position of the sun. Second step involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows through its effects on virtual objects in the AR system, is introduced. The experimental results reveal that the proposed technique has significantly improved the realism of real-time outdoor AR rendering, thus solving the problem of realistic AR systems.",
"title": ""
},
{
"docid": "1c9a14804cd1bd673c2547642f9b6683",
"text": "In this paper we applied multilabel classification algorithms to the EUR-Lex database of legal documents of the European Union. On this document collection, we studied three different multilabel classification problems, the largest being the categorization into the EUROVOC concept hierarchy with almost 4000 classes. We evaluated three algorithms: (i) the binary relevance approach which independently trains one classifier per label; (ii) the multiclass multilabel perceptron algorithm, which respects dependencies between the base classifiers; and (iii) the multilabel pairwise perceptron algorithm, which trains one classifier for each pair of labels. All algorithms use the simple but very efficient perceptron algorithm as the underlying classifier, which makes them very suitable for large-scale multilabel classification problems. The main challenge we had to face was that the almost 8,000,000 perceptrons that had to be trained in the pairwise setting could no longer be stored in memory. We solve this problem by resorting to the dual representation of the perceptron, which makes the pairwise approach feasible for problems of this size. The results on the EUR-Lex database confirm the good predictive performance of the pairwise approach and demonstrates the feasibility of this approach for large-scale tasks.",
"title": ""
},
{
"docid": "75189509743ba4f329b5ea5877f0e8ad",
"text": "The psychology of conspiracy theory beliefs is not yet well understood, although research indicates that there are stable individual differences in conspiracist ideation - individuals' general tendency to engage with conspiracy theories. Researchers have created several short self-report measures of conspiracist ideation. These measures largely consist of items referring to an assortment of prominent conspiracy theories regarding specific real-world events. However, these instruments have not been psychometrically validated, and this assessment approach suffers from practical and theoretical limitations. Therefore, we present the Generic Conspiracist Beliefs (GCB) scale: a novel measure of individual differences in generic conspiracist ideation. The scale was developed and validated across four studies. In Study 1, exploratory factor analysis of a novel 75-item measure of non-event-based conspiracist beliefs identified five conspiracist facets. The 15-item GCB scale was developed to sample from each of these themes. Studies 2, 3, and 4 examined the structure and validity of the GCB, demonstrating internal reliability, content, criterion-related, convergent and discriminant validity, and good test-retest reliability. In sum, this research indicates that the GCB is a psychometrically sound and practically useful measure of conspiracist ideation, and the findings add to our theoretical understanding of conspiracist ideation as a monological belief system unpinned by a relatively small number of generic assumptions about the typicality of conspiratorial activity in the world.",
"title": ""
},
{
"docid": "b9838e512912f4bcaf3c224df3548d95",
"text": "In this paper, we develop a system for training human calligraphy skills. For such a development, the so-called dynamic font and augmented reality (AR) are employed. The dynamic font is used to generate a model character, in which the character are formed as the result of 3-dimensional motion of a virtual writing device on a virtual writing plane. Using the AR technology, we then produce a visual information consisting of not only static writing path but also dynamic writing process of model character. Such a visual information of model character is given some trainee through a head mounted display. The performance is demonstrated by some experimental studies.",
"title": ""
},
{
"docid": "f3864982e2e03ce4876a6685d74fb84c",
"text": "The central nervous system (CNS) operates by a fine-tuned balance between excitatory and inhibitory signalling. In this context, the inhibitory neurotransmission may be of particular interest as it has been suggested that such neuronal pathways may constitute 'command pathways' and the principle of 'dis-inhibition' leading ultimately to excitation may play a fundamental role (Roberts, E. (1974). Adv. Neurol., 5: 127-143). The neurotransmitter responsible for this signalling is gamma-aminobutyrate (GABA) which was first discovered in the CNS as a curious amino acid (Roberts, E., Frankel, S. (1950). J. Biol. Chem., 187: 55-63) and later proposed as an inhibitory neurotransmitter (Curtis, D.R., Watkins, J.C. (1960). J. Neurochem., 6: 117-141; Krnjevic, K., Schwartz, S. (1967). Exp. Brain Res., 3: 320-336). The present review will describe aspects of GABAergic neurotransmission related to homeostatic mechanisms such as biosynthesis, metabolism, release and inactivation. Additionally, pharmacological and therapeutic aspects of this will be discussed.",
"title": ""
},
{
"docid": "047112c682f64fc6a272a7e80d5f1a1b",
"text": "In this paper, we study an important yet largely under-explored setting of graph embedding, i.e., embedding communities instead of each individual nodes. We find that community embedding is not only useful for community-level applications such as graph visualization, but also beneficial to both community detection and node classification. To learn such embedding, our insight hinges upon a closed loop among community embedding, community detection and node embedding. On the one hand, node embedding can help improve community detection, which outputs good communities for fitting better community embedding. On the other hand, community embedding can be used to optimize the node embedding by introducing a community-aware high-order proximity. Guided by this insight, we propose a novel community embedding framework that jointly solves the three tasks together. We evaluate such a framework on multiple real-world datasets, and show that it improves graph visualization and outperforms state-of-the-art baselines in various application tasks, e.g., community detection and node classification.",
"title": ""
},
{
"docid": "9cf0d6e811f7cdafe4316b49d060d192",
"text": "Medical imaging plays a central role in a vast range of healthcare practices. The usefulness of 3D visualizations has been demonstrated for many types of treatment planning. Nevertheless, full access to 3D renderings outside of the radiology department is still scarce even for many image-centric specialties. Our work stems from the hypothesis that this under-utilization is partly due to existing visualization systems not taking the prerequisites of this application domain fully into account. We have developed a medical visualization table intended to better fit the clinical reality. The overall design goals were two-fold: similarity to a real physical situation and a very low learning threshold. This paper describes the development of the visualization table with focus on key design decisions. The developed features include two novel interaction components for touch tables. A user study including five orthopedic surgeons demonstrates that the system is appropriate and useful for this application domain.",
"title": ""
},
{
"docid": "c4c686a3838088d890dd3dee1fdc19da",
"text": "Agile programming involves continually evolving requirements along with a possible change in their business value and an uncertainty in their time of development. This leads to the difficulty in adapting the release plans according to the response of the environment at each iteration step. This paper shows how a machine learning approach can support the release planning process in an agile environment. The objective is to adapt the release plans according to the results of the previous iterations in the present environment . Reinforcement learning technique has been used to learn the release planning process in an environment of various constraints and multiple objectives. The technique has been applied to a case study to show the utility of the method. The simulation results show that the reinforcement technique can be easily integrated into the release planning process. The teams can learn from the previous iterations and incorporate the learning into the release plans",
"title": ""
},
{
"docid": "74da0fe221dd6a578544e6b4896ef60e",
"text": "This paper outlines a new approach to the study of power, that of the sociology of translation. Starting from three principles, those of agnosticism (impartiality between actors engaged in controversy), generalised symmetry (the commitment to explain conflicting viewpoints in the same terms) and free association (the abandonment of all a priori distinctions between the natural and the social), the paper describes a scientific and economic controversy about the causes for the decline in the population of scallops in St. Brieuc Bay and the attempts by three marine biologists to develop a conservation strategy for that population. Four ‘moments’ of translation are discerned in the attempts by these researchers to impose themselves and their definition of the situation on others: (a) problematisation: the researchers sought to become indispensable to other actors in the drama by defining the nature and the problems of the latter and then suggesting that these would be resolved if the actors negotiated the ‘obligatory passage point’ of the researchers’ programme of investigation; (b) interessement: a series of processes by which the researchers sought to lock the other actors into the roles that had been proposed for them in that programme; (c) enrolment: a set of strategies in which the researchers sought to define and interrelate the various roles they had allocated to others; (d) mobilisation: a set of methods used by the researchers to ensure that supposed spokesmen for various relevant collectivities were properly able to represent those collectivities and not betrayed by the latter. In conclusion it is noted that translation is a process, never a completed accomplishment, and it may (as in the empirical case considered) fail.",
"title": ""
},
{
"docid": "0066d03bf551e64b9b4a1595f1494347",
"text": "Visual Text Analytics has been an active area of interdisciplinary research (http://textvis.lnu.se/). This interactive tutorial is designed to give attendees an introduction to the area of information visualization, with a focus on linguistic visualization. After an introduction to the basic principles of information visualization and visual analytics, this tutorial will give an overview of the broad spectrum of linguistic and text visualization techniques, as well as their application areas [3]. This will be followed by a hands-on session that will allow participants to design their own visualizations using tools (e.g., Tableau), libraries (e.g., d3.js), or applying sketching techniques [4]. Some sample datasets will be provided by the instructor. Besides general techniques, special access will be provided to use the VisArgue framework [1] for the analysis of selected datasets.",
"title": ""
},
{
"docid": "2d0c5f6be15408d4814b22d28b1541af",
"text": "OBJECTIVE\nOur previous study has found that circulating microRNA (miRNA, or miR) -122, -140-3p, -720, -2861, and -3149 are significantly elevated during early stage of acute coronary syndrome (ACS). This study was conducted to determine the origin of these elevated plasma miRNAs in ACS.\n\n\nMETHODS\nqRT-PCR was performed to detect the expression profiles of these 5 miRNAs in liver, spleen, lung, kidney, brain, skeletal muscles, and heart. To determine their origins, these miRNAs were detected in myocardium of acute myocardial infarction (AMI), and as well in platelets and peripheral blood mononuclear cells (PBMCs, including monocytes, circulating endothelial cells (CECs) and lymphocytes) of the AMI pigs and ACS patients.\n\n\nRESULTS\nMiR-122 was specifically expressed in liver, and miR-140-3p, -720, -2861, and -3149 were highly expressed in heart. Compared with the sham pigs, miR-122 was highly expressed in the border zone of the ischemic myocardium in the AMI pigs without ventricular fibrillation (P < 0.01), miR-122 and -720 were decreased in platelets of the AMI pigs, and miR-122, -140-3p, -720, -2861, and -3149 were increased in PBMCs of the AMI pigs (all P < 0.05). Compared with the non-ACS patients, platelets miR-720 was decreased and PBMCs miR-122, -140-3p, -720, -2861, and -3149 were increased in the ACS patients (all P < 0.01). Furthermore, PBMCs miR-122, -720, and -3149 were increased in the AMI patients compared with the unstable angina (UA) patients (all P < 0.05). Further origin identification revealed that the expression levels of miR-122 in CECs and lymphocytes, miR-140-3p and -2861 in monocytes and CECs, miR-720 in monocytes, and miR-3149 in CECs were greatly up-regulated in the ACS patients compared with the non-ACS patients, and were higher as well in the AMI patients than that in the UA patients except for the miR-122 in CECs (all P < 0.05).\n\n\nCONCLUSION\nThe elevated plasma miR-122, -140-3p, -720, -2861, and -3149 in the ACS patients were mainly originated from CECs and monocytes.",
"title": ""
},
{
"docid": "8966d588d11eac49f4cc98e70f7333e6",
"text": "The timeliness and synchronization requirements of multimedia data demand e&ient buffer management and disk access schemes for multimedia database systems. The data rates involved are very high and despite the developmenl of eficient storage and retrieval strategies, disk I/O is a potential bottleneck, which limits the number of concurrent sessions supported by a system. This calls for more eficient use of data that has already been brought into the buffer. We introduce the notion of continuous media caching, which is a simple and novel technique where data that have been played back by a user are preserved in a controlled fashion for use by subsequent users requesting the same data. We present heuristics to determine when continuous media sharing is beneficial and describe the bufler management algorithms. Simulation studies indicate that our technique substantially improves the performance of multimedia database applications where data sharing is possible.",
"title": ""
},
{
"docid": "ac57fab046cfd02efa1ece262b07492f",
"text": "Interactive Narrative is an approach to interactive entertainment that enables the player to make decisions that directly affect the direction and/or outcome of the narrative experience being delivered by the computer system. Interactive narrative requires two seemingly conflicting requirements: coherent narrative and user agency. We present an interactive narrative system that uses a combination of narrative control and autonomous believable character agents to augment a story world simulation in which the user has a high degree of agency with narrative plot control. A drama manager called the Automated Story Director gives plot-based guidance to believable agents. The believable agents are endowed with the autonomy necessary to carry out directives in the most believable fashion possible. Agents also handle interaction with the user. When the user performs actions that change the world in such a way that the Automated Story Director can no longer drive the intended narrative forward, it is able to adapt the plot to incorporate the user’s changes and still achieve",
"title": ""
},
{
"docid": "791314f5cee09fc8e27c236018a0927f",
"text": "© The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creat iveco mmons .org/licen ses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creat iveco mmons .org/ publi cdoma in/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Oral presentations",
"title": ""
}
] | scidocsrr |
2bf578b3e0c75da7a32fb4d7db6a9ebe | Monitoring body positions and movements during sleep using WISPs | [
{
"docid": "42b1052a0d1e1536228b1b90602051ea",
"text": "Improving the quality of healthcare and the prospects of \"aging in place\" using wireless sensor technology requires solving difficult problems in scale, energy management, data access, security, and privacy. We present AlarmNet, a novel system for assisted living and residential monitoring that uses a two-way flow of data and analysis between the front- and back-ends to enable context-aware protocols that are tailored to residents' individual patterns of living. AlarmNet integrates environmental, physiological, and activity sensors in a scalable heterogeneous architecture. The SenQ query protocol provides real-time access to data and lightweight in-network processing. Circadian activity rhythm analysis learns resident activity patterns and feeds them back into the network to aid context-aware power management and dynamic privacy policies.",
"title": ""
}
] | [
{
"docid": "2fa6f761f22e0484a84f83e5772bef40",
"text": "We consider the problem of planning smooth paths for a vehicle in a region bounded by polygonal chains. The paths are represented as B-spline functions. A path is found by solving an optimization problem using a cost function designed to care for both the smoothness of the path and the safety of the vehicle. Smoothness is defined as small magnitude of the derivative of curvature and safety is defined as the degree of centering of the path between the polygonal chains. The polygonal chains are preprocessed in order to remove excess parts and introduce safety margins for the vehicle. The method has been implemented for use with a standard solver and tests have been made on application data provided by the Swedish mining company LKAB.",
"title": ""
},
{
"docid": "90d2bf357eea588bc1326c87a723ed86",
"text": "Traffic is the chief puzzle problem which every country faces because of the enhancement in number of vehicles throughout the world, especially in large urban towns. Hence the need arises for simulating and optimizing traffic control algorithms to better accommodate this increasing demand. Fuzzy optimization deals with finding the values of input parameters of a complex simulated system which result in desired output. This paper presents a MATLAB simulation of fuzzy logic traffic controller for controlling flow of traffic in isolated intersections. This controller is based on the waiting time and queue length of vehicles at present green phase and vehicles queue lengths at the other phases. The controller controls the traffic light timings and phase difference to ascertain sebaceous flow of traffic with least waiting time and queue length. In this paper, the isolated intersection model used consists of two alleyways in each approach. Every outlook has different value of queue length and waiting time, systematically, at the intersection. The maximum value of waiting time and vehicle queue length has to be selected by using proximity sensors as inputs to controller for the ameliorate control traffic flow at the intersection. An intelligent traffic model and fuzzy logic traffic controller are developed to evaluate the performance of traffic controller under different pre-defined conditions for oleaginous flow of traffic. Additionally, this fuzzy logic traffic controller has emergency vehicle siren sensors which detect emergency vehicle movement like ambulance, fire brigade, Police Van etc. and gives maximum priority to him and pass preferred signal to it. Keywords-Fuzzy Traffic Controller; Isolated Intersection; Vehicle Actuated Controller; Emergency Vehicle Selector.",
"title": ""
},
{
"docid": "430739180a49057c3413e82f8224815f",
"text": "The field of Big Data and related technologies is rapidly evolving. Consequently, many benchmarks are emerging, driven by academia and industry alike. As these benchmarks are emphasizing different aspects of Big Data and, in many cases, covering different technical platforms and uses cases, it is extremely difficult to keep up with the pace of benchmark creation. Also with the combinations of large volumes of data, heterogeneous data formats and the changing processing velocity, it becomes complex to specify an architecture which best suits all application requirements. This makes the investigation and standardization of such systems very difficult. Therefore, the traditional way of specifying a standardized benchmark with pre-defined workloads, which have been in use for years in the transaction and analytical processing systems, is not trivial to employ for Big Data systems. This document provides a summary of existing benchmarks and those that are in development, gives a side-by-side comparison of their characteristics and discusses their pros and cons. The goal is to understand the current state in Big Data benchmarking and guide practitioners in their approaches and use cases.",
"title": ""
},
{
"docid": "bf180a4ed173ef81c91594a2ee651c8c",
"text": "Recent emergence of low-cost and easy-operating depth cameras has reinvigorated the research in skeleton-based human action recognition. However, most existing approaches overlook the intrinsic interdependencies between skeleton joints and action classes, thus suffering from unsatisfactory recognition performance. In this paper, a novel latent max-margin multitask learning model is proposed for 3-D action recognition. Specifically, we exploit skelets as the mid-level granularity of joints to describe actions. We then apply the learning model to capture the correlations between the latent skelets and action classes each of which accounts for a task. By leveraging structured sparsity inducing regularization, the common information belonging to the same class can be discovered from the latent skelets, while the private information across different classes can also be preserved. The proposed model is evaluated on three challenging action data sets captured by depth cameras. Experimental results show that our model consistently achieves superior performance over recent state-of-the-art approaches.",
"title": ""
},
{
"docid": "fda176005a8edbec3d6dd4796826bb27",
"text": "In the perspective of a sustainable urban planning, it is necessary to investigate cities in a holistic way and to accept surprises in the response of urban environments to a particular set of strategies. For example, the process of inner-city densification may limit air pollution, carbon emissions, and energy use through reduced transportation; on the other hand, the resulting street canyons could lead to local levels of pollution that could be higher than in a low-density urban setting. The holistic approach to sustainable urban planning implies using different models in an integrated way that is capable of simulating the urban system. As the interconnection of such models is not a trivial task, one of the key elements that may be applied is the description of the urban geometric properties in an “interoperable” way. Focusing on air quality as one of the most pronounced urban problems, the geometric aspects of a city may be described by objects such as those defined in CityGML, so that an appropriate air quality model can be applied for estimating the quality of the urban air on the basis of atmospheric flow and chemistry equations. It is generally admitted that an ontology-based approach can provide a generic and robust way to interconnect different models. However, a direct approach, that consists in establishing correspondences between concepts, is not sufficient in the present situation. One has to take into account, among other things, the computations involved in the correspondences between concepts. In this paper we first present theoretical background and motivations for the interconnection of 3D city models and other models related to sustainable development and urban planning. Then we present a practical experiment based on the interconnection of CityGML with an air quality model. Our approach is based on the creation of an ontology of air quality models and on the extension of an ontology of urban planning process (OUPP) that acts as an ontology mediator.",
"title": ""
},
{
"docid": "abef10b620026b2c054ca69a3c75f930",
"text": "The idea that general intelligence may be more variable in males than in females has a long history. In recent years it has been presented as a reason that there is little, if any, mean sex difference in general intelligence, yet males tend to be overrepresented at both the top and bottom ends of its overall, presumably normal, distribution. Clear analysis of the actual distribution of general intelligence based on large and appropriately population-representative samples is rare, however. Using two population-wide surveys of general intelligence in 11-year-olds in Scotland, we showed that there were substantial departures from normality in the distribution, with less variability in the higher range than in the lower. Despite mean IQ-scale scores of 100, modal scores were about 105. Even above modal level, males showed more variability than females. This is consistent with a model of the population distribution of general intelligence as a mixture of two essentially normal distributions, one reflecting normal variation in general intelligence and one refecting normal variation in effects of genetic and environmental conditions involving mental retardation. Though present at the high end of the distribution, sex differences in variability did not appear to account for sex differences in high-level achievement.",
"title": ""
},
{
"docid": "c5a36e3b8196815fea6b5db825c09133",
"text": "In this paper, solutions for developing low cost electronics for antenna transceivers that take advantage of the stable electrical properties of the organic substrate liquid crystal polymer (LCP) has been presented. Three important ingredients in RF wireless transceivers namely embedded passives, a dual band filter and a RFid antenna have been designed and fabricated on LCP. Test results of all 3 of the structures show good agreement between the simulated and measured results over their respective bandwidths, demonstrating stable performance of the LCP substrate.",
"title": ""
},
{
"docid": "aed7133c143edbe0e1c6f6dfcddee9ec",
"text": "This paper describes a version of the auditory image model (AIM) [1] implemented in MATLAB. It is referred to as “aim-mat” and it includes the basic modules that enable AIM to simulate the spectral analysis, neural encoding and temporal integration performed by the auditory system. The dynamic representations produced by non-static sounds can be viewed on a frame-by-frame basis or in movies with synchronized sound. The software has a sophisticated graphical user interface designed to facilitate the auditory modelling. It is also possible to add MATLAB code and complete modules to aim-mat. The software can be downloaded from http://www.mrccbu.cam.ac.uk/cnbh/aimmanual",
"title": ""
},
{
"docid": "c2c1c8e97858bb6e8541bfa662ac4db8",
"text": "In exploring the question of whether a computer program is behaving creatively, it is important to be explicit, and if possible formal, about the criteria that are being applied in making judgements of creativity. We propose a formal (and rather simplified) outline of the relevant attributes of a potentially creative program. Based on this, we posit a number of formal criteria that could be applied to rate the extent to which the program has behaved creatively. A guiding principle is that the question of what computational mechanisms might lead to creative behaviour is open and empirical, and hence we should clearly distinguish between judgements about creative achievements and theoretical proposals about potentially creative mechanisms. The intention is to focus, clarify and make more concrete the debate about creative",
"title": ""
},
{
"docid": "cfaeeb000232ade838ad751b7b404a66",
"text": "Meyer has recently introduced an image decomposition model to split an image into two components: a geometrical component and a texture (oscillatory) component. Inspired by his work, numerical models have been developed to carry out the decomposition of gray scale images. In this paper, we propose a decomposition algorithm for color images. We introduce a generalization of Meyer s G norm to RGB vectorial color images, and use Chromaticity and Brightness color model with total variation minimization. We illustrate our approach with numerical examples. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "1c2f9c3ed21ab5e3e6a0b17ae8bfc059",
"text": "The purpose of this study is to analyze the relationship among Person Organization Fit (POF), Organizational Commitment (OC) and Knowledge Sharing Attitude (KSA). The paper develops a conceptual frame based on a theory and literature review. A quantitative approach has been used to measure the level of POF and OC as well as to explore the relationship of these variables with KSA & with each other by using a sample of 315 academic managers of public sector institutions of higher education. POF has a positive relationship with OC and KSA. A positive relationship also exists between OC and KSA. It would be an effective contribution in the existing body of knowledge. Managers and other stakeholders may be helped to recognize the significance of POF, OC and KSA as well as their relationship with each other for ensuring selection of employee’s best fitted in the organization and for creating and maintaining a conducive environment for improving organizational commitment and knowledge sharing of the employees which will ultimately result in enhanced efficacy and effectiveness of the organization.",
"title": ""
},
{
"docid": "621ce8bf645f9d2c9d142e119a95df01",
"text": "This study examined the impact of mobile communications on interpersonal relationships in daily life. Based on a nationwide survey in Japan, landline phone, mobile voice phone, mobile mail (text messaging), and PC e-mail were compared to assess their usage in terms of social network and psychological factors. The results indicated that young, nonfamily-related pairs of friends, living close to each other with frequent faceto-face contact were more likely to use mobile media. Social skill levels are negatively correlated with relative preference for mobile mail in comparison with mobile voice phone. These findings suggest that mobile mail is preferable for Japanese young people who tend to avoid direct communication and that its use maintains existing bonds rather than create new ones.",
"title": ""
},
{
"docid": "ed9a02a856782a89476bcf233f4c9488",
"text": "This paper examines the role of IT in developing collaborative consumption. We present a study of the multi-sided platform goCatch, which is widely recognized as a mobile application and digital disruptor in the Australian transport industry. From our investigation, we find that goCatch uses IT to create situational-based and object-based opportunities to enable collaborative consumption and in turn digital disruption to the incumbent industry. We also highlight the factors to consider in developing a mobile application to connect with customers, and serve as a viable competitive option for responding to competition. Such research is necessary in order to better understand how service providers extract business value from digital technologies to formulate new breakthrough strategies, design compelling new products and services, and transform management processes. Ongoing work will reveal how m-commerce service providers can extract business value from a collaborative consumption model.",
"title": ""
},
{
"docid": "3ba586c49e662c29f373eb08ad9eb1cb",
"text": "The first pathologic alterations of the retina are seen in the vessel network. These modifications affect very differently arteries and veins, and the appearance and entity of the modification differ as the retinopathy becomes milder or more severe. In order to develop an automatic procedure for the diagnosis and grading of retinopathy, it is necessary to be able to discriminate arteries from veins. The problem is complicated by the similarity in the descriptive features of these two structures and by the contrast and luminosity variability of the retina. We developed a new algorithm for classifying the vessels, which exploits the peculiarities of retinal images. By applying a divide et imperaapproach that partitioned a concentric zone around the optic disc into quadrants, we were able to perform a more robust local classification analysis. The results obtained by the proposed technique were compared with those provided by a manual classification on a validation set of 443 vessels and reached an overall classification error of 12 %, which reduces to 7 % if only the diagnostically important retinal vessels are considered.",
"title": ""
},
{
"docid": "44cad9f5ed673152e52c7aab66a8e6f2",
"text": "BACKGROUND\nHuman leech infestation is a disease of the poor who live in rural areas and use water contaminated with leeches. Like any other body orifices, vagina can also be infested by leech when females use contaminated water for bathing and/or douching. Although this condition is very rare in postmenopausal women, it causes morbidities and mortalities.\n\n\nCASE DETAILS\nA 70 year old Para X (all alive) abortion I mother, postmenopausal for the last 20 years, presented with vaginal bleeding of 3 weeks duration to Gimbie Adventist Hospital, Western Ethiopia. On examination, she had deranged vital signs and there was a dark moving worm attached to the cervical os. She was admitted with the diagnosis of hypovolumic shock and severe anemia secondary to postmenopausal vaginal bleeding. After the patient was stabilized with intravenous crystalloids, the leech was removed from the vagina. She was then transfused with two units of whole blood and discharged with good condition on the 3(rd) post procedure day with ferrous sulphate.\n\n\nCONCLUSION\nVaginal leech infestation in postmenopausal woman can cause hypovolumic shock and severe anemia. Therefore, in order to decrease morbidities from failure or delay in making the diagnosis, health care providers should consider the possibility of vaginal leech infestation in postmenopausal woman from rural areas and those who use river water for drinking, bathing and/or douching and presented with vaginal bleeding. In addition, the importance of using clean water and improving access to safe water should be emphasized.",
"title": ""
},
{
"docid": "a0fe4a04b2fd17f7df86cd2768fdf80c",
"text": "Line labelling has been used to determine whether a two-dimensional (2D) line drawing object is a possible or impossible representation of a three-dimensional (3D) solid object. However, the results are not sufficiently robust because the existing line labelling methods do not have any validation method to verify their own result. In this research paper, the concept of graph colouring is applied to a validation technique for a labelled 2D line drawing. As a result, a graph colouring algorithm for validating labelled 2D line drawings is presented. A high-level programming language, MATLAB R2009a, and two primitive 2D line drawing classes, prism and pyramid are used to show how the algorithms can be implemented. The proposed algorithm also shows that the minimum number of colours needed to colour the labelled 2D line drawing object is equal to 3 for prisms and 1 n − for pyramids, where n is the number of vertices (junctions) in the pyramid objects.",
"title": ""
},
{
"docid": "ed3a859e2cea465a6d34c556fec860d9",
"text": "Multi-word expressions constitute a significant portion of the lexicon of every natural language, and handling them correctly is mandatory for various NLP applications. Yet such entities are notoriously hard to define, and are consequently missing from standard lexicons and dictionaries. Multi-word expressions exhibit idiosyncratic behavior on various levels: orthographic, morphological, syntactic and semantic. In this work we take advantage of the morphological and syntactic idiosyncrasy of Hebrew noun compounds and employ it to extract such expressions from text corpora. We show that relying on linguistic information dramatically improves the accuracy of compound extraction, reducing over one third of the errors compared with the best baseline.",
"title": ""
},
{
"docid": "9b0114697dc6c260610d0badc1d7a2a4",
"text": "This review captures the synthesis, assembly, properties, and applications of copper chalcogenide NCs, which have achieved significant research interest in the last decade due to their compositional and structural versatility. The outstanding functional properties of these materials stems from the relationship between their band structure and defect concentration, including charge carrier concentration and electronic conductivity character, which consequently affects their optoelectronic, optical, and plasmonic properties. This, combined with several metastable crystal phases and stoichiometries and the low energy of formation of defects, makes the reproducible synthesis of these materials, with tunable parameters, remarkable. Further to this, the review captures the progress of the hierarchical assembly of these NCs, which bridges the link between their discrete and collective properties. Their ubiquitous application set has cross-cut energy conversion (photovoltaics, photocatalysis, thermoelectrics), energy storage (lithium-ion batteries, hydrogen generation), emissive materials (plasmonics, LEDs, biolabelling), sensors (electrochemical, biochemical), biomedical devices (magnetic resonance imaging, X-ray computer tomography), and medical therapies (photochemothermal therapies, immunotherapy, radiotherapy, and drug delivery). The confluence of advances in the synthesis, assembly, and application of these NCs in the past decade has the potential to significantly impact society, both economically and environmentally.",
"title": ""
},
{
"docid": "3c017a50302e8a09eff32b97474433a1",
"text": "Few concepts embody the goals of artificial intelligence as well as fully autonomous robots. Countless films and stories have been made that focus on a future filled with autonomous agents that complete menial tasks or run errands that humans do not want or are too busy to carry out. One such task is driving automobiles. In this paper, we summarize the work we have done towards a future of fully-autonomous vehicles, specifically coordinating such vehicles safely and efficiently at intersections. We then discuss the implications this work has for other areas of AI, including planning, multiagent learning, and computer vision.",
"title": ""
},
{
"docid": "820f67fa3521ee4af7da0e022a8d0be3",
"text": "The visual appearance of rain is highly complex. Unlike the particles that cause other weather conditions such as haze and fog, rain drops are large and visible to the naked eye. Each drop refracts and reflects both scene radiance and environmental illumination towards an observer. As a result, a spatially distributed ensemble of drops moving at high velocities (rain) produces complex spatial and temporal intensity fluctuations in images and videos. To analyze the effects of rain, it is essential to understand the visual appearance of a single rain drop. In this paper, we develop geometric and photometric models for the refraction through, and reflection (both specular and internal) from, a rain drop. Our geometric and photometric models show that each rain drop behaves like a wide-angle lens that redirects light from a large field of view towards the observer. From this, we observe that in spite of being a transparent object, the brightness of the drop does not depend strongly on the brightness of the background. Our models provide the fundamental tools to analyze the complex effects of rain. Thus, we believe our work has implications for vision in bad weather as well as for efficient rendering of rain in computer graphics.",
"title": ""
}
] | scidocsrr |
624e607dbd27503e328cfd000f7b9ac3 | A Novel Variable Reluctance Resolver with Nonoverlapping Tooth–Coil Windings | [
{
"docid": "94cb308e7b39071db4eda05c5ff16d95",
"text": "A resolver generates a pair of signals proportional to the sine and cosine of the angular position of its shaft. A new low-cost method for converting the amplitudes of these sine/cosine transducer signals into a measure of the input angle without using lookup tables is proposed. The new method takes advantage of the components used to operate the resolver, the excitation (carrier) signal in particular. This is a feedforward method based on comparing the amplitudes of the resolver signals to those of the excitation signal together with another shifted by pi/2. A simple method is then used to estimate the shaft angle through this comparison technique. The poor precision of comparison of the signals around their highly nonlinear peak regions is avoided by using a simple technique that relies only on the alternating pseudolinear segments of the signals. This results in a better overall accuracy of the converter. Beside simplicity of implementation, the proposed scheme offers the advantage of robustness to amplitude fluctuation of the transducer excitation signal.",
"title": ""
},
{
"docid": "b40b81e25501b08a07c64f68c851f4a6",
"text": "Variable reluctance (VR) resolver is widely used in traction motor for battery electric vehicle as well as hybrid electric vehicle as a rotor position sensor. VR resolver generates absolute position signal by using resolver-to-digital converter (RDC) in order to deliver exact position of permanent magnets in a rotor of traction motor to motor controller. This paper deals with fault diagnosis of VR resolver by using co-simulation analysis with RDC for position angle detection. As fault conditions, eccentricity of VR resolver, short circuit condition of excitation coil and output signal coils, and material problem of silicon steel in a view point of permeability are considered. 2D FEM is used for the output signal waveforms of SIN, COS and these waveforms are converted into absolute position angle by using the algorithm of RDC. For the verification of proposed analysis results, experiment on fault conditions was conducted and compared with simulation ones.",
"title": ""
}
] | [
{
"docid": "e7230519f0bd45b70c1cbd42f09cb9e8",
"text": "Environmental isolates belonging to the genus Acidovorax play a crucial role in degrading a wide range of pollutants. Studies on Acidovorax are currently limited for many species due to the lack of genetic tools. Here, we described the use of the replicon from a small, cryptic plasmid indigenous to Acidovorx temperans strain CB2, to generate stably maintained shuttle vectors. In addition, we have developed a scarless gene knockout technique, as well as establishing green fluorescent protein (GFP) reporter and complementation systems. Taken collectively, these tools will improve genetic manipulations in the genus Acidovorax.",
"title": ""
},
{
"docid": "2fbfe1fa8cda571a931b700cbb18f46e",
"text": "A low-noise front-end and its controller are proposed for capacitive touch screen panels. The proposed front-end circuit based on a ΔΣ ADC uses differential sensing and integration scheme to maximize the input dynamic range. In addition, supply and internal reference voltage noise are effectively removed in the sensed touch signal. Furthermore, the demodulation process in front of the ΔΣ ADC provides the maximized oversampling ratio (OSR) so that the scan rate can be increased at the targeted resolution. The proposed IC is implemented in a mixed-mode 0.18-μm CMOS process. The measurement is performed on a bar-patterned 4.3-inch touch screen panel with 12 driving lines and 8 sensing channels. The report rate is 100 Hz, and SNR and spatial jitter are 54 dB and 0.11 mm, respectively. The chip area is 3 × 3 mm2 and total power consumption is 2.9 mW with 1.8-V and 3.3-V supply.",
"title": ""
},
{
"docid": "8ae1ef032c0a949aa31b3ca8bc024cb5",
"text": "Measuring intellectual capital is on the agenda of most 21st century organisations. This paper takes a knowledge-based view of the firm and discusses the importance of measuring organizational knowledge assets. Knowledge assets underpin capabilities and core competencies of any organisation. Therefore, they play a key strategic role and need to be measured. This paper reviews the existing approaches for measuring knowledge based assets and then introduces the knowledge asset map which integrates existing approaches in order to achieve comprehensiveness. The paper then introduces the knowledge asset dashboard to clarify the important actor/infrastructure relationship, which elucidates the dynamic nature of these assets. Finally, the paper suggests to visualise the value pathways of knowledge assets before designing strategic key performance indicators which can then be used to test the assumed causal relationships. This will enable organisations to manage and report these key value drivers in today’s economy. Introduction In the last decade management literature has paid significant attention to the role of knowledge for global competitiveness in the 21st century. It is recognised as a durable and more sustainable strategic resource to acquire and maintain competitive advantages (Barney, 1991a; Drucker, 1988; Grant, 1991a). Today’s business world is characterised by phenomena such as e-business, globalisation, higher degrees of competitiveness, fast evolution of new technology, rapidly changing client demands, as well as changing economic and political structures. In this new context companies need to develop clearly defined strategies that will give them a competitive advantage (Porter, 2001; Barney, 1991a). For this, organisations have to understand which capabilities they need in order to gain and maintain this competitive advantage (Barney, 1991a; Prahalad and Hamel, 1990). Organizational capabilities are based on knowledge. Thus, knowledge is a resource that forms the foundation of the company’s capabilities. Capabilities combine to The Emerald Research Register for this journal is available at The current issue and full text archive of this journal is available at www.emeraldinsight.com/researchregister www.emeraldinsight.com/1463-7154.htm The authors would like to thank, Göran Roos, Steven Pike, Oliver Gupta, as well as the two anonymous reviewers for their valuable comments which helped us to improve this paper. Intellectual capital",
"title": ""
},
{
"docid": "d909528f98e49f8107bf0cee7a83bbfe",
"text": "INTRODUCTION\nThe increasing use of cone-beam computed tomography in orthodontics has been coupled with heightened concern about the long-term risks of x-ray exposure in orthodontic populations. An industry response to this has been to offer low-exposure alternative scanning options in newer cone-beam computed tomography models.\n\n\nMETHODS\nEffective doses resulting from various combinations of field of view size and field location comparing child and adult anthropomorphic phantoms with the recently introduced i-CAT FLX cone-beam computed tomography unit (Imaging Sciences, Hatfield, Pa) were measured with optical stimulated dosimetry using previously validated protocols. Scan protocols included high resolution (360° rotation, 600 image frames, 120 kV[p], 5 mA, 7.4 seconds), standard (360°, 300 frames, 120 kV[p], 5 mA, 3.7 seconds), QuickScan (180°, 160 frames, 120 kV[p], 5 mA, 2 seconds), and QuickScan+ (180°, 160 frames, 90 kV[p], 3 mA, 2 seconds). Contrast-to-noise ratio was calculated as a quantitative measure of image quality for the various exposure options using the QUART DVT phantom.\n\n\nRESULTS\nChild phantom doses were on average 36% greater than adult phantom doses. QuickScan+ protocols resulted in significantly lower doses than standard protocols for the child (P = 0.0167) and adult (P = 0.0055) phantoms. The 13 × 16-cm cephalometric fields of view ranged from 11 to 85 μSv in the adult phantom and 18 to 120 μSv in the child phantom for the QuickScan+ and standard protocols, respectively. The contrast-to-noise ratio was reduced by approximately two thirds when comparing QuickScan+ with standard exposure parameters.\n\n\nCONCLUSIONS\nQuickScan+ effective doses are comparable with conventional panoramic examinations. Significant dose reductions are accompanied by significant reductions in image quality. However, this trade-off might be acceptable for certain diagnostic tasks such as interim assessment of treatment results.",
"title": ""
},
{
"docid": "6f56fca8d3df57619866d9520f79e1a8",
"text": "This paper explores how the remaining useful life (RUL) can be assessed for complex systems whose internal state variables are either inaccessible to sensors or hard to measure under operational conditions. Consequently, inference and estimation techniques need to be applied on indirect measurements, anticipated operational conditions, and historical data for which a Bayesian statistical approach is suitable. Models of electrochemical processes in the form of equivalent electric circuit parameters were combined with statistical models of state transitions, aging processes, and measurement fidelity in a formal framework. Relevance vector machines (RVMs) and several different particle filters (PFs) are examined for remaining life prediction and for providing uncertainty bounds. Results are shown on battery data.",
"title": ""
},
{
"docid": "b32b16971f9dd1375785a85617b3bd2a",
"text": "White matter hyperintensities (WMHs) in the brain are the consequence of cerebral small vessel disease, and can easily be detected on MRI. Over the past three decades, research has shown that the presence and extent of white matter hyperintense signals on MRI are important for clinical outcome, in terms of cognitive and functional impairment. Large, longitudinal population-based and hospital-based studies have confirmed a dose-dependent relationship between WMHs and clinical outcome, and have demonstrated a causal link between large confluent WMHs and dementia and disability. Adequate differential diagnostic assessment and management is of the utmost importance in any patient, but most notably those with incipient cognitive impairment. Novel imaging techniques such as diffusion tensor imaging might reveal subtle damage before it is visible on standard MRI. Even in Alzheimer disease, which is thought to be primarily caused by amyloid, vascular pathology, such as small vessel disease, may be of greater importance than amyloid itself in terms of influencing the disease course, especially in older individuals. Modification of risk factors for small vessel disease could be an important therapeutic goal, although evidence for effective interventions is still lacking. Here, we provide a timely Review on WMHs, including their relationship with cognitive decline and dementia.",
"title": ""
},
{
"docid": "dfccff16f4600e8cc297296481e50b7b",
"text": "Trust models have been recently suggested as an effective security mechanism for Wireless Sensor Networks (WSNs). Considerable research has been done on modeling trust. However, most current research work only takes communication behavior into account to calculate sensor nodes' trust value, which is not enough for trust evaluation due to the widespread malicious attacks. In this paper, we propose an Efficient Distributed Trust Model (EDTM) for WSNs. First, according to the number of packets received by sensor nodes, direct trust and recommendation trust are selectively calculated. Then, communication trust, energy trust and data trust are considered during the calculation of direct trust. Furthermore, trust reliability and familiarity are defined to improve the accuracy of recommendation trust. The proposed EDTM can evaluate trustworthiness of sensor nodes more precisely and prevent the security breaches more effectively. Simulation results show that EDTM outperforms other similar models, e.g., NBBTE trust model.",
"title": ""
},
{
"docid": "3f206b161dc55aea204dda594127bf3d",
"text": "A key challenge in fine-grained recognition is how to find and represent discriminative local regions. Recent attention models are capable of learning discriminative region localizers only from category labels with reinforcement learning. However, not utilizing any explicit part information, they are not able to accurately find multiple distinctive regions. In this work, we introduce an attribute-guided attention localization scheme where the local region localizers are learned under the guidance of part attribute descriptions. By designing a novel reward strategy, we are able to learn to locate regions that are spatially and semantically distinctive with reinforcement learning algorithm. The attribute labeling requirement of the scheme is more amenable than the accurate part location annotation required by traditional part-based fine-grained recognition methods. Experimental results on the CUB-200-2011 dataset [1] demonstrate the superiority of the proposed scheme on both fine-grained recognition and attribute recognition.",
"title": ""
},
{
"docid": "c4387f3c791acc54d0a0655221947c8b",
"text": "An emerging Internet application, IPTV, has the potential to flood Internet access and backbone ISPs with massive amounts of new traffic. Although many architectures are possible for IPTV video distribution, several mesh-pull P2P architectures have been successfully deployed on the Internet. In order to gain insights into mesh-pull P2P IPTV systems and the traffic loads they place on ISPs, we have undertaken an in-depth measurement study of one of the most popular IPTV systems, namely, PPLive. We have developed a dedicated PPLive crawler, which enables us to study the global characteristics of the mesh-pull PPLive system. We have also collected extensive packet traces for various different measurement scenarios, including both campus access networks and residential access networks. The measurement results obtained through these platforms bring important insights into P2P IPTV systems. Specifically, our results show the following. 1) P2P IPTV users have the similar viewing behaviors as regular TV users. 2) During its session, a peer exchanges video data dynamically with a large number of peers. 3) A small set of super peers act as video proxy and contribute significantly to video data uploading. 4) Users in the measured P2P IPTV system still suffer from long start-up delays and playback lags, ranging from several seconds to a couple of minutes. Insights obtained in this study will be valuable for the development and deployment of future P2P IPTV systems.",
"title": ""
},
{
"docid": "52fd33335eb177f989ae1b754527327a",
"text": "For robot tutors, autonomy and personalizations are important factors in order to engage users as well as to personalize the content and interaction according to the needs of individuals. is paper presents the Programming Cognitive Robot (ProCRob) soware architecture to target personalized social robotics in two complementary ways. ProCRob supports the development and personalization of social robot applications by teachers and therapists without computer programming background. It also supports the development of autonomous robots which can adapt according to the human-robot interaction context. ProCRob is based on our previous research on autonomous robotics and has been developed since 2015 by a multi-disciplinary team of researchers from the elds of AI, Robotics and Psychology as well as artists and designers at the University of Luxembourg. ProCRob is currently being used and further developed for therapy of children with autism, and for encouraging rehabilitation activities in patients with post-stroke. is paper presents a summary of ProCRob and its application in autism.",
"title": ""
},
{
"docid": "5da804fa4c1474e27a1c91fcf5682e20",
"text": "We present an overview of Candide, a system for automatic translat ion of French text to English text. Candide uses methods of information theory and statistics to develop a probabili ty model of the translation process. This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate English translations of previously unseen French sentences. This paper provides a tutorial in these methods, discussions of the training and operation of the system, and a summary of test results. 1. I n t r o d u c t i o n Candide is an experimental computer program, now in its fifth year of development at IBM, for translation of French text to Enghsh text. Our goal is to perform fuRy-automatic, high-quality text totext translation. However, because we are still far from achieving this goal, the program can be used in both fully-automatic and translator 's-assistant modes. Our approach is founded upon the statistical analysis of language. Our chief tools axe the source-channel model of communication, parametric probabili ty models of language and translation, and an assortment of numerical algorithms for training such models from examples. This paper presents elementary expositions of each of these ideas, and explains how they have been assembled to produce Caadide. In Section 2 we introduce the necessary ideas from information theory and statistics. The reader is assumed to know elementary probabili ty theory at the level of [1]. In Sections 3 and 4 we discuss our language and translation models. In Section 5 we describe the operation of Candide as it translates a French document. In Section 6 we present results of our internal evaluations and the AB.PA Machine Translation Project evaluations. Section 7 is a summary and conclusion. 2 . Stat is t ical Trans la t ion Consider the problem of translating French text to English text. Given a French sentence f , we imagine that it was originally rendered as an equivalent Enghsh sentence e. To obtain the French, the Enghsh was t ransmit ted over a noisy communication channel, which has the curious property that English sentences sent into it emerge as their French translations. The central assumption of Candide's design is that the characteristics of this channel can be determined experimentally, and expressed mathematically. *Current address: Renaissance Technologies, Stony Brook, NY ~ English-to-French I f e Channel \" _[ French-to-English -] Decoder 6 Figure 1: The Source-Channel Formalism of Translation. Here f is the French text to be translated, e is the putat ive original English rendering, and 6 is the English translation. This formalism can be exploited to yield French-to-English translations as follows. Let us write P r (e I f ) for the probability that e was the original English rendering of the French f. Given a French sentence f, the problem of automatic translation reduces to finding the English sentence tha t maximizes P.r(e I f) . That is, we seek 6 = argmsx e Pr (e I f) . By virtue of Bayes' Theorem, we have = argmax Pr(e If ) = argmax Pr(f I e)Pr(e) (1) e e The term P r ( f l e ) models the probabili ty that f emerges from the channel when e is its input. We call this function the translation model; its domain is all pairs (f, e) of French and English word-strings. The term Pr (e ) models the a priori probability that e was supp led as the channel input. We call this function the language model. 
Each of these fac tors the translation model and the language model independent ly produces a score for a candidate English translat ion e. The translation model ensures that the words of e express the ideas of f, and the language model ensures that e is a grammatical sentence. Candide sehcts as its translat ion the e that maximizes their product. This discussion begs two impor tant questions. First , where do the models P r ( f [ e) and Pr (e ) come from? Second, even if we can get our hands on them, how can we search the set of all English strings to find 6? These questions are addressed in the next two sections. 2.1. P robab i l i ty Models We begin with a brief detour into probabili ty theory. A probability model is a mathematical formula that purports to express the chance of some observation. A parametric model is a probability model with adjustable parameters, which can be changed to make the model bet ter match some body of data. Let us write c for a body of da ta to be modeled, and 0 for a vector of parameters. The quanti ty Prs (c ) , computed according to some formula involving c and 0, is called the hkelihood 157 [Human Language Technology, Plainsboro, 1994]",
"title": ""
},
{
"docid": "a44264e4c382204606fdb140ab485617",
"text": "Atrophoderma vermiculata is a rare genodermatosis with usual onset in childhood, characterized by a \"honey-combed\" reticular atrophy of the cheeks. The course is generally slow, with progressive worsening. We report successful treatment of 2 patients by means of the carbon dioxide and 585 nm pulsed dye lasers.",
"title": ""
},
{
"docid": "ac08bc7d30b03fcb5cbe9f6354235ccd",
"text": "The type III secretion (T3S) pathway allows bacteria to inject effector proteins into the cytosol of target animal or plant cells. T3S systems evolved into seven families that were distributed among Gram-negative bacteria by horizontal gene transfer. There are probably a few hundred effectors interfering with control and signaling in eukaryotic cells and offering a wealth of new tools to cell biologists.",
"title": ""
},
{
"docid": "e96cf46cc99b3eff60d32f3feb8afc47",
"text": "We present an field programmable gate arrays (FPGA) based implementation of the popular Viola-Jones face detection algorithm, which is an essential building block in many applications such as video surveillance and tracking. Our implementation is a complete system level hardware design described in a hardware description language and validated on the affordable DE2-115 evaluation board. Our primary objective is to study the achievable performance with a low-end FPGA chip based implementation. In addition, we release to the public domain the entire project. We hope that this will enable other researchers to easily replicate and compare their results to ours and that it will encourage and facilitate further research and educational ideas in the areas of image processing, computer vision, and advanced digital design and FPGA prototyping. 2017 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "42d861f1b332db23e5dca67b6247828d",
"text": "Information systems and intelligent knowledge processing are playing an increasing role in business, science and technology. Recently, advanced information systems have evolved to facilitate the co-evolution of human and information networks within communities. These advanced information systems use various paradigms including artificial intelligence, knowledge management, and neural science as well as conventional information processing paradigms.",
"title": ""
},
{
"docid": "db0581e9f46516ee1ed26937bbec515b",
"text": "In this paper we address the problem of offline Arabic handwriting word recognition. Offline recognition of handwritten words is a difficult task due to the high variability and uncertainty of human writing. The majority of the recent systems are constrained by the size of the lexicon to deal with and the number of writers. In this paper, we propose an approach for multi-writers Arabic handwritten words recognition using multiple Bayesian networks. First, we cut the image in several blocks. For each block, we compute a vector of descriptors. Then, we use K-means to cluster the low-level features including Zernik and Hu moments. Finally, we apply four variants of Bayesian networks classifiers (Naïve Bayes, Tree Augmented Naïve Bayes (TAN), Forest Augmented Naïve Bayes (FAN) and DBN (dynamic bayesian network) to classify the whole image of tunisian city name. The results demonstrate FAN and DBN outperform good recognition rates.",
"title": ""
},
{
"docid": "6f6733c35f78b00b771cf7099c953954",
"text": "This paper proposes an asymmetrical pulse width modulation (APWM) with frequency tracking control of full bridge series resonant inverter for induction heating application. In this method, APWM is used as power regulation, and phased locked loop (PLL) is used to attain zero-voltage-switching (ZVS) over a wide load range. The complete closed loop control model is obtained using small signal analysis. The validity of the proposed control is verified by simulation results.",
"title": ""
},
{
"docid": "5e0bcb6cf54879c65e9da7a08d97bc6b",
"text": "The present study made an attempt to analyze the existing buying behaviour of Instant Food Products by individual households and to predict the demand for Instant Food Products of Hyderabad city in Andra Padesh .All the respondents were aware of pickles and Sambar masala but only 56.67 per cent of respondents were aware of Dosa/Idli mix. About 96.11 per cent consumers of Dosa/Idli mix and more than half of consumers of pickles and Sambar masala prepared their own. Low cost of home preparation and differences in tastes were the major reasons for non consumption, whereas ready availability and save time of preparation were the reasons for consuming Instant Food Products. Retail shops are the major source of information and source of purchase of Instant Food Products. The average monthly expenditure on Instant Food Products was found to be highest in higher income groups. The average per capita purchase and per capita expenditure on Instant food Products had a positive relationship with income of households.High price and poor taste were the reasons for not purchasing particular brand whereas best quality, retailers influence and ready availability were considered for preferring particular brand of products by the consumers.",
"title": ""
},
{
"docid": "8bd367e82f7a5c046f6887c5edbf51c5",
"text": "Internet of Things (IoT) is a fast-growing innovation that will greatly change the way humans live. It can be thought of as the next big step in Internet technology. What really enable IoT to be a possibility are the various technologies that build it up. The IoT architecture mainly requires two types of technologies: data acquisition technologies and networking technologies. Many technologies are currently present that aim to serve as components to the IoT paradigm. This paper aims to categorize the various technologies present that are commonly used by Internet of Things.",
"title": ""
},
{
"docid": "b91e67b9ae7dbad0100c0fa98d2203e5",
"text": "We develop a flexible Conditional Random Field framework for supervised preference aggregation, which combines preferences from multiple experts over items to form a distribution over rankings. The distribution is based on an energy comprised of unary and pairwise potentials allowing us to effectively capture correlations between both items and experts. We describe procedures for learning in this modelnand demonstrate that inference can be done much more efficiently thannin analogous models. Experiments on benchmark tasks demonstrate significant performance gains over existing rank aggregation methods.",
"title": ""
}
] | scidocsrr |
23245dd3ef097c3662c9229f96b38ddf | Weakly Supervised Object Localization Using Things and Stuff Transfer | [
{
"docid": "28fd803428e8f40a4627e05a9464e97b",
"text": "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.",
"title": ""
},
{
"docid": "94160496e0a470dc278f71c67508ae21",
"text": "In this paper, we tackle the problem of co-localization in real-world images. Co-localization is the problem of simultaneously localizing (with bounding boxes) objects of the same class across a set of distinct images. Although similar problems such as co-segmentation and weakly supervised localization have been previously studied, we focus on being able to perform co-localization in real-world settings, which are typically characterized by large amounts of intra-class variation, inter-class diversity, and annotation noise. To address these issues, we present a joint image-box formulation for solving the co-localization problem, and show how it can be relaxed to a convex quadratic program which can be efficiently solved. We perform an extensive evaluation of our method compared to previous state-of-the-art approaches on the challenging PASCAL VOC 2007 and Object Discovery datasets. In addition, we also present a large-scale study of co-localization on ImageNet, involving ground-truth annotations for 3, 624 classes and approximately 1 million images.",
"title": ""
}
] | [
{
"docid": "221f28bc87e82f8264880c773b8b2fbe",
"text": "BACKGROUND\nMuscle weakness in old age is associated with physical function decline. Progressive resistance strength training (PRT) exercises are designed to increase strength.\n\n\nOBJECTIVES\nTo assess the effects of PRT on older people and identify adverse events.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialized Register (to March 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library 2007, Issue 2), MEDLINE (1966 to May 01, 2008), EMBASE (1980 to February 06 2007), CINAHL (1982 to July 01 2007) and two other electronic databases. We also searched reference lists of articles, reviewed conference abstracts and contacted authors.\n\n\nSELECTION CRITERIA\nRandomised controlled trials reporting physical outcomes of PRT for older people were included.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently selected trials, assessed trial quality and extracted data. Data were pooled where appropriate.\n\n\nMAIN RESULTS\nOne hundred and twenty one trials with 6700 participants were included. In most trials, PRT was performed two to three times per week and at a high intensity. PRT resulted in a small but significant improvement in physical ability (33 trials, 2172 participants; SMD 0.14, 95% CI 0.05 to 0.22). Functional limitation measures also showed improvements: e.g. there was a modest improvement in gait speed (24 trials, 1179 participants, MD 0.08 m/s, 95% CI 0.04 to 0.12); and a moderate to large effect for getting out of a chair (11 trials, 384 participants, SMD -0.94, 95% CI -1.49 to -0.38). PRT had a large positive effect on muscle strength (73 trials, 3059 participants, SMD 0.84, 95% CI 0.67 to 1.00). Participants with osteoarthritis reported a reduction in pain following PRT(6 trials, 503 participants, SMD -0.30, 95% CI -0.48 to -0.13). There was no evidence from 10 other trials (587 participants) that PRT had an effect on bodily pain. Adverse events were poorly recorded but adverse events related to musculoskeletal complaints, such as joint pain and muscle soreness, were reported in many of the studies that prospectively defined and monitored these events. Serious adverse events were rare, and no serious events were reported to be directly related to the exercise programme.\n\n\nAUTHORS' CONCLUSIONS\nThis review provides evidence that PRT is an effective intervention for improving physical functioning in older people, including improving strength and the performance of some simple and complex activities. However, some caution is needed with transferring these exercises for use with clinical populations because adverse events are not adequately reported.",
"title": ""
},
{
"docid": "ce5c5d0d0cb988c96f0363cfeb9610d4",
"text": "Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.",
"title": ""
},
{
"docid": "0dba7993e502824bda56bdcf80278c26",
"text": "The recent expansion of the Internet of Things (IoT) and the consequent explosion in the volume of data produced by smart devices have led to the outsourcing of data to designated data centers. However, to manage these huge data stores, centralized data centers, such as cloud storage cannot afford auspicious way. There are many challenges that must be addressed in the traditional network architecture due to the rapid growth in the diversity and number of devices connected to the internet, which is not designed to provide high availability, real-time data delivery, scalability, security, resilience, and low latency. To address these issues, this paper proposes a novel blockchain-based distributed cloud architecture with a software defined networking (SDN) enable controller fog nodes at the edge of the network to meet the required design principles. The proposed model is a distributed cloud architecture based on blockchain technology, which provides low-cost, secure, and on-demand access to the most competitive computing infrastructures in an IoT network. By creating a distributed cloud infrastructure, the proposed model enables cost-effective high-performance computing. Furthermore, to bring computing resources to the edge of the IoT network and allow low latency access to large amounts of data in a secure manner, we provide a secure distributed fog node architecture that uses SDN and blockchain techniques. Fog nodes are distributed fog computing entities that allow the deployment of fog services, and are formed by multiple computing resources at the edge of the IoT network. We evaluated the performance of our proposed architecture and compared it with the existing models using various performance measures. The results of our evaluation show that performance is improved by reducing the induced delay, reducing the response time, increasing throughput, and the ability to detect real-time attacks in the IoT network with low performance overheads.",
"title": ""
},
{
"docid": "4ed1c4f2fb1922acc9ee781eb1f9524e",
"text": "Across HCI and social computing platforms, mobile applications that support citizen science, empowering non-experts to explore, collect, and share data have emerged. While many of these efforts have been successful, it remains difficult to create citizen science applications without extensive programming expertise. To address this concern, we present Sensr, an authoring environment that enables people without programming skills to build mobile data collection and management tools for citizen science. We demonstrate how Sensr allows people without technical skills to create mobile applications. Findings from our case study demonstrate that our system successfully overcomes technical constraints and provides a simple way to create mobile data collection tools.",
"title": ""
},
{
"docid": "a112cd31e136054bdf9d34c82b960d95",
"text": "We propose a completely automatic approach for recognizing low resolution face images captured in uncontrolled environment. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face which simultaneously transforms the facial features of the low resolution and the high resolution training images such that the distance between them approximates the distance had both the images been captured under the same controlled imaging conditions. Stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken for computing the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching cost from a few reference images. Experimental evaluation on the real world challenging databases and comparison with the state-of-the-art super-resolution, classifier based and cross modal synthesis techniques show the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "fc8e227dbd257954435c164b2a6193f7",
"text": "To present three cases of arterial high flow priapism (HFP) and propose a management algorithm for this condition. We studied three children with post-traumatic arterial HFP (two patients with perineal trauma and one with penis trauma). Spontaneous resolution was observed in all the patients. The time of resolution by a return to a completely flaccid penis was different: 14, 27 and 36 days in each case. Absence of long-term damaging effects of arterial HFP on erectile tissue combined with the possibility of spontaneous resolution associated with blunt perineal trauma are suggestive signs for the introduction of an observation period in the management algorithm of HFP. Such a period may help to avoid unnecessary surgical intervention. Thus, these cases reinforce the decision to manage these patients conservatively and avoid angiographic embolization as a first therapeutic choice.",
"title": ""
},
{
"docid": "2e1a6dfb1208bc09a227c7e16ffc7b4f",
"text": "Cannabis sativa L. (Cannabaceae) is an important medicinal plant well known for its pharmacologic and therapeutic potency. Because of allogamous nature of this species, it is difficult to maintain its potency and efficacy if grown from the seeds. Therefore, chemical profile-based screening, selection of high yielding elite clones and their propagation using biotechnological tools is the most suitable way to maintain their genetic lines. In this regard, we report a simple and efficient method for the in vitro propagation of a screened and selected high yielding drug type variety of Cannabis sativa, MX-1 using synthetic seed technology. Axillary buds of Cannabis sativa isolated from aseptic multiple shoot cultures were successfully encapsulated in calcium alginate beads. The best gel complexation was achieved using 5 % sodium alginate with 50 mM CaCl2.2H2O. Regrowth and conversion after encapsulation was evaluated both under in vitro and in vivo conditions on different planting substrates. The addition of antimicrobial substance — Plant Preservative Mixture (PPM) had a positive effect on overall plantlet development. Encapsulated explants exhibited the best regrowth and conversion frequency on Murashige and Skoog medium supplemented with thidiazuron (TDZ 0.5 μM) and PPM (0.075 %) under in vitro conditions. Under in vivo conditions, 100 % conversion of encapsulated explants was obtained on 1:1 potting mix- fertilome with coco natural growth medium, moistened with full strength MS medium without TDZ, supplemented with 3 % sucrose and 0.5 % PPM. Plantlets regenerated from the encapsulated explants were hardened off and successfully transferred to the soil. These plants are selected to be used in mass cultivation for the production of biomass as a starting material for the isolation of THC as a bulk active pharmaceutical.",
"title": ""
},
{
"docid": "59cf7e5c4ef01d08e3c969c246342e3b",
"text": "A new overground body-weight support system called ZeroG has been developed that allows patients with severe gait impairments to practice gait and balance activities in a safe, controlled manner. The unloading system is capable of providing up to 300 lb of static support and 150 lb of dynamic (or constant force) support using a custom-series elastic actuator. The unloading system is mounted to a driven trolley, which rides along an overhead rail. We evaluated the performance of ZeroG's unloading system, as well as the trolley tracking system, using benchtop and human-subject testing. Average root-mean-square and peak errors in unloading were 2.2 and 7.2 percent, respectively, over the range of forces tested while trolley tracking errors were less than 3 degrees, indicating the system was able to maintain its position above the subject. We believe training with ZeroG will allow patients to practice activities that are critical to achieving functional independence at home and in the community.",
"title": ""
},
{
"docid": "df80b751fa78e0631ca51f6199cc822c",
"text": "OBJECTIVE\nHumane treatment and care of mentally ill people can be viewed from a historical perspective. Intramural (the institution) and extramural (the community) initiatives are not mutually exclusive.\n\n\nMETHOD\nThe evolution of the psychiatric institution in Canada as the primary method of care is presented from an historical perspective. A province-by-province review of provisions for mentally ill people prior to asylum construction reveals that humanitarian motives and a growing sensitivity to social and medical problems gave rise to institutional psychiatry. The influence of Great Britain, France, and, to a lesser extent, the United States in the construction of asylums in Canada is highlighted. The contemporary redirection of the Canadian mental health system toward \"dehospitalization\" is discussed and delineated.\n\n\nRESULTS\nEarly promoters of asylums were genuinely concerned with alleviating human suffering, which led to the separation of mental health services from the community and from those proffered to the criminal and indigent populations. While the results of the past institutional era were mixed, it is hoped that the \"care\" cycle will not repeat itself in the form of undesirable community alternatives.\n\n\nCONCLUSION\nSeverely psychiatrically disabled individuals can be cared for in the community if appropriate services exist.",
"title": ""
},
{
"docid": "67a52c021821f6e6c3ece1cc8114c3b2",
"text": "The purpose of this review was to present a comprehensive review of the scientific evidence available in the literature regarding the effect of altering the occlusal vertical dimens-ion (OVD) on producing temporomandibular disorders. The authors conducted a PubMed search with the following search terms 'temporoman-dibular disorders', 'occlusal vertical dimension', 'stomatognatic system', 'masticatory muscles' and 'skeletal muscle'. Bibliographies of all retrieved articles were consulted for additional publications. Hand-searched publications from 1938 were included. The literature review revealed a lack of well-designed studies. Traditional beliefs have been based on case reports and anecdotal opinions rather than on well-controlled clinical trials. The available evidence is weak and seems to indicate that the stomatognathic system has the ability to adapt rapidly to moderate changes in occlusal vertical dimension (OVD). Nevertheless, it should be taken into consideration that in some patients mild transient symptoms may occur, but they are most often self-limiting and without major consequence. In conclusion, there is no indication that permanent alteration in the OVD will produce long-lasting TMD symptoms. However, additional studies are needed.",
"title": ""
},
{
"docid": "14e2eecc36a1c08600598eb65678f99f",
"text": "The correct grasp of objects is a key aspect for the right fulfillment of a given task. Obtaining a good grasp requires algorithms to automatically determine proper contact points on the object as well as proper hand configurations, especially when dexterous manipulation is desired, and the quantification of a good grasp requires the definition of suitable grasp quality measures. This article reviews the quality measures proposed in the literature to evaluate grasp quality. The quality measures are classified into two groups according to the main aspect they evaluate: location of contact points on the object and hand configuration. The approaches that combine different measures from the two previous groups to obtain a global quality measure are also reviewed, as well as some measures related to human hand studies and grasp performance. Several examples are presented to illustrate and compare the performance of the reviewed measures.",
"title": ""
},
{
"docid": "476d80eda71ba451c740c4cb36a0042f",
"text": "This paper summarizes some of the literature on causal effects in mediation analysis. It presents causally-defined direct and indirect effects for continuous, binary, ordinal, nominal, and count variables. The expansion to non-continuous mediators and outcomes offers a broader array of causal mediation analyses than previously considered in structural equation modeling practice. A new result is the ability to handle mediation by a nominal variable. Examples with a binary outcome and a binary, ordinal or nominal mediator are given using Mplus to compute the effects. The causal effects require strong assumptions even in randomized designs, especially sequential ignorability, which is presumably often violated to some extent due to mediator-outcome confounding. To study the effects of violating this assumption, it is shown how a sensitivity analysis can be carried out. This can be used both in planning a new study and in evaluating the results of an existing study.",
"title": ""
},
{
"docid": "45c9ecc06dca6e18aae89ebf509d31d2",
"text": "For estimating causal effects of treatments, randomized experiments are generally considered the gold standard. Nevertheless, they are often infeasible to conduct for a variety of reasons, such as ethical concerns, excessive expense, or timeliness. Consequently, much of our knowledge of causal effects must come from non-randomized observational studies. This article will advocate the position that observational studies can and should be designed to approximate randomized experiments as closely as possible. In particular, observational studies should be designed using only background information to create subgroups of similar treated and control units, where 'similar' here refers to their distributions of background variables. Of great importance, this activity should be conducted without any access to any outcome data, thereby assuring the objectivity of the design. In many situations, this objective creation of subgroups of similar treated and control units, which are balanced with respect to covariates, can be accomplished using propensity score methods. The theoretical perspective underlying this position will be presented followed by a particular application in the context of the US tobacco litigation. This application uses propensity score methods to create subgroups of treated units (male current smokers) and control units (male never smokers) who are at least as similar with respect to their distributions of observed background characteristics as if they had been randomized. The collection of these subgroups then 'approximate' a randomized block experiment with respect to the observed covariates.",
"title": ""
},
{
"docid": "8074d30cb422922bc134d07547932685",
"text": "Research paper recommenders emerged over the last decade to ease finding publications relating to researchers' area of interest. The challenge was not just to provide researchers with very rich publications at any time, any place and in any form but to also offer the right publication to the right researcher in the right way. Several approaches exist in handling paper recommender systems. However, these approaches assumed the availability of the whole contents of the recommending papers to be freely accessible, which is not always true due to factors such as copyright restrictions. This paper presents a collaborative approach for research paper recommender system. By leveraging the advantages of collaborative filtering approach, we utilize the publicly available contextual metadata to infer the hidden associations that exist between research papers in order to personalize recommendations. The novelty of our proposed approach is that it provides personalized recommendations regardless of the research field and regardless of the user's expertise. Using a publicly available dataset, our proposed approach has recorded a significant improvement over other baseline methods in measuring both the overall performance and the ability to return relevant and useful publications at the top of the recommendation list.",
"title": ""
},
{
"docid": "e106df98a3d0240ed3e10840697bfc74",
"text": "Online question and answer (Q&A) services are facing key challenges to motivate domain experts to provide quick and high-quality answers. Recent systems seek to engage real-world experts by allowing them to set a price on their answers. This leads to a \"targeted\" Q&A model where users to ask questions to a target expert by paying the price. In this paper, we perform a case study on two emerging targeted Q&A systems Fenda (China) and Whale (US) to understand how monetary incentives affect user behavior. By analyzing a large dataset of 220K questions (worth 1 million USD), we find that payments indeed enable quick answers from experts, but also drive certain users to game the system for profits. In addition, this model requires users (experts) to proactively adjust their price to make profits. People who are unwilling to lower their prices are likely to hurt their income and engagement over time.",
"title": ""
},
{
"docid": "5fc9fe7bcc50aad948ebb32aefdb2689",
"text": "This paper explores the use of set expansion (SE) to improve question answering (QA) when the expected answer is a list of entities belonging to a certain class. Given a small set of seeds, SE algorithms mine textual resources to produce an extended list including additional members of the class represented by the seeds. We explore the hypothesis that a noise-resistant SE algorithm can be used to extend candidate answers produced by a QA system and generate a new list of answers that is better than the original list produced by the QA system. We further introduce a hybrid approach which combines the original answers from the QA system with the output from the SE algorithm. Experimental results for several state-of-the-art QA systems show that the hybrid system performs better than the QA systems alone when tested on list question data from past TREC evaluations.",
"title": ""
},
{
"docid": "c196444f2093afc3092f85b8fbb67da5",
"text": "The objective of this paper is to evaluate “human action recognition without human”. Motion representation is frequently discussed in human action recognition. We have examined several sophisticated options, such as dense trajectories (DT) and the two-stream convolutional neural network (CNN). However, some features from the background could be too strong, as shown in some recent studies on human action recognition. Therefore, we considered whether a background sequence alone can classify human actions in current large-scale action datasets (e.g., UCF101). In this paper, we propose a novel concept for human action analysis that is named “human action recognition without human”. An experiment clearly shows the effect of a background sequence for understanding an action label.",
"title": ""
},
{
"docid": "254f1d562996724781e2ef857edaac7d",
"text": "We propose a novel framework for controllable natural language transformation. Realizing that the requirement of parallel corpus is practically unsustainable for controllable generation tasks, an unsupervised training scheme is introduced. The crux of the framework is a deep neural encoder-decoder that is reinforced with text-transformation knowledge through auxiliary modules (called scorers). These scorers, based on off-the-shelf language processing tools, decide the learning scheme of the encoder-decoder based on its actions. We apply this framework for the text-transformation task of formalizing an input text by improving its readability grade; the degree of required formalization can be controlled by the user at run-time. Experiments on public datasets demonstrate the efficacy of our model towards: (a) transforming a given text to a more formal style, and (b) varying the amount of formalness in the output text based on the specified input control. Our code and datasets are released for academic use.",
"title": ""
},
{
"docid": "9edd6f8e6349689b71a351f5947497f7",
"text": "Convolutional Neural Networks (CNNs) have been applied to visual tracking with demonstrated success in recent years. Most CNN-based trackers utilize hierarchical features extracted from a certain layer to represent the target. However, features from a certain layer are not always effective for distinguishing the target object from the backgrounds especially in the presence of complicated interfering factors (e.g., heavy occlusion, background clutter, illumination variation, and shape deformation). In this work, we propose a CNN-based tracking algorithm which hedges deep features from different CNN layers to better distinguish target objects and background clutters. Correlation filters are applied to feature maps of each CNN layer to construct a weak tracker, and all weak trackers are hedged into a strong one. For robust visual tracking, we propose a hedge method to adaptively determine weights of weak classifiers by considering both the difference between the historical as well as instantaneous performance, and the difference among all weak trackers over time. In addition, we design a siamese network to define the loss of each weak tracker for the proposed hedge method. Extensive experiments on large benchmark datasets demonstrate the effectiveness of the proposed algorithm against the state-of-the-art tracking methods.",
"title": ""
},
{
"docid": "a5f557ddac63cd24a11c1490e0b4f6d4",
"text": "Continuous opinion dynamics optimizer (CODO) is an algorithm based on human collective opinion formation process for solving continuous optimization problems. In this paper, we have studied the impact of topology and introduction of leaders in the society on the optimization performance of CODO. We have introduced three new variants of CODO and studied the efficacy of algorithms on several benchmark functions. Experimentation demonstrates that scale free CODO performs significantly better than all algorithms. Also, the role played by individuals with different degrees during the optimization process is studied.",
"title": ""
}
] | scidocsrr |
f4b614cb9723511bfce27ae4db485ddd | A survey on the communication architectures in smart grid | [
{
"docid": "0b33249df17737a826dcaa197adccb74",
"text": "In the competitive electricity structure, demand response programs enable customers to react dynamically to changes in electricity prices. The implementation of such programs may reduce energy costs and increase reliability. To fully harness such benefits, existing load controllers and appliances need around-the-clock price information. Advances in the development and deployment of advanced meter infrastructures (AMIs), building automation systems (BASs), and various dedicated embedded control systems provide the capability to effectively address this requirement. In this paper we introduce a meter gateway architecture (MGA) to serve as a foundation for integrated control of loads by energy aggregators, facility hubs, and intelligent appliances. We discuss the requirements that motivate the architecture, describe its design, and illustrate its application to a small system with an intelligent appliance and a legacy appliance using a prototype implementation of an intelligent hub for the MGA and ZigBee wireless communications.",
"title": ""
}
] | [
{
"docid": "5249a94aa9d9dbb211bb73fa95651dfd",
"text": "Power and energy have become increasingly important concerns in the design and implementation of today's multicore/manycore chips. In this paper, we present two priority-based CPU scheduling algorithms, Algorithm Cache Miss Priority CPU Scheduler (CM-PCS) and Algorithm Context Switch Priority CPU Scheduler (CS-PCS), which take advantage of often ignored dynamic performance data, in order to reduce power consumption by over 20 percent with a significant increase in performance. Our algorithms utilize Linux cpusets and cores operating at different fixed frequencies. Many other techniques, including dynamic frequency scaling, can lower a core's frequency during the execution of a non-CPU intensive task, thus lowering performance. Our algorithms match processes to cores better suited to execute those processes in an effort to lower the average completion time of all processes in an entire task, thus improving performance. They also consider a process's cache miss/cache reference ratio, number of context switches and CPU migrations, and system load. Finally, our algorithms use dynamic process priorities as scheduling criteria. We have tested our algorithms using a real AMD Opteron 6134 multicore chip and measured results directly using the “KillAWatt” meter, which samples power periodically during execution. Our results show not only a power (energy/execution time) savings of 39 watts (21.43 percent) and 38 watts (20.88 percent), but also a significant improvement in the performance, performance per watt, and execution time · watt (energy) for a task consisting of 24 concurrently executing benchmarks, when compared to the default Linux scheduler and CPU frequency scaling governor.",
"title": ""
},
{
"docid": "7f3686b783273c4df7c4fb41fe7ccefd",
"text": "Data from service and manufacturing sectors is increasing sharply and lifts up a growing enthusiasm for the notion of Big Data. This paper investigates representative Big Data applications from typical services like finance & economics, healthcare, Supply Chain Management (SCM), and manufacturing sector. Current technologies from key aspects of storage technology, data processing technology, data visualization technique, Big Data analytics, as well as models and algorithms are reviewed. This paper then provides a discussion from analyzing current movements on the Big Data for SCM in service and manufacturing world-wide including North America, Europe, and Asia Pacific region. Current challenges, opportunities, and future perspectives such as data collection methods, data transmission, data storage, processing technologies for Big Data, Big Data-enabled decision-making models, as well as Big Data interpretation and application are highlighted. Observations and insights from this paper could be referred by academia and practitioners when implementing Big Data analytics in the service and manufacturing sectors. 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ef2cf439b0765c44e9e4db87836401e7",
"text": "Phishing is defined as mimicking a creditable company's website aiming to take private information of a user. In order to eliminate phishing, different solutions proposed. However, only one single magic bullet cannot eliminate this threat completely. Data mining is a promising technique used to detect phishing attacks. In this paper, an intelligent system to detect phishing attacks is presented. We used different data mining techniques to decide categories of websites: legitimate or phishing. Different classifiers were used in order to construct accurate intelligent system for phishing website detection. Classification accuracy, area under receiver operating characteristic (ROC) curves (AUC) and F-measure is used to evaluate the performance of the data mining techniques. Results showed that Random Forest has outperformed best among the classification methods by achieving the highest accuracy 97.36%. Random forest runtimes are quite fast, and it can deal with different websites for phishing detection.",
"title": ""
},
{
"docid": "3c5a5ee0b855625c959593a08d6e1e24",
"text": "We present Scalable Host-tree Embeddings for Efficient Partitioning (Sheep), a distributed graph partitioning algorithm capable of handling graphs that far exceed main memory. Sheep produces high quality edge partitions an order of magnitude faster than both state of the art offline (e.g., METIS) and streaming partitioners (e.g., Fennel). Sheep’s partitions are independent of the input graph distribution, which means that graph elements can be assigned to processing nodes arbitrarily without affecting the partition quality. Sheep transforms the input graph into a strictly smaller elimination tree via a distributed map-reduce operation. By partitioning this tree, Sheep finds an upper-bounded communication volume partitioning of the original graph. We describe the Sheep algorithm and analyze its spacetime requirements, partition quality, and intuitive characteristics and limitations. We compare Sheep to contemporary partitioners and demonstrate that Sheep creates competitive partitions, scales to larger graphs, and has better runtime.",
"title": ""
},
{
"docid": "ceb02ddf8b2085d67ccf27c3c5b57dfd",
"text": "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.",
"title": ""
},
{
"docid": "cbbd8c44de7e060779ed60c6edc31e3c",
"text": "This letter presents a compact broadband microstrip-line-fed sleeve monopole antenna for application in the DTV system. The design of meandering the monopole into a compact structure is applied for size reduction. By properly selecting the length and spacing of the sleeve, the broadband operation for the proposed design can be achieved, and the obtained impedance bandwidth covers the whole DTV (470862 MHz) band. Most importantly, the matching condition over a wide frequency range can be performed well even when a small ground-plane length is used; meanwhile, a small variation in the impedance bandwidth is observed for the ground-plane length varied in a great range.",
"title": ""
},
{
"docid": "b700c177ab4ee014cea9a3a2fd870230",
"text": "Exploiting network data (i.e., graphs) is a rather particular case of data mining. The size and relevance of network domains justifies research on graph mining, but also brings forth severe complications. Computational aspects like scalability and parallelism have to be reevaluated, and well as certain aspects of the data mining process. One of those are the methodologies used to evaluate graph mining methods, particularly when processing large graphs. In this paper we focus on the evaluation of a graph mining task known as Link Prediction. First we explore the available solutions in traditional data mining for that purpose, discussing which methods are most appropriate. Once those are identified, we argue about their capabilities and limitations for producing a faithful and useful evaluation. Finally, we introduce a novel modification to a traditional evaluation methodology with the goal of adapting it to the problem of Link Prediction on large graphs.",
"title": ""
},
{
"docid": "0972f1690f5bba5a8bdec67cd133d690",
"text": "We use a deep learning model trained only on a patient’s blood oxygenation data (measurable with an inexpensive fingertip sensor) to predict impending hypoxemia (low blood oxygen) more accurately than trained anesthesiologists with access to all the data recorded in a modern operating room. We also provide a simple way to visualize the reason why a patient’s risk is low or high by assigning weight to the patient’s past blood oxygen values. This work has the potential to provide cuttingedge clinical decision support in low-resource settings, where rates of surgical complication and death are substantially greater than in high-resource areas.",
"title": ""
},
{
"docid": "2c93fcf96c71c7c0a8dcad453da53f81",
"text": "Production cars are designed to understeer and rarely do they oversteer. If a car could automatically compensate for an understeer/oversteer problem, the driver would enjoy nearly neutral steering under varying operating conditions. Four-wheel steering is a serious effort on the part of automotive design engineers to provide near-neutral steering. Also in situations like low speed cornering, vehicle parking and driving in city conditions with heavy traffic in tight spaces, driving would be very difficult due to vehicle’s larger wheelbase and track width. Hence there is a requirement of a mechanism which result in less turning radius and it can be achieved by implementing four wheel steering mechanism instead of regular two wheel steering. In this project Maruti Suzuki 800 is considered as a benchmark vehicle. The main aim of this project is to turn the rear wheels out of phase to the front wheels. In order to achieve this, a mechanism which consists of two bevel gears and intermediate shaft which transmit 100% torque as well turns rear wheels in out of phase was developed. The mechanism was modelled using CATIA and the motion simulation was done using ADAMS. A physical prototype was realised. The prototype was tested for its cornering ability through constant radius test and was found 50% reduction in turning radius and the vehicle was operated at low speed of 10 kmph.",
"title": ""
},
{
"docid": "4961f878fecbe0153a679210fb986a8a",
"text": "Wikis are collaborative systems in which virtually anyone can edit anything. Although wikis have become highly popular in many domains, their mutable nature often leads them to be distrusted as a reliable source of information. Here we describe a social dynamic analysis tool called WikiDashboard which aims to improve social transparency and accountability on Wikipedia articles. Early reactions from users suggest that the increased transparency afforded by the tool can improve the interpretation, communication, and trustworthiness of Wikipedia articles.",
"title": ""
},
{
"docid": "e104e306d90605a5bc9d853180567917",
"text": "An algorithm is presented for the estimation of the fundamental frequency (F0) of speech or musical sounds. It is based on the well-known autocorrelation method with a number of modifications that combine to prevent errors. The algorithm has several desirable features. Error rates are about three times lower than the best competing methods, as evaluated over a database of speech recorded together with a laryngograph signal. There is no upper limit on the frequency search range, so the algorithm is suited for high-pitched voices and music. The algorithm is relatively simple and may be implemented efficiently and with low latency, and it involves few parameters that must be tuned. It is based on a signal model (periodic signal) that may be extended in several ways to handle various forms of aperiodicity that occur in particular applications. Finally, interesting parallels may be drawn with models of auditory processing.",
"title": ""
},
{
"docid": "a9c00556e3531ba81cc009ae3f5a1816",
"text": "A systematic, tiered approach to assess the safety of engineered nanomaterials (ENMs) in foods is presented. The ENM is first compared to its non-nano form counterpart to determine if ENM-specific assessment is required. Of highest concern from a toxicological perspective are ENMs which have potential for systemic translocation, are insoluble or only partially soluble over time or are particulate and bio-persistent. Where ENM-specific assessment is triggered, Tier 1 screening considers the potential for translocation across biological barriers, cytotoxicity, generation of reactive oxygen species, inflammatory response, genotoxicity and general toxicity. In silico and in vitro studies, together with a sub-acute repeat-dose rodent study, could be considered for this phase. Tier 2 hazard characterisation is based on a sentinel 90-day rodent study with an extended range of endpoints, additional parameters being investigated case-by-case. Physicochemical characterisation should be performed in a range of food and biological matrices. A default assumption of 100% bioavailability of the ENM provides a 'worst case' exposure scenario, which could be refined as additional data become available. The safety testing strategy is considered applicable to variations in ENM size within the nanoscale and to new generations of ENM.",
"title": ""
},
{
"docid": "cf4089c8c3b8408e2d2966e3abd8af09",
"text": "The deployment of wireless sensor networks and mobile ad-hoc networks in applications such as emergency services, warfare and health monitoring poses the threat of various cyber hazards, intrusions and attacks as a consequence of these networks’ openness. Among the most significant research difficulties in such networks safety is intrusion detection, whose target is to distinguish between misuse and abnormal behavior so as to ensure secure, reliable network operations and services. Intrusion detection is best delivered by multi-agent system technologies and advanced computing techniques. To date, diverse soft computing and machine learning techniques in terms of computational intelligence have been utilized to create Intrusion Detection and Prevention Systems (IDPS), yet the literature does not report any state-ofthe-art reviews investigating the performance and consequences of such techniques solving wireless environment intrusion recognition issues as they gain entry into cloud computing. The principal contribution of this paper is a review and categorization of existing IDPS schemes in terms of traditional artificial computational intelligence with a multi-agent support. The significance of the techniques and methodologies and their performance and limitations are additionally analyzed in this study, and the limitations are addressed as challenges to obtain a set of requirements for IDPS in establishing a collaborative-based wireless IDPS (Co-WIDPS) architectural design. It amalgamates a fuzzy reinforcement learning knowledge management by creating a far superior technological platform that is far more accurate in detecting attacks. In conclusion, we elaborate on several key future research topics with the potential to accelerate the progress and deployment of computational intelligence based Co-WIDPSs. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5afb121d5e4a5ab8daa80580c8bd8253",
"text": "In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focused on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers.",
"title": ""
},
{
"docid": "cc93f5a421ad0e5510d027b01582e5ae",
"text": "This paper assesses the impact of financial reforms in Zimbabwe on savings and credit availability to small and medium scale enterprises (SMEs) and the poor. We established that the reforms improved domestic savings mobilization due to high deposit rates, the emergence of new financial institutions and products and the general increase in real incomes after the 1990 economic reforms. The study uncovered that inflation and real income were the major determinants of savings during the sample period. High lending rates and the use of conventional lending methodologies by banks restricted access to credit by the SMEs and the poor. JEL Classification Numbers: E21, O16.",
"title": ""
},
{
"docid": "1b3b2b8872d3b846120502a7a40e03d0",
"text": "A viable fully on-line adaptive brain computer interface (BCI) is introduced. On-line experiments with nine naive and able-bodied subjects were carried out using a continuously adaptive BCI system. The data were analyzed and the viability of the system was studied. The BCI was based on motor imagery, the feature extraction was performed with an adaptive autoregressive model and the classifier used was an adaptive quadratic discriminant analysis. The classifier was on-line updated by an adaptive estimation of the information matrix (ADIM). The system was also able to provide continuous feedback to the subject. The success of the feedback was studied analyzing the error rate and mutual information of each session and this analysis showed a clear improvement of the subject's control of the BCI from session to session.",
"title": ""
},
{
"docid": "57c6d587b602b17a3cbf3b9b3c72c6c9",
"text": "OBJECTIVE\nDevelopment of a rational and enforceable basis for controlling the impact of cannabis use on traffic safety.\n\n\nMETHODS\nAn international working group of experts on issues related to drug use and traffic safety evaluated evidence from experimental and epidemiological research and discussed potential approaches to developing per se limits for cannabis.\n\n\nRESULTS\nIn analogy to alcohol, finite (non-zero) per se limits for delta-9-tetrahydrocannabinol (THC) in blood appear to be the most effective approach to separating drivers who are impaired by cannabis use from those who are no longer under the influence. Limited epidemiological studies indicate that serum concentrations of THC below 10 ng/ml are not associated with an elevated accident risk. A comparison of meta-analyses of experimental studies on the impairment of driving-relevant skills by alcohol or cannabis suggests that a THC concentration in the serum of 7-10 ng/ml is correlated with an impairment comparable to that caused by a blood alcohol concentration (BAC) of 0.05%. Thus, a suitable numerical limit for THC in serum may fall in that range.\n\n\nCONCLUSIONS\nThis analysis offers an empirical basis for a per se limit for THC that allows identification of drivers impaired by cannabis. The limited epidemiological data render this limit preliminary.",
"title": ""
},
{
"docid": "6721d6fb3b2f97062303eb63e6e9de31",
"text": "Business process modeling is a big part in the industry, mainly to document, analyze, and optimize workflows. Currently, the EPC process modeling notation is used very wide, because of the excellent integration in the ARIS Toolset and the long existence of this process language. But as a change of time, BPMN gets popular and the interest in the industry and companies gets growing up. It is standardized, has more expressiveness than EPC and the tool support increase very rapidly. With having tons of existing EPC process models; a big need from the industry is to have an automated transformation from EPC to BPMN. This paper specified a direct approach of a transformation from EPC process model elements to BPMN. Thereby it is tried to map every construct in EPC fully automated to BPMN. But as it is described, not for every process element works this out, so in addition, some extensions and semantics rules are defined.",
"title": ""
},
{
"docid": "51f47a5e873f7b24cd15aff4ceb8d35c",
"text": "We introduce the Adaptive Skills, Adaptive Partitions (ASAP) framework that (1) learns skills (i.e., temporally extended actions or options) as well as (2) where to apply them. We believe that both (1) and (2) are necessary for a truly general skill learning framework, which is a key building block needed to scale up to lifelong learning agents. The ASAP framework can also solve related new tasks simply by adapting where it applies its existing learned skills. We prove that ASAP converges to a local optimum under natural conditions. Finally, our experimental results, which include a RoboCup domain, demonstrate the ability of ASAP to learn where to reuse skills as well as solve multiple tasks with considerably less experience than solving each task from scratch.",
"title": ""
},
{
"docid": "93d40aa40a32edab611b6e8c4a652dbb",
"text": "In this paper, we present a detailed design of dynamic video segmentation network (DVSNet) for fast and efficient semantic video segmentation. DVSNet consists of two convolutional neural networks: a segmentation network and a flow network. The former generates highly accurate semantic segmentations, but is deeper and slower. The latter is much faster than the former, but its output requires further processing to generate less accurate semantic segmentations. We explore the use of a decision network to adaptively assign different frame regions to different networks based on a metric called expected confidence score. Frame regions with a higher expected confidence score traverse the flow network. Frame regions with a lower expected confidence score have to pass through the segmentation network. We have extensively performed experiments on various configurations of DVSNet, and investigated a number of variants for the proposed decision network. The experimental results show that our DVSNet is able to achieve up to 70.4% mIoU at 19.8 fps on the Cityscape dataset. A high speed version of DVSNet is able to deliver an fps of 30.4 with 63.2% mIoU on the same dataset. DVSNet is also able to reduce up to 95% of the computational workloads.",
"title": ""
}
] | scidocsrr |
911bbf99dbbf1992c6ebfc9349b22f70 | Cross-Point Architecture for Spin-Transfer Torque Magnetic Random Access Memory | [
{
"docid": "1898911f1e4f68f02edcc6c80fda47bc",
"text": "This paper reports a 45nm spin-transfer-torque (STT) MRAM embedded into a standard CMOS logic platform that employs low-power (LP) transistors and Cu/low-k BEOL. We believe that this is the first-ever demonstration of embedded STT MRAM that is fully compatible with the 45nm logic technology. To ensure the switching margin, a novel Ȝreverse-connectionȝ 1T/1MT cell has been developed with a cell size of 0.1026 µm2. This cell is utilized to build embedded memory macros up to 32 Mbits in density. Device attributes and design windows have been examined by considering PVT variations to secure operating margins. Promising early reliability data on endurance, read disturb, and thermal stability have been obtained.",
"title": ""
}
] | [
{
"docid": "1ed19900b9cfa74f27fef472acde0e84",
"text": "We describe the capabilities of and algorithms used in a ne w FPGA CAD tool, Versatile Place and Route (VPR). In terms of minimizing routing area, VPR outperforms all published FPGA place and route tools to which we can compare. Although the algorithms used are based on pre viously known approaches, we present se veral enhancements that impro ve run-time and quality . We present placement and routing results on a ne w set of lar ge circuits to allo w future benchmark comparisons of FPGA place and route tools on circuit sizes more typical of today’ s industrial designs. VPR is capable of tar geting a broad range of FPGA architectures, and the source code is publicly a vailable. It and the associated netlist translation / clustering tool VPACK have already been used in a number of research projects w orldwide, and should be useful in man y areas of FPGA architecture research.",
"title": ""
},
{
"docid": "ae6a02ee18e3599c65fb9db22706de44",
"text": "We use a hierarchical Bayesian approach to model user preferences in different contexts or settings. Unlike many previous recommenders, our approach is content-based. We assume that for each context, a user has a different set of preference weights which are linked by a common, “generic context” set of weights. The approach uses Expectation Maximization (EM) to estimate both the generic context weights and the context specific weights. This improves upon many current recommender systems that do not incorporate context into the recommendations they provide. In this paper, we show that by considering contextual information, we can improve our recommendations, demonstrating that it is useful to consider context in giving ratings. Because the approach does not rely on connecting users via collaborative filtering, users are able to interpret contexts in different ways and invent their own",
"title": ""
},
{
"docid": "ecaa792a7b3c9de643b7ed381ffb9d6b",
"text": "In the field of Evolutionary Computation, a common myth that “An Evolutionary Algorithm (EA) will outperform a local search algorithm, given enough runtime and a large-enough population” exists. We believe that this is not necessarily true and challenge the statement with several simple considerations. We then investigate the population size parameter of EAs, as this is the element in the above claim that can be controlled. We conduct a related work study, which substantiates the assumption that there should be an optimal setting for the population size at which a specific EA would perform best on a given problem instance and computational budget. Subsequently, we carry out a large-scale experimental study on 68 instances of the Traveling Salesman Problem with static population sizes that are powers of two between (1+2) and (262 144 + 524 288) EAs as well as with adaptive population sizes. We find that analyzing the performance of the different setups over runtime supports our point of view and the existence of optimal finite population size settings.",
"title": ""
},
{
"docid": "835b74c546ba60dfbb62e804daec8521",
"text": "The goal of Open Information Extraction (OIE) is to extract surface relations and their arguments from naturallanguage text in an unsupervised, domainindependent manner. In this paper, we propose MinIE, an OIE system that aims to provide useful, compact extractions with high precision and recall. MinIE approaches these goals by (1) representing information about polarity, modality, attribution, and quantities with semantic annotations instead of in the actual extraction, and (2) identifying and removing parts that are considered overly specific. We conducted an experimental study with several real-world datasets and found that MinIE achieves competitive or higher precision and recall than most prior systems, while at the same time producing shorter, semantically enriched extractions.",
"title": ""
},
{
"docid": "8cc12987072c983bc45406a033a467aa",
"text": "Vehicular drivers and shift workers in industry are at most risk of handling life critical tasks. The drivers traveling long distances or when they are tired, are at risk of a meeting an accident. The early hours of the morning and the middle of the afternoon are the peak times for fatigue driven accidents. The difficulty in determining the incidence of fatigue-related accidents is due, at least in part, to the difficulty in identifying fatigue as a causal or causative factor in accidents. In this paper we propose an alternative approach for fatigue detection in vehicular drivers using Respiration (RSP) signal to reduce the losses of the lives and vehicular accidents those occur due to cognitive fatigue of the driver. We are using basic K-means algorithm with proposed two modifications as classifier for detection of Respiration signal two state fatigue data recorded from the driver. The K-means classifiers [11] were trained and tested for wavelet feature of Respiration signal. The extracted features were treated as individual decision making parameters. From test results it could be found that some of the wavelet features could fetch 100 % classification accuracy.",
"title": ""
},
{
"docid": "4941250a228f9494480d8dd175490671",
"text": "In machine learning often a tradeoff must be made between accuracy and intelligibility. More accurate models such as boosted trees, random forests, and neural nets usually are not intelligible, but more intelligible models such as logistic regression, naive-Bayes, and single decision trees often have significantly worse accuracy. This tradeoff sometimes limits the accuracy of models that can be applied in mission-critical applications such as healthcare where being able to understand, validate, edit, and trust a learned model is important. We present two case studies where high-performance generalized additive models with pairwise interactions (GA2Ms) are applied to real healthcare problems yielding intelligible models with state-of-the-art accuracy. In the pneumonia risk prediction case study, the intelligible model uncovers surprising patterns in the data that previously had prevented complex learned models from being fielded in this domain, but because it is intelligible and modular allows these patterns to be recognized and removed. In the 30-day hospital readmission case study, we show that the same methods scale to large datasets containing hundreds of thousands of patients and thousands of attributes while remaining intelligible and providing accuracy comparable to the best (unintelligible) machine learning methods.",
"title": ""
},
{
"docid": "59869b070268fd17145e23c7b0bb4b80",
"text": "Friction characteristics between the wafer and the polishing pad play an important role in the chemical-mechanical planarization (CMP) process. In this paper, a wafer/pad friction modeling and monitoring scheme for the linear CMP process is presented. Kinematic analysis of the linear CMP system is investigated and a distributed LuGre dynamic friction model is utilized to capture the friction forces generated by the wafer/pad interactions. The frictional torques of both the polisher spindle and the roller systems are used to monitor in situ the changes of the friction coefficient during a CMP process. Effects of pad conditioning and patterned wafer topography on the wafer/pad friction are also analyzed and discussed. The proposed friction modeling and monitoring scheme can be further used for real-time CMP monitoring and process fault diagnosis.",
"title": ""
},
{
"docid": "eb81611ba60d5c07e0306dc4e93deee4",
"text": "Research in child fatalities because of abuse and neglect has continued to increase, yet the mechanisms of the death incident and risk factors for these deaths remain unclear. The purpose of this study was to systematically examine the types of neglect that resulted in children's deaths as determined by child welfare and a child death review board. This case review study reviewed 22 years of data (n=372) of child fatalities attributed solely to neglect taken from a larger sample (N=754) of abuse and neglect death cases spanning the years 1987-2008. The file information reviewed was provided by the Oklahoma Child Death Review Board (CDRB) and the Oklahoma Department of Human Services (DHS) Division of Children and Family Services. Variables of interest were child age, ethnicity, and birth order; parental age and ethnicity; cause of death as determined by child protective services (CPS); and involvement with DHS at the time of the fatal event. Three categories of fatal neglect--supervisory neglect, deprivation of needs, and medical neglect--were identified and analyzed. Results found an overwhelming presence of supervisory neglect in child neglect fatalities and indicated no significant differences between children living in rural and urban settings. Young children and male children comprised the majority of fatalities, and African American and Native American children were over-represented in the sample when compared to the state population. This study underscores the critical need for prevention and educational programming related to appropriate adult supervision and adequate safety measures to prevent a child's death because of neglect.",
"title": ""
},
{
"docid": "d44080fc547355ff8389f9da53d03c45",
"text": "High profile attacks such as Stuxnet and the cyber attack on the Ukrainian power grid have increased research in Industrial Control System (ICS) and Supervisory Control and Data Acquisition (SCADA) network security. However, due to the sensitive nature of these networks, there is little publicly available data for researchers to evaluate the effectiveness of the proposed solution. The lack of representative data sets makes evaluation and independent validation of emerging security solutions difficult and slows down progress towards effective and reusable solutions. This paper presents our work to generate representative labeled data sets for SCADA networks that security researcher can use freely. The data sets include packet captures including both malicious and non-malicious Modbus traffic and accompanying CSV files that contain labels to provide the ground truth for supervised machine learning. To provide representative data at the network level, the data sets were generated in a SCADA sandbox, where electrical network simulators were used to introduce realism in the physical component. Also, real attack tools, some of them custom built for Modbus networks, were used to generate the malicious traffic. Even though they do not fully replicate a production network, these data sets represent a good baseline to validate detection tools for SCADA systems.",
"title": ""
},
{
"docid": "28552dfe20642145afa9f9fa00218e8e",
"text": "Augmented Reality can be of immense benefit to the construction industry. The oft-cited benefits of AR in construction industry include real time visualization of projects, project monitoring by overlaying virtual models on actual built structures and onsite information retrieval. But this technology is restricted by the high cost and limited portability of the devices. Further, problems with real time and accurate tracking in a construction environment hinder its broader application. To enable utilization of augmented reality on a construction site, a low cost augmented reality framework based on the Google Cardboard visor is proposed. The current applications available for Google cardboard has several limitations in delivering an AR experience relevant to construction requirements. To overcome these limitations Unity game engine, with the help of Vuforia & Cardboard SDK, is used to develop an application environment which can be used for location and orientation specific visualization and planning of work at construction workface. The real world image is captured through the smart-phone camera input and blended with the stereo input of the 3D models to enable a full immersion experience. The application is currently limited to marker based tracking where the 3D models are triggered onto the user’s view upon scanning an image which is registered with a corresponding 3D model preloaded into the application. A gaze input user interface is proposed which enables the user to interact with the augmented models. Finally usage of AR app while traversing the construction site is illustrated.",
"title": ""
},
{
"docid": "6bae81e837f4a498ae4c814608aac313",
"text": "person’s ability to focus on his or her primary task. Distractions occur especially in mobile environments, because walking, driving, or other real-world interactions often preoccupy the user. A pervasivecomputing environment that minimizes distraction must be context aware, and a pervasive-computing system must know the user’s state to accommodate his or her needs. Context-aware applications provide at least two fundamental services: spatial awareness and temporal awareness. Spatially aware applications consider a user’s relative and absolute position and orientation. Temporally aware applications consider the time schedules of public and private events. With an interdisciplinary class of Carnegie Mellon University (CMU) students, we developed and implemented a context-aware, pervasive-computing environment that minimizes distraction and facilitates collaborative design.",
"title": ""
},
{
"docid": "0f9a33f8ef5c9c415cf47814c9ef896d",
"text": "BACKGROUND\nNeuropathic pain is one of the most devastating kinds of chronic pain. Neuroinflammation has been shown to contribute to the development of neuropathic pain. We have previously demonstrated that lumbar spinal cord-infiltrating CD4+ T lymphocytes contribute to the maintenance of mechanical hypersensitivity in spinal nerve L5 transection (L5Tx), a murine model of neuropathic pain. Here, we further examined the phenotype of the CD4+ T lymphocytes involved in the maintenance of neuropathic pain-like behavior via intracellular flow cytometric analysis and explored potential interactions between infiltrating CD4+ T lymphocytes and spinal cord glial cells.\n\n\nRESULTS\nWe consistently observed significantly higher numbers of T-Bet+, IFN-γ+, TNF-α+, and GM-CSF+, but not GATA3+ or IL-4+, lumbar spinal cord-infiltrating CD4+ T lymphocytes in the L5Tx group compared to the sham group at day 7 post-L5Tx. This suggests that the infiltrating CD4+ T lymphocytes expressed a pro-inflammatory type 1 phenotype (Th1). Despite the observation of CD4+ CD40 ligand (CD154)+ T lymphocytes in the lumbar spinal cord post-L5Tx, CD154 knockout (KO) mice did not display significant changes in L5Tx-induced mechanical hypersensitivity, indicating that T lymphocyte-microglial interaction through the CD154-CD40 pathway is not necessary for L5Tx-induced hypersensitivity. In addition, spinal cord astrocytic activation, represented by glial fibillary acidic protein (GFAP) expression, was significantly lower in CD4 KO mice compared to wild type (WT) mice at day 14 post-L5Tx, suggesting the involvement of astrocytes in the pronociceptive effects mediated by infiltrating CD4+ T lymphocytes.\n\n\nCONCLUSIONS\nIn all, these data indicate that the maintenance of L5Tx-induced neuropathic pain is mostly mediated by Th1 cells in a CD154-independent manner via a mechanism that could involve multiple Th1 cytokines and astrocytic activation.",
"title": ""
},
{
"docid": "a86b53d284ad8244d9917f05eeef5f15",
"text": "Social networks consist of various communities that host members sharing common characteristics. Often some members of one community are also members of other communities. Such shared membership of different communities leads to overlapping communities. Detecting such overlapping communities is a challenging and computationally intensive problem. In this paper, we investigate the usability of high performance computing in the area of social networks and community detection. We present highly scalable variants of a community detection algorithm called Speaker-listener Label Propagation Algorithm (SLPA). We show that despite of irregular data dependencies in the computation, parallel computing paradigms can significantly speed up the detection of overlapping communities of social networks which is computationally expensive. We show by experiments, how various parallel computing architectures can be utilized to analyze large social network data on both shared memory machines and distributed memory machines, such as IBM Blue Gene.",
"title": ""
},
{
"docid": "2c8061cf1c9b6e157bdebf9126b2f15c",
"text": "Recently, the concept of olfaction-enhanced multimedia applications has gained traction as a step toward further enhancing user quality of experience. The next generation of rich media services will be immersive and multisensory, with olfaction playing a key role. This survey reviews current olfactory-related research from a number of perspectives. It introduces and explains relevant olfactory psychophysical terminology, knowledge of which is necessary for working with olfaction as a media component. In addition, it reviews and highlights the use of, and potential for, olfaction across a number of application domains, namely health, tourism, education, and training. A taxonomy of research and development of olfactory displays is provided in terms of display type, scent generation mechanism, application area, and strengths/weaknesses. State of the art research works involving olfaction are discussed and associated research challenges are proposed.",
"title": ""
},
{
"docid": "e81b4c01c2512f2052354402cd09522b",
"text": "...................................................................................................................... iii ACKNOWLEDGEMENTS .................................................................................................v CHAPTER",
"title": ""
},
{
"docid": "b039138e9c0ef8456084891c45d7b36d",
"text": "Over the last few years or so, the use of artificial neural networks (ANNs) has increased in many areas of engineering. In particular, ANNs have been applied to many geotechnical engineering problems and have demonstrated some degree of success. A review of the literature reveals that ANNs have been used successfully in pile capacity prediction, modelling soil behaviour, site characterisation, earth retaining structures, settlement of structures, slope stability, design of tunnels and underground openings, liquefaction, soil permeability and hydraulic conductivity, soil compaction, soil swelling and classification of soils. The objective of this paper is to provide a general view of some ANN applications for solving some types of geotechnical engineering problems. It is not intended to describe the ANNs modelling issues in geotechnical engineering. The paper also does not intend to cover every single application or scientific paper that found in the literature. For brevity, some works are selected to be described in some detail, while others are acknowledged for reference purposes. The paper then discusses the strengths and limitations of ANNs compared with the other modelling approaches.",
"title": ""
},
{
"docid": "884f575062bb9e9702d3ec44d620e6cc",
"text": "A key issue in the direct torque control of permanent magnet brushless DC motors is the estimation of the instantaneous electromagnetic torque, while sensorless control is often advantageous. A sliding mode observer is employed to estimate the non-sinusoidal back-emf waveform, and a simplified extended Kalman filter is used to estimate the rotor speed. Both are combined to calculate the instantaneous electromagnetic torque, the effectiveness of this approach being validated by simulations and measurements.",
"title": ""
},
{
"docid": "bb89461e134951301bb41339f83d29d0",
"text": "Gravity is the only component of Earth environment that remained constant throughout the entire process of biological evolution. However, it is still unclear how gravity affects plant growth and development. In this study, an in vitro cell culture of Arabidopsis thaliana was exposed to different altered gravity conditions, namely simulated reduced gravity (simulated microgravity, simulated Mars gravity) and hypergravity (2g), to study changes in cell proliferation, cell growth, and epigenetics. The effects after 3, 14, and 24-hours of exposure were evaluated. The most relevant alterations were found in the 24-hour treatment, being more significant for simulated reduced gravity than hypergravity. Cell proliferation and growth were uncoupled under simulated reduced gravity, similarly, as found in meristematic cells from seedlings grown in real or simulated microgravity. The distribution of cell cycle phases was changed, as well as the levels and gene transcription of the tested cell cycle regulators. Ribosome biogenesis was decreased, according to levels and gene transcription of nucleolar proteins and the number of inactive nucleoli. Furthermore, we found alterations in the epigenetic modifications of chromatin. These results show that altered gravity effects include a serious disturbance of cell proliferation and growth, which are cellular functions essential for normal plant development.",
"title": ""
},
{
"docid": "fd2da8187978c334d5fe265b4df14487",
"text": "Monopulse is a classical radar technique [1] of precise direction finding of a source or target. The concept can be used both in radar applications as well as in modern communication techniques. The information contained in antenna sidelobes normally disturbs the determination of DOA in the case of a classical monopulse system. The suitable combination of amplitudeand phase-monopulse algorithm leads to the novel complex monopulse algorithm (CMP), which also can utilise information from the sidelobes by using the phase shift of the signals in the sidelobes in relation to the mainlobes.",
"title": ""
},
{
"docid": "fd1b82c69a3182ab7f8c0a7cf2030b6f",
"text": "Lenz-Majewski hyperostotic dwarfism (LMHD) is an ultra-rare Mendelian craniotubular dysostosis that causes skeletal dysmorphism and widely distributed osteosclerosis. Biochemical and histopathological characterization of the bone disease is incomplete and nonexistent, respectively. In 2014, a publication concerning five unrelated patients with LMHD disclosed that all carried one of three heterozygous missense mutations in PTDSS1 encoding phosphatidylserine synthase 1 (PSS1). PSS1 promotes the biosynthesis of phosphatidylserine (PTDS), which is a functional constituent of lipid bilayers. In vitro, these PTDSS1 mutations were gain-of-function and increased PTDS production. Notably, PTDS binds calcium within matrix vesicles to engender hydroxyapatite crystal formation, and may enhance mesenchymal stem cell differentiation leading to osteogenesis. We report an infant girl with LMHD and a novel heterozygous missense mutation (c.829T>C, p.Trp277Arg) within PTDSS1. Bone turnover markers suggested that her osteosclerosis resulted from accelerated formation with an unremarkable rate of resorption. Urinary amino acid quantitation revealed a greater than sixfold elevation of phosphoserine. Our findings affirm that PTDSS1 defects cause LMHD and support enhanced biosynthesis of PTDS in the pathogenesis of LMHD.",
"title": ""
}
] | scidocsrr |
72be7c7208ce5eadbbf526b9ebd309ce | 28 GHz channel modeling using 3D ray-tracing in urban environments | [
{
"docid": "8d05e13db12203e276a6b9f32ac9f3ef",
"text": "This deliverable describes WINNER II channel models for link and system level simulations. Both generic and clustered delay line models are defined for selected propagation scenarios. Disclaimer: The channel models described in this deliverable are based on a literature survey and measurements performed during this project. The authors are not responsible for any loss, damage or expenses caused by potential errors or inaccuracies in the models or in the deliverable. Executive Summary This deliverable presents WINNER II channel models for link level and system level simulations of local area, metropolitan area, and wide area wireless communication systems. The models have been evolved from the WINNER I channel models described in WINNER I deliverable D5.4 and WINNER II interim channel models described in deliverable D1.1.1. The covered propagation scenarios are indoor office, large indoor hall, indoor-to-outdoor, urban micro-cell, bad urban micro-cell, outdoor-to-indoor, stationary feeder, suburban macro-cell, urban macro-cell, rural macro-cell, and rural moving networks. The generic WINNER II channel model follows a geometry-based stochastic channel modelling approach, which allows creating of an arbitrary double directional radio channel model. The channel models are antenna independent, i.e., different antenna configurations and different element patterns can be inserted. The channel parameters are determined stochastically, based on statistical distributions extracted from channel measurement. The distributions are defined for, e.g., delay spread, delay values, angle spread, shadow fading, and cross-polarisation ratio. For each channel snapshot the channel parameters are calculated from the distributions. Channel realisations are generated by summing contributions of rays with specific channel parameters like delay, power, angle-of-arrival and angle-of-departure. Different scenarios are modelled by using the same approach, but different parameters. The parameter tables for each scenario are included in this deliverable. Clustered delay line (CDL) models with fixed large-scale and small-scale parameters have also been created for calibration and comparison of different simulations. The parameters of the CDL models are based on expectation values of the generic models. Several measurement campaigns provide the background for the parameterisation of the propagation scenarios for both line-of-sight (LOS) and non-LOS (NLOS) conditions. These measurements were conducted by seven partners with different devices. The developed models are based on both literature and extensive measurement campaigns that have been carried out within the WINNER I and WINNER II projects. The novel features of the WINNER models are its parameterisation, using of the same modelling approach for both indoor and outdoor environments, new scenarios like outdoor-to-indoor and indoor-to-outdoor, …",
"title": ""
},
{
"docid": "ed676ff14af6baf9bde3bdb314628222",
"text": "The ever growing traffic explosion in mobile communications has recently drawn increased attention to the large amount of underutilized spectrum in the millimeter-wave frequency bands as a potentially viable solution for achieving tens to hundreds of times more capacity compared to current 4G cellular networks. Historically, mmWave bands were ruled out for cellular usage mainly due to concerns regarding short-range and non-line-of-sight coverage issues. In this article, we present recent results from channel measurement campaigns and the development of advanced algorithms and a prototype, which clearly demonstrate that the mmWave band may indeed be a worthy candidate for next generation (5G) cellular systems. The results of channel measurements carried out in both the United States and Korea are summarized along with the actual free space propagation measurements in an anechoic chamber. Then a novel hybrid beamforming scheme and its link- and system-level simulation results are presented. Finally, recent results from our mmWave prototyping efforts along with indoor and outdoor test results are described to assert the feasibility of mmWave bands for cellular usage.",
"title": ""
}
] | [
{
"docid": "a9ea1f1f94a26181addac948837c3030",
"text": "Crime tends to clust er geographi cally. This has led to the wide usage of hotspot analysis to identify and visualize crime. Accurately identified crime hotspots can greatly benefit the public by creating accurate threat visualizations, more efficiently allocating police resources, and predicting crime. Yet existing mapping methods usually identify hotspots without considering the underlying correlates of crime. In this study, we introduce a spatial data mining framework to study crime hotspots through their related variables. We use Geospatial Discriminative Patterns (GDPatterns) to capture the significant difference between two classes (hotspots and normal areas) in a geo-spatial dataset. Utilizing GDPatterns, we develop a novel model—Hotspot Optimization Tool (HOT)—to improve the identification of crime hotspots. Finally, based on a similarity measure, we group GDPattern clusters and visualize the distribution and characteristics of crime related variables. We evaluate our approach using a real world dataset collected from a northeast city in the United States. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cb086fa252f4db172b9c7ac7e1081955",
"text": "Drivable free space information is vital for autonomous vehicles that have to plan evasive maneu vers in realtime. In this paper, we present a new efficient met hod for environmental free space detection with laser scann er based on 2D occupancy grid maps (OGM) to be used for Advance d Driving Assistance Systems (ADAS) and Collision Avo idance Systems (CAS). Firstly, we introduce an enhanced in verse sensor model tailored for high-resolution laser scanners f or building OGM. It compensates the unreflected beams and deals with the ray casting to grid cells accuracy and computationa l effort problems. Secondly, we introduce the ‘vehicle on a circle for grid maps’ map alignment algorithm that allows building more accurate local maps by avoiding the computationally expensive inaccurate operations of image sub-pixel shifting a nd rotation. The resulted grid map is more convenient for ADAS f eatures than existing methods, as it allows using less memo ry sizes, and hence, results into a better real-time performance. Thirdly, we present an algorithm to detect what we call the ‘in-sight edges’. These edges guarantee modeling the free space area with a single polygon of a fixed number of vertices regardless th e driving situation and map complexity. The results from real world experiments show the effectiveness of our approach. Keywords— Occupancy Grid Map; Static Free Space Detection; Advanced Driving Assistance Systems; las er canner; autonomous driving",
"title": ""
},
{
"docid": "72be67603d8548e9c161312b5d60c889",
"text": "RUNX1 is a member of the core-binding factor family of transcription factors and is indispensable for the establishment of definitive hematopoiesis in vertebrates. RUNX1 is one of the most frequently mutated genes in a variety of hematological malignancies. Germ line mutations in RUNX1 cause familial platelet disorder with associated myeloid malignancies. Somatic mutations and chromosomal rearrangements involving RUNX1 are frequently observed in myelodysplastic syndrome and leukemias of myeloid and lymphoid lineages, that is, acute myeloid leukemia, acute lymphoblastic leukemia, and chronic myelomonocytic leukemia. More recent studies suggest that the wild-type RUNX1 is required for growth and survival of certain types of leukemia cells. The purpose of this review is to discuss the current status of our understanding about the role of RUNX1 in hematological malignancies.",
"title": ""
},
{
"docid": "cb693221e954efcc593b46553d7bea6f",
"text": "The increased accessibility of digitally sourced data and advance technology to analyse it drives many industries to digital change. Many global businesses are talking about the potential of big data and they believe that analysing big data sets can help businesses derive competitive insight and shape organisations’ marketing strategy decisions. Potential impact of digital technology varies widely by industry. Sectors such as financial services, insurances and mobile telecommunications which are offering virtual rather than physical products are more likely highly susceptible to digital transformation. Howeverthe interaction between digital technology and organisations is complex and there are many barriers for to effective digital change which are presented by big data. Changes brought by technology challenges both researchers and practitioners. Various global business and digital tends have highlights the emergent need for collaboration between academia and market practitioners. There are “theories-in – use” which are academically rigorous but still there is gap between implementation of theory in practice. In this paper we identify theoretical dilemmas of the digital revolution and importance of challenges within practice. Preliminary results show that those industries that tried to narrow the gap and put necessary mechanisms in place to make use of big data for marketing are upfront on the market. INTRODUCTION Advances in digital technology has made a significant impact on marketing theory and practice. Technology expands the opportunity to capture better quality customer data, increase focus on customer relationship, rise of customer insight and Customer Relationship Management (CRM). Availability of big data made traditional marketing tools to work more powerful and innovative way. In current digital age of marketing some predictions of effects of the digital changes have come to function but still there is no definite answer to what works and what doesn’t in terms of implementing the changes in an organisation context. The choice of this specific topic is motivated by the need for a better understanding for impact of digital on marketing fild.This paper will discusses the potential positive impact of the big data on digital marketing. It also present the evidence of positive views in academia and highlight the gap between academia and practices. The main focus is on understanding the gap and providing recommendation for fillingit in. The aim of this paper is to identify theoretical dilemmas of the digital revolution and importance of challenges within practice. Preliminary results presented here show that those industries that tried to narrow the gap and put necessary mechanisms in place to make use of big data for marketing are upfront on the market. In our discussion we shall identify these industries and present evaluations of which industry sectors would need to be looking at understanding of impact that big data may have on their practices and businesses. Digital Marketing and Big data In early 90’s when views about digital changes has started Parsons at el (1998) believed that to achieve success in digital marketing consumer marketers should create a new model with five essential elements in new media environment. Figure below shows five success factors and issues that marketers should address around it. Figure 1. 
Digital marketing Framework and levers Parson et al (1998) International Conference on Communication, Media, Technology and Design 24 26 April 2014, Istanbul – Turkey 147 Today in digital age of marketing some predictions of effects of this changes have come to function but still there is no define answers on what works and what doesn’t in terms of implement it in organisation context.S. Dibb (2012). There are deferent explanations, arguments and views about impact of digital on marketing strategy in the literature. At first, it is important to define what is meant by digital marketing, what are the challenges brought by it and then understand how it is adopted. Simply, Digital Marketing (2012) can be defined as “a sub branch of traditional Marketing using modern digital channels for the placement of products such as downloadable music, and primarily for communicating with stakeholders e.g. customers and investors about brand, products and business progress”. According to (Smith, 2007) the digital marketing refers “The use of digital technologies to create an integrated, targeted and measurable communication which helps to acquire and retain customers while building deeper relationships with them”. There are a number of accepted theoretical frameworks however as Parsons et al (1998) suggested potentialities offered by digital marketing need to consider carefully where and how to build in each organisation by the senior managers. The most recent developments in this area has been triggered by growing amount of digital data now known as Big Data. Tech American Foundation (2004) defines Big Data as a “term that describes large volumes of high velocity, complex and variable data that require advanced techniques and technologies to enable the capture storage, distribution, management and analysis of information”. D. Krajicek (2013) argues that the big challenge of Big Data is the ability to focus on what is meaningful not on what is possible, with so much information at their fingerprint marketers and their research partners can and often do fall into “more is better” fallacy. Knowing something and knowing it quickly is not enough. Therefore to have valuable Big data it needs to be sorted by professional people who have skills to understand dynamics of market and can identify what is relevant and meaningful. G. Day (2011). Data should be used for achieve competitive advantage by creating effective relationship with the target segments. According to K. Kendall (2014) with de right capabilities, you can take a whole range of new data sources such as web browsing, social data and geotracking data and develop much more complete profile about your customers and then with this information you can segment better. Successful Big Data initiatives should start with a specific and clearly defined business requirement then leaders of these initiatives need to assess the technical requirement and identify gap in their capabilities and then plan the investment to close those gaps (Big Data Analytics 2014) The impact and current challenges Bileviciene (2012) suggest that well conducted market research is the basis for successful marketing and well conducted study is the basis of successful market segmentation. Generally marketing management is broken down into a series of steps, which include market research, segmentation of markets and positioning the company’s offering in such a way as to appeal to the targeted segments. 
(OU Business school, 2007) Market segmentation refers to the process of defining and subdividing a large homogenous market into clearly identifiable segments having similar needs, wants, or demand characteristics. Its objective is to design a marketing mix that precisely matches the expectations of customers in the targeted segment (Business dictation, 2013). The goal for segmentation is to break down the target market into different consumers groups. According to Kotler and Armstrong (2011) traditionally customers were classified based on four types of segmentation variables, geographic, demographic, psychographic and behavioural. There are many focuses, beliefs and arguments in the field of market segmentation. Many researchers believe that the traditional variables of demographic and geographic segments are out-dated and the theory regarding segmentation has become too narrow (Quinn and Dibb, 2010). According to Lin (2002), these variables should be a part of a new, expanded view of the market segmentation theory that focuses more on customer’s personalities and values. Dibb and Simkin (2009) argue that priorities of market segmentation research aim to exploring the applicability of new segmentation bases across different products and contexts, developing more flexible data analysis techniques, creating new research designs and data collection approaches, however practical questions about implementation and integration have received less attention. According to S. Dibb (2012) in academic perspective segmentation still has strategic and tactical role as shown on figure below. But in practice as Dibb argues “some things have not changed” and: Segmentation’s strategic role still matters Implementation is as much of a pain as always Even the smartest segments need embedding International Conference on Communication, Media, Technology and Design 24 26 April 2014, Istanbul – Turkey 148 Figure 2: role of segmentation S. Dibb (2012) Dilemmas with the Implementation of digital change arise for various reasons. Some academics believed that greater access to data would reduce the need for more traditional segmentation but research done on the field shows that traditional segmentation works equal to CRM ( W. Boulding et al 2005). Even thought the marketing literature offers insights for improving the effectiveness of digital changes in marketing filed there is limitation on how an organisation adapts its customer information processes once the technology is adjusted into the organisation. (J. Peltier et al 2012) suggest that there is an urgent need for data management studies that captures insights from other disciplines including organisational behaviour, change management and technology implementation. Reibstein et al (2009) also highlights the emergent need for collaboration between academia and market practitioners. They point out that there is a “digital skill gap” within the marketing filed. Authors argue that there are “theories-in – use” which are academically rigorous but still there is gap between implementation of theory in practice. Changes brought by technology and availability of di",
"title": ""
},
{
"docid": "782e5dad69e951d854e10a1922b1b270",
"text": "Many experimental studies indicate that people are motivated by reciprocity. Rabin [Amer. Rev. 83 (1993) 1281] develops techniques for incorporating such concerns into game theo economics. His theory is developed for normal form games, and he abstracts from information the sequential structure of a strategic situation. We develop a theory of reciprocity for ext games in which the sequential structure of a strategic situation is made explicit, and propose solution concept—sequential reciprocity equilibrium—for which we prove an equilibrium exis result. The model is applied in several examples, and it is shown that it captures very well the in meaning of reciprocity as well as certain qualitative features of experimental evidence. 2003 Elsevier Inc. All rights reserved. JEL classification: A13; C70; D63",
"title": ""
},
{
"docid": "13ec102cd2f9f80fbb827cd702a57a8b",
"text": "This paper presents a mutual capacitive touch screen panel (TSP) readout IC (ROIC) with a differential continuousmode parallel operation architecture (DCPA). The proposed architecture achieves a high product of signal-to-noise ratio (SNR) and frame rate, which is a requirement of ROIC for large-sized TSP. DCPA is accomplished by using the proposed differential sensing method with a parallel architecture in a continuousmode. This architecture is implemented using a continuous-type transmitter for parallel signaling and a differential-architecture receiver. A continuous-type differential charge amplifier removes the common-mode noise component, and reduces the self-noise by the band-pass filtering effect of the continuous-mode charge amplifier. In addition, the differential parallel architecture cancels the timing skew problem caused by the continuous-mode parallel operation and effectively enhances the power spectrum density of the signal. The proposed ROIC was fabricated using a 0.18-μm CMOS process and occupied an active area of 1.25 mm2. The proposed system achieved a 72 dB SNR and 240 Hz frame rate with a 32 channel TX by 10 channel RX mutual capacitive TSP. Moreover, the proposed differential-parallel architecture demonstrated higher immunity to lamp noise and display noise. The proposed system consumed 42.5 mW with a 3.3-V supply.",
"title": ""
},
{
"docid": "714641a148e9a5f02bb13d5485203d70",
"text": "The aim of this paper is to present a review of recently used current control techniques for three-phase voltagesource pulsewidth modulated converters. Various techniques, different in concept, have been described in two main groups: linear and nonlinear. The first includes proportional integral stationary and synchronous) and state feedback controllers, and predictive techniques with constant switching frequency. The second comprises bang-bang (hysteresis, delta modulation) controllers and predictive controllers with on-line optimization. New trends in the current control—neural networks and fuzzy-logicbased controllers—are discussed, as well. Selected oscillograms accompany the presentation in order to illustrate properties of the described controller groups.",
"title": ""
},
{
"docid": "b2fb874fa2dadb8d3b2a23b111a85660",
"text": "The aim of the present research is to study the rel ationship between “internet addiction” and “meta-co gnitive skills” with “academic achievement” in students of Islamic Azad University, Hamedan branch. This is de criptive – correlational method is used. To measure meta-cogni tive skills and internet addiction of students Well s questionnaire and Young questionnaire are used resp ectively. The population of the study is students o f Islamic Azad University of Hamedan. Using proportional stra tified random sampling the sample size was 375 stud ents. The results of the study showed that there is no signif icant relationship between two variables of “meta-c ognition” and “Internet addiction”(P >0.184).However, there is a significant relationship at 5% level between the tw o variables \"meta-cognition\" and \"academic achievement\" (P<0.00 2). Also, a significant inverse relationship was ob served between the average of two variables of \"Internet a ddiction\" and \"academic achievement\" at 5% level (P <0.031). There is a significant difference in terms of metacognition among the groups of different fields of s tudies. Furthermore, there is a significant difference in t erms of internet addiction scores among students be longing to different field of studies. In explaining the acade mic achievement variable variance of “meta-cognitio ” and “Internet addiction” using combined regression, it was observed that the above mentioned variables exp lain 16% of variable variance of academic achievement simultane ously.",
"title": ""
},
{
"docid": "42e2aec24a5ab097b5fff3ec2fe0385d",
"text": "Online freelancing marketplaces have grown quickly in recent years. In theory, these sites offer workers the ability to earn money without the obligations and potential social biases associated with traditional employment frameworks. In this paper, we study whether two prominent online freelance marketplaces - TaskRabbit and Fiverr - are impacted by racial and gender bias. From these two platforms, we collect 13,500 worker profiles and gather information about workers' gender, race, customer reviews, ratings, and positions in search rankings. In both marketplaces, we find evidence of bias: we find that gender and race are significantly correlated with worker evaluations, which could harm the employment opportunities afforded to the workers. We hope that our study fuels more research on the presence and implications of discrimination in online environments.",
"title": ""
},
{
"docid": "984dba43888e7a3572d16760eba6e9a5",
"text": "This study developed an integrated model to explore the antecedents and consequences of online word-of-mouth in the context of music-related communication. Based on survey data from college students, online word-of-mouth was measured with two components: online opinion leadership and online opinion seeking. The results identified innovativeness, Internet usage, and Internet social connection as significant predictors of online word-of-mouth, and online forwarding and online chatting as behavioral consequences of online word-of-mouth. Contrary to the original hypothesis, music involvement was found not to be significantly related to online word-of-mouth. Theoretical implications of the findings and future research directions are discussed.",
"title": ""
},
{
"docid": "e66f94aeea80b7efb6a35abd9a764aea",
"text": "A non-linear poroelastic finite element model of the lumbar spine was developed to investigate spinal response during daily dynamic physiological activities. Swelling was simulated by imposing a boundary pore pressure of 0.25 MPa at all external surfaces. Partial saturation of the disc was introduced to circumvent the negative pressures otherwise computed upon unloading. The loading conditions represented a pre-conditioning full day followed by another day of loading: 8h rest under a constant compressive load of 350 N, followed by 16 h loading phase under constant or cyclic compressive load varying in between 1000 and 1600 N. In addition, the effect of one or two short resting periods in the latter loading phase was studied. The model yielded fairly good agreement with in-vivo and in-vitro measurements. Taking the partial saturation of the disc into account, no negative pore pressures were generated during unloading and recovery phase. Recovery phase was faster than the loading period with equilibrium reached in only approximately 3h. With time and during the day, the axial displacement, fluid loss, axial stress and disc radial strain increased whereas the pore pressure and disc collagen fiber strains decreased. The fluid pressurization and collagen fiber stiffening were noticeable early in the morning, which gave way to greater compression stresses and radial strains in the annulus bulk as time went by. The rest periods dampened foregoing differences between the early morning and late in the afternoon periods. The forgoing diurnal variations have profound effects on lumbar spine biomechanics and risk of injury.",
"title": ""
},
{
"docid": "4b2d4ac1be5eeec4a7e370dfa768a5af",
"text": "A new technology evaluation of fingerprint verification algorithms has been organized following the approach of the previous FVC2000 and FVC2002 evaluations, with the aim of tracking the quickly evolving state-ofthe-art of fingerprint recognition systems. Three sensors have been used for data collection, including a solid state sweeping sensor, and two optical sensors of different characteristics. The competition included a new category dedicated to “ light” systems, characterized by limited computational and storage resources. This paper summarizes the main activities of the FVC2004 organization and provides a first overview of the evaluation. Results will be further elaborated and officially presented at the International Conference on Biometric Authentication (Hong Kong) on July 2004.",
"title": ""
},
{
"docid": "f282a0e666a2b2f3f323870fc07217bd",
"text": "The cultivation of pepper has great importance in all regions of Brazil, due to its characteristics of profi tability, especially when the producer and processing industry add value to the product, or its social importance because it employs large numbers of skilled labor. Peppers require monthly temperatures ranging between 21 and 30 °C, with an average of 18 °C. At low temperatures, there is a decrease in germination, wilting of young parts, and slow growth. Plants require adequate level of nitrogen, favoring plants and fruit growth. Most the cultivars require large spacing for adequate growth due to the canopy of the plants. Proper insect, disease, and weed control prolong the harvest of fruits for longer periods, reducing losses. The crop cycle and harvest period are directly affected by weather conditions, incidence of pests and diseases, and cultural practices including adequate fertilization, irrigation, and adoption of phytosanitary control measures. In general for most cultivars, the fi rst harvest starts 90 days after sowing, which can be prolonged for a couple of months depending on the plant physiological condition.",
"title": ""
},
{
"docid": "b3c81ac4411c2461dcec7be210ce809c",
"text": "The rapid proliferation of the Internet and the cost-effective growth of its key enabling technologies are revolutionizing information technology and creating unprecedented opportunities for developing largescale distributed applications. At the same time, there is a growing concern over the security of Web-based applications, which are rapidly being deployed over the Internet [4]. For example, e-commerce—the leading Web-based application—is projected to have a market exceeding $1 trillion over the next several years. However, this application has already become a security nightmare for both customers and business enterprises as indicated by the recent episodes involving unauthorized access to credit card information. Other leading Web-based applications with considerable information security and privacy issues include telemedicine-based health-care services and online services or businesses involving both public and private sectors. Many of these applications are supported by workflow management systems (WFMSs) [1]. A large number of public and private enterprises are in the forefront of adopting Internetbased WFMSs and finding ways to improve their services and decision-making processes, hence we are faced with the daunting challenge of ensuring the security and privacy of information in such Web-based applications [4]. Typically, a Web-based application can be represented as a three-tier architecture, depicted in the figure, which includes a Web client, network servers, and a back-end information system supported by a suite of databases. For transaction-oriented applications, such as e-commerce, middleware is usually provided between the network servers and back-end systems to ensure proper interoperability. Considerable security challenges and vulnerabilities exist within each component of this architecture. Existing public-key infrastructures (PKIs) provide encryption mechanisms for ensuring information confidentiality, as well as digital signature techniques for authentication, data integrity and non-repudiation [11]. As no access authorization services are provided in this approach, it has a rather limited scope for Web-based applications. The strong need for information security on the Internet is attributable to several factors, including the massive interconnection of heterogeneous and distributed systems, the availability of high volumes of sensitive information at the end systems maintained by corporations and government agencies, easy distribution of automated malicious software by malfeasors, the ease with which computer crimes can be committed anonymously from across geographic boundaries, and the lack of forensic evidence in computer crimes, which makes the detection and prosecution of criminals extremely difficult. Two classes of services are crucial for a secure Internet infrastructure. These include access control services and communication security services. Access James B.D. Joshi,",
"title": ""
},
{
"docid": "6e7120fe85a49693d2341cc224da7470",
"text": "OBJECTIVE\nThe Psychosocial Assessment Tool (PAT) was developed to screen for psychosocial risk in families of a child diagnosed with cancer. The current study is the first describing the cross-cultural adaptation, reliability, validity, and usability of the PAT in an European country (Dutch translation).\n\n\nMETHODS\nA total of 117 families (response rate 59%) of newly diagnosed children with cancer completed the PAT2.0 and validation measures.\n\n\nRESULTS\nAcceptable reliability was obtained for the PAT total score (α = .72) and majority of subscales (0.50-0.82). Two subscales showed inadequate internal consistency (Social Support α = .19; Family Beliefs α = .20). Validity and usability were adequate. Of the families, 66% scored low (Universal), 29% medium (Targeted), and 5% high (Clinical) risk.\n\n\nCONCLUSIONS\nThis study confirms the cross-cultural applicability, reliability, and validity of the PAT total score. Reliability left room for improvement on subscale level. Future research should indicate whether the PAT can be used to provide cost-effective care.",
"title": ""
},
{
"docid": "b0bf389688f9a11125c6bbd7202b6e2c",
"text": "Ascariasis, a worldwide parasitic disease, is regarded by some authorities as the most common parasitic infection in humans. The causative organism is Ascaris lumbricoides, which normally lives in the lumen of the small intestine. From the intestine, the worm can invade the bile duct or pancreatic duct, but invasion into the gallbladder is quite rare because of the anatomical features of the cystic duct, which is narrow and tortuous. Once it enters the gallbladder, it is exceedingly rare for the worm to migrate back to the intestine. We report a case of gallbladder ascariasis with worm migration back into the intestine, in view of its rare presentation.",
"title": ""
},
{
"docid": "d679b627d35a0797a8c70acce931f661",
"text": "In this work, we tackle the problem of crowd counting in images. We present a Convolutional Neural Network (CNN) based density estimation approach to solve this problem. Predicting a high resolution density map in one go is a challenging task. Hence, we present a two branch CNN architecture for generating high resolution density maps, where the first branch generates a low resolution density map, and the second branch incorporates the low resolution prediction and feature maps from the first branch to generate a high resolution density map. We also propose a multi-stage extension of our approach where each stage in the pipeline utilizes the predictions from all the previous stages. Empirical comparison with the previous state-of-the-art crowd counting methods shows that our method achieves the lowest mean absolute error on three challenging crowd counting benchmarks: Shanghaitech, WorldExpo’10, and UCF datasets.",
"title": ""
},
{
"docid": "0daa16a3f40612946187d6c66ccd96f4",
"text": "A 60 GHz frequency band planar diplexer based on Substrate Integrated Waveguide (SIW) technology is presented in this research. The 5th order millimeter wave SIW filter is investigated first, and then the 60 GHz SIW diplexer is designed and been simulated. SIW-microstrip transitions are also included in the final design. The relative bandwidths of up and down channels are 1.67% and 1.6% at 59.8 GHz and 62.2 GHz respectively. Simulation shows good channel isolation, small return losses and moderate insertion losses in pass bands. The diplexer can be easily integrated in millimeter wave integrated circuits.",
"title": ""
}
] | scidocsrr |
b911e784b37a1675b21acb722d294daf | Predicting Visual Exemplars of Unseen Classes for Zero-Shot Learning | [
{
"docid": "dc3495ec93462e68f606246205a8416d",
"text": "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.",
"title": ""
}
] | [
{
"docid": "d46329330906d2ea997cb63cb465bec0",
"text": "We consider a multilingual weakly supervised learning scenario where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide the learning in other languages. Past approaches project labels across bitext and use them as features or gold labels for training. We propose a new method that projects model expectations rather than labels, which facilities transfer of model uncertainty across language boundaries. We encode expectations as constraints and train a discriminative CRF model using Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on standard Chinese-English and German-English NER datasets, our method demonstrates F1 scores of 64% and 60% when no labeled data is used. Attaining the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences. Furthermore, when combined with labeled examples, our method yields significant improvements over state-of-the-art supervised methods, achieving best reported numbers to date on Chinese OntoNotes and German CoNLL-03 datasets.",
"title": ""
},
{
"docid": "9c9e36a64d82beada8807546636aef20",
"text": "Nowadays, FMCW (Frequency Modulated Continuous Wave) radar is widely adapted due to the use of solid state microwave amplifier to generate signal source. The FMCW radar can be implemented and analyzed at low cost and less complexity by using Software Defined Radio (SDR). In this paper, SDR based FMCW radar for target detection and air traffic control radar application is implemented in real time. The FMCW radar model is implemented using open source software and hardware. GNU Radio is utilized for software part of the radar and USRP (Universal Software Radio Peripheral) N210 for hardware part. Log-periodic antenna operating at 1GHZ frequency is used for transmission and reception of radar signals. From the beat signal obtained at receiver end and range resolution of signal, target is detected. Further low pass filtering followed by Fast Fourier Transform (FFT) is performed to reduce computational complexity.",
"title": ""
},
{
"docid": "631b473342cc30360626eaea0734f1d8",
"text": "Argument extraction is the task of identifying arguments, along with their components in text. Arguments can be usually decomposed into a claim and one or more premises justifying it. The proposed approach tries to identify segments that represent argument elements (claims and premises) on social Web texts (mainly news and blogs) in the Greek language, for a small set of thematic domains, including articles on politics, economics, culture, various social issues, and sports. The proposed approach exploits distributed representations of words, extracted from a large non-annotated corpus. Among the novel aspects of this work is the thematic domain itself which relates to social Web, in contrast to traditional research in the area, which concentrates mainly on law documents and scientific publications. The huge increase of social web communities, along with their user tendency to debate, makes the identification of arguments in these texts a necessity. In addition, a new manually annotated corpus has been constructed that can be used freely for research purposes. Evaluation results are quite promising, suggesting that distributed representations can contribute positively to the task of argument extraction.",
"title": ""
},
{
"docid": "768336582eb1aece4454ec461f3840d2",
"text": "This paper presents an Iterative Linear Quadratic Regulator (ILQR) me thod for locally-optimal feedback control of nonlinear dynamical systems. The method is applied to a musculo-s ke etal arm model with 10 state dimensions and 6 controls, and is used to compute energy-optimal reach ing movements. Numerical comparisons with three existing methods demonstrate that the new method converge s substantially faster and finds slightly better solutions.",
"title": ""
},
{
"docid": "f52b170e25eaf9478e520a0e81e96386",
"text": "General unsupervised learning is a long-standing conceptual problem in machine learning. Supervised learning is successful because it can be solved by the minimization of the training error cost function. Unsupervised learning is not as successful, because the unsupervised objective may be unrelated to the supervised task of interest. For an example, density modelling and reconstruction have often been used for unsupervised learning, but they did not produced the sought-after performance gains, because they have no knowledge of the sought-after supervised tasks. In this paper, we present an unsupervised cost function which we name the Output Distribution Matching (ODM) cost, which measures a divergence between the distribution of predictions and distributions of labels. The ODM cost is appealing because it is consistent with the supervised cost in the following sense: a perfect supervised classifier is also perfect according to the ODM cost. Therefore, by aggressively optimizing the ODM cost, we are almost guaranteed to improve our supervised performance whenever the space of possible predictions is exponentially large. We demonstrate that the ODM cost works well on number of small and semiartificial datasets using no (or almost no) labelled training cases. Finally, we show that the ODM cost can be used for one-shot domain adaptation, which allows the model to classify inputs that differ from the input distribution in significant ways without the need for prior exposure to the new domain.",
"title": ""
},
{
"docid": "a0251ae10bfabd188766aa2453b8cebb",
"text": "This paper presents the development of automatic vehicle plate detection system using image processing technique. The famous name for this system is Automatic Number Plate Recognition (ANPR). Automatic vehicle plate detection system is commonly used in field of safety and security systems especially in car parking area. Beside the safety aspect, this system is applied to monitor road traffic such as the speed of vehicle and identification of the vehicle's owner. This system is designed to assist the authorities in identifying the stolen vehicle not only for car but motorcycle as well. In this system, the Optical Character Recognition (OCR) technique was the prominent technique employed by researchers to analyse image of vehicle plate. The limitation of this technique was the incapability of the technique to convert text or data accurately. Besides, the characters, the background and the size of the vehicle plate are varied from one country to other country. Hence, this project proposes a combination of image processing technique and OCR to obtain the accurate vehicle plate recognition for vehicle in Malaysia. The outcome of this study is the system capable to detect characters and numbers of vehicle plate in different backgrounds (black and white) accurately. This study also involves the development of Graphical User Interface (GUI) to ease user in recognizing the characters and numbers in the vehicle or license plates.",
"title": ""
},
{
"docid": "1bf93bf9bd826c4701df5d2036b83226",
"text": "In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers, that adapts a hierarchical recurrent neural network and a latent topic clustering module. With our proposed model, a text is encoded to a vector representation from an wordlevel to a chunk-level to effectively capture the entire meaning. In particular, by adapting the hierarchical structure, our model shows very small performance degradations in longer text comprehension while other state-of-the-art recurrent neural network models suffer from it. Additionally, the latent topic clustering module extracts semantic information from target samples. This clustering module is useful for any text related tasks by allowing each data sample to find its nearest topic cluster, thus helping the neural network model analyze the entire data. We evaluate our models on the Ubuntu Dialogue Corpus and consumer electronic domain question answering dataset, which is related to Samsung products. The proposed model shows state-of-the-art results for ranking question-answer pairs.",
"title": ""
},
{
"docid": "e5a1f6546de9683e7dc90af147d73d40",
"text": "Progress in both speech and language processing has spurred efforts to support applications that rely on spoken rather than written language input. A key challenge in moving from text-based documents to such spoken documents is that spoken language lacks explicit punctuation and formatting, which can be crucial for good performance. This article describes different levels of speech segmentation, approaches to automatically recovering segment boundary locations, and experimental results demonstrating impact on several language processing tasks. The results also show a need for optimizing segmentation for the end task rather than independently.",
"title": ""
},
{
"docid": "02b3d799fa78e2c23de1cbb7a04e0ee9",
"text": "Users derive many benefits by storing personal data in cloud computing services; however the drawback of storing data in these services is that the user cannot access his/her own data when an internet connection is not available. To solve this problem in an efficient and elegant way, we propose the cloud-dew architecture. Cloud-dew architecture is an extension of the client-server architecture. In the extension, servers are further classified into cloud servers and dew servers. The dew servers are web servers that reside on user’s local computers and have a pluggable structure so that scripts and databases of websites can be installed easily. The cloud-dew architecture not only makes the personal data stored in the cloud continuously accessible by the user, but also enables a new application: web-surfing without an internet connection. An experimental system is presented to demonstrate the ideas of the cloud-dew architecture.",
"title": ""
},
{
"docid": "919342b88482e827c3923d66e0c50cb7",
"text": "Scoring sentences in documents given abstract summaries created by humans is important in extractive multi-document summarization. In this paper, we formulate extractive summarization as a two step learning problem building a generative model for pattern discovery and a regression model for inference. We calculate scores for sentences in document clusters based on their latent characteristics using a hierarchical topic model. Then, using these scores, we train a regression model based on the lexical and structural characteristics of the sentences, and use the model to score sentences of new documents to form a summary. Our system advances current state-of-the-art improving ROUGE scores by ∼7%. Generated summaries are less redundant and more coherent based upon manual quality evaluations.",
"title": ""
},
{
"docid": "69058572e8baaef255a3be6ac9eef878",
"text": "Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is challenging because developers must find and connect all of the non-local interactions between event-based JavaScript code, declarative CSS styles, and web page content that combine to express the behavior.\n The Scry tool embodies a new approach to locating the code that implements interactive behaviors. A developer selects a page element; whenever the element changes, Scry captures the rendering engine's inputs (DOM, CSS) and outputs (screenshot) for the element. For any two captured element states, Scry can compute how the states differ and which lines of JavaScript code were responsible. Using Scry, a developer can locate an interactive behavior's implementation by picking two output states; Scry indicates the JavaScript code directly responsible for their differences.",
"title": ""
},
{
"docid": "390505bd6f04e899a15c64c26beac606",
"text": "Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To close this gap, a two-stage deep neural network (TDNN) is proposed. In particular, in the first stage, using the rated essays for nontarget prompts as the training data, a shallow model is learned to select essays with an extreme quality for the target prompt, serving as pseudo training data; in the second stage, an end-to-end hybrid deep model is proposed to learn a prompt-dependent rating model consuming the pseudo training data from the first step. Evaluation of the proposed TDNN on the standard ASAP dataset demonstrates a promising improvement for the prompt-independent AES task.",
"title": ""
},
{
"docid": "8b1a811e09ba0c468044e2bf2d6ef700",
"text": "Ensembles of learning machines are promising for software effort estimation (SEE), but need to be tailored for this task to have their potential exploited. A key issue when creating ensembles is to produce diverse and accurate base models. Depending on how differently different performance measures behave for SEE, they could be used as a natural way of creating SEE ensembles. We propose to view SEE model creation as a multiobjective learning problem. A multiobjective evolutionary algorithm (MOEA) is used to better understand the tradeoff among different performance measures by creating SEE models through the simultaneous optimisation of these measures. We show that the performance measures behave very differently, presenting sometimes even opposite trends. They are then used as a source of diversity for creating SEE ensembles. A good tradeoff among different measures can be obtained by using an ensemble of MOEA solutions. This ensemble performs similarly or better than a model that does not consider these measures explicitly. Besides, MOEA is also flexible, allowing emphasis of a particular measure if desired. In conclusion, MOEA can be used to better understand the relationship among performance measures and has shown to be very effective in creating SEE models.",
"title": ""
},
{
"docid": "6922a913c6ede96d5062f055b55377e7",
"text": "This paper presents the issue of a nonharmonic multitone generation with the use of singing bowls and the digital signal processors. The authors show the possibility of therapeutic applications of such multitone signals. Some known methods of the digital generation of the tone signal with the additional modulation are evaluated. Two projects of the very precise multitone generators are presented. In described generators, the digital signal processors synthesize the signal, while the additional microcontrollers realize the operator's interface. As a final result, the sound of the original singing bowls is confronted with the sound synthesized by one of the generators.",
"title": ""
},
{
"docid": "041b308fe83ac9d5a92e33fd9c84299a",
"text": "Spaceborne synthetic aperture radar systems are severely constrained to a narrow swath by ambiguity limitations. Here a vertically scanned-beam synthetic aperture system (SCANSAR) is proposed as a solution to this problem. The potential length of synthetic aperture must be shared between beam positions, so the along-track resolution is poorer; a direct tradeoff exists between resolution and swath width. The length of the real aperture is independently traded against the number of scanning positions. Design curves and equations are presented for spaceborne SCANSARs for altitudes between 400 and 1400 km and inner angles of incidence between 20° and 40°. When the real antenna is approximately square, it may also be used for a microwave radiometer. The combined radiometer and synthetic-aperture (RADISAR) should be useful for those applications where the poorer resolution of the radiometer is useful for some purposes, but the finer resolution of the radar is needed for others.",
"title": ""
},
{
"docid": "564ef7dc6c0d7bed77c198bc8d1b6d9f",
"text": "A method for requirements analysis is proposed that accounts for individual and personal goals, and the effect of time and context on personal requirements. First a framework to analyse the issues inherent in requirements that change over time and location is proposed. The implications of the framework on system architecture are considered as three implementation pathways: functional specifications, development of customisable features and automatic adaptation by the system. These pathways imply the need to analyse system architecture requirements. A scenario-based analysis method is described for specifying requirements goals and their potential change. The method addresses goal setting for measurement and monitoring, and conflict resolution when requirements at different layers (group, individual) and from different sources (personal, advice from an external authority) conflict. The method links requirements analysis to design by modelling alternative solution pathways. Different implementation pathways have cost–benefit implications for stakeholders, so cost–benefit analysis techniques are proposed to assess trade-offs between goals and implementation strategies. The use of the framework is illustrated with two case studies in assistive technology domains: e-mail and a personalised navigation system. The first case study illustrates personal requirements to help cognitively disabled users communicate via e-mail, while the second addresses personal and mobile requirements to help disabled users make journeys on their own, assisted by a mobile PDA guide. In both case studies the experience from requirements analysis to implementation, requirements monitoring, and requirements evolution is reported.",
"title": ""
},
{
"docid": "89a04e656c8e42a78363a5087771b58d",
"text": "Analyzing the security of Wearable Internet-of-Things (WIoT) devices is considered a complex task due to their heterogeneous nature. In addition, there is currently no mechanism that performs security testing for WIoT devices in different contexts. In this article, we propose an innovative security testbed framework targeted at wearable devices, where a set of security tests are conducted, and a dynamic analysis is performed by realistically simulating environmental conditions in which WIoT devices operate. The architectural design of the proposed testbed and a proof-of-concept, demonstrating a preliminary analysis and the detection of context-based attacks executed by smartwatch devices, are presented.",
"title": ""
},
{
"docid": "4899e13d5c85b63a823db9c4340824e7",
"text": "With the prevalence of server blades and systems-on-a-chip (SoCs), interconnection networks are becoming an important part of the microprocessor landscape. However, there is limited tool support available for their design. While performance simulators have been built that enable performance estimation while varying network parameters, these cover only one metric of interest in modern designs. System power consumption is increasingly becoming equally, if not more important than performance. It is now critical to get detailed power-performance tradeoff information early in the microarchitectural design cycle. This is especially so as interconnection networks consume a significant fraction of total system power. It is exactly this gap that the work presented in this paper aims to fill.We present Orion, a power-performance interconnection network simulator that is capable of providing detailed power characteristics, in addition to performance characteristics, to enable rapid power-performance trade-offs at the architectural-level. This capability is provided within a general framework that builds a simulator starting from a microarchitectural specification of the interconnection network. A key component of this construction is the architectural-level parameterized power models that we have derived as part of this effort. Using component power models and a synthesized efficient power (and performance) simulator, a microarchitect can rapidly explore the design space. As case studies, we demonstrate the use of Orion in determining optimal system parameters, in examining the effect of diverse traffic conditions, as well as evaluating new network microarchitectures. In each of the above, the ability to simultaneously monitor power and performance is key in determining suitable microarchitectures.",
"title": ""
},
{
"docid": "73f6ba4ad9559cd3c6f7a88223e4b556",
"text": "A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation and we show how to use it to increase the accuracy and reduce over fitting on a target network. Smart augmentation works, by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that networks loss. This allows us to learn augmentations that minimize the error of that network. Smart augmentation has shown the potential to increase accuracy by demonstrably significant measures on all data sets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.",
"title": ""
}
] | scidocsrr |
9aa2c511236d5da09d4026a3eb15f6d1 | One-step and Two-step Classification for Abusive Language Detection on Twitter | [
{
"docid": "79ece5e02742de09b01908668383e8f2",
"text": "Hate speech in the form of racist and sexist remarks are a common occurrence on social media. For that reason, many social media services address the problem of identifying hate speech, but the definition of hate speech varies markedly and is largely a manual effort (BBC, 2015; Lomas, 2015). We provide a list of criteria founded in critical race theory, and use them to annotate a publicly available corpus of more than 16k tweets. We analyze the impact of various extra-linguistic features in conjunction with character n-grams for hatespeech detection. We also present a dictionary based the most indicative words in our data.",
"title": ""
}
] | [
{
"docid": "18959618a153812f6c4f38ce2803084a",
"text": "This decade sees a growing number of applications of Unmanned Aerial Vehicles (UAVs) or drones. UAVs are now being experimented for commercial applications in public areas as well as used in private environments such as in farming. As such, the development of efficient communication protocols for UAVs is of much interest. This paper compares and contrasts recent communication protocols of UAVs with that of Vehicular Ad Hoc Networks (VANETs) using Wireless Access in Vehicular Environments (WAVE) protocol stack as the reference model. The paper also identifies the importance of developing light-weight communication protocols for certain applications of UAVs as they can be both of low processing power and limited battery energy.",
"title": ""
},
{
"docid": "391f9b889b1c3ffe3e8ee422d108edcd",
"text": "Does the brain of a bilingual process language differently from that of a monolingual? We compared how bilinguals and monolinguals recruit classic language brain areas in response to a language task and asked whether there is a neural signature of bilingualism. Highly proficient and early-exposed adult Spanish-English bilinguals and English monolinguals participated. During functional magnetic resonance imaging (fMRI), participants completed a syntactic sentence judgment task [Caplan, D., Alpert, N., & Waters, G. Effects of syntactic structure and propositional number on patterns of regional cerebral blood flow. Journal of Cognitive Neuroscience, 10, 541552, 1998]. The sentences exploited differences between Spanish and English linguistic properties, allowing us to explore similarities and differences in behavioral and neural responses between bilinguals and monolinguals, and between a bilingual's two languages. If bilinguals' neural processing differs across their two languages, then differential behavioral and neural patterns should be observed in Spanish and English. Results show that behaviorally, in English, bilinguals and monolinguals had the same speed and accuracy, yet, as predicted from the Spanish-English structural differences, bilinguals had a different pattern of performance in Spanish. fMRI analyses revealed that both monolinguals (in one language) and bilinguals (in each language) showed predicted increases in activation in classic language areas (e.g., left inferior frontal cortex, LIFC), with any neural differences between the bilingual's two languages being principled and predictable based on the morphosyntactic differences between Spanish and English. However, an important difference was that bilinguals had a significantly greater increase in the blood oxygenation level-dependent signal in the LIFC (BA 45) when processing English than the English monolinguals. The results provide insight into the decades-old question about the degree of separation of bilinguals' dual-language representation. The differential activation for bilinguals and monolinguals opens the question as to whether there may possibly be a neural signature of bilingualism. Differential activation may further provide a fascinating window into the language processing potential not recruited in monolingual brains and reveal the biological extent of the neural architecture underlying all human language.",
"title": ""
},
{
"docid": "616a973d41a67b4fe07d264c4ebfe26f",
"text": "Every year, novel NVIDIA GPU designs are introduced. This rapid architectural and technological progression, coupled with a reluctance by manufacturers to disclose low-level details, makes it difficult for even the most proficient GPU software designers to remain up-to-date with the technological advances at a microarchitectural level. To address this dearth of public, microarchitectural-level information on the novel NVIDIA GPUs, independent researchers have resorted to microbenchmarks-based dissection and discovery. This has led to a prolific line of publications that shed light on instruction encoding, and memory hierarchy's geometry and features at each level. Namely, research that describes the performance and behavior of the Kepler, Maxwell and Pascal architectures. In this technical report, we continue this line of research by presenting the microarchitectural details of the NVIDIA Volta architecture, discovered through microbenchmarks and instruction set disassembly. Additionally, we compare quantitatively our Volta findings against its predecessors, Kepler, Maxwell and Pascal.",
"title": ""
},
{
"docid": "ac7591b1a0011b38ae88f5a4dd7ad200",
"text": "A succinct overview of some of the major research approaches to the study of leadership is provided as a foundation for the introduction of a multicomponent model of leadership that draws on those findings, complexity theory, and the concept of emergence. The major aspects of the model include: the personal characteristics and capacities, thoughts, feelings, behaviors, and human working relationships of leaders, followers, and other stake holders, the organization’s systems, including structures, processes, contents, and internal situations, the organization’s performance and outcomes, and the external environment(s), ecological niches, and external situations in which an enterprise functions. The relationship between this model and other approaches in the literature as well as directions in research on leadership and implications for consulting practice are discussed.",
"title": ""
},
{
"docid": "dfc9099b1b31d5f214b341c65fbb8e92",
"text": "In this communication, a dual-feed dual-polarized microstrip antenna with low cross polarization and high isolation is experimentally studied. Two different feed mechanisms are designed to excite a dual orthogonal linearly polarized mode from a single radiating patch. One of the two modes is excited by an aperture-coupled feed, which comprises a compact resonant annular-ring slot and a T-shaped microstrip feedline; while the other is excited by a pair of meandering strips with a 180$^{\\circ}$ phase differences. Both linearly polarized modes are designed to operate at 2400-MHz frequency band, and from the measured results, it is found that the isolation between the two feeding ports is less than 40 dB across a 10-dB input-impedance bandwidth of 14%. In addition, low cross polarization is observed from the radiation patterns of the two modes, especially at the broadside direction. Simulation analyses are also carried out to support the measured results.",
"title": ""
},
{
"docid": "7ce71bb026f852efae77914443fee7f5",
"text": "OBJECTIVE\nThis study aimed to compare mental health, quality of life, empathy, and burnout in medical students from a medical institution in the USA and another one in Brazil.\n\n\nMETHODS\nThis cross-cultural study included students enrolled in the first and second years of their undergraduate medical training. We evaluated depression, anxiety, and stress (DASS 21), empathy, openness to spirituality, and wellness (ESWIM), burnout (Oldenburg), and quality of life (WHOQOL-Bref) and compared them between schools.\n\n\nRESULTS\nA total of 138 Brazilian and 73 US medical students were included. The comparison between all US medical students and all Brazilian medical students revealed that Brazilians reported more depression and stress and US students reported greater wellness, less exhaustion, and greater environmental quality of life. In order to address a possible response bias favoring respondents with better mental health, we also compared all US medical students with the 50% of Brazilian medical students who reported better mental health. In this comparison, we found Brazilian medical students had higher physical quality of life and US students again reported greater environmental quality of life. Cultural, social, infrastructural, and curricular differences were compared between institutions. Some noted differences were that students at the US institution were older and were exposed to smaller class sizes, earlier patient encounters, problem-based learning, and psychological support.\n\n\nCONCLUSION\nWe found important differences between Brazilian and US medical students, particularly in mental health and wellness. These findings could be explained by a complex interaction between several factors, highlighting the importance of considering cultural and school-level influences on well-being.",
"title": ""
},
{
"docid": "679759d8f8e4c4ef5a2bb1356a61d7f5",
"text": "This paper describes a method of implementing two factor authentication using mobile phones. The proposed method guarantees that authenticating to services, such as online banking or ATM machines, is done in a very secure manner. The proposed system involves using a mobile phone as a software token for One Time Password generation. The generated One Time Password is valid for only a short user-defined period of time and is generated by factors that are unique to both, the user and the mobile device itself. Additionally, an SMS-based mechanism is implemented as both a backup mechanism for retrieving the password and as a possible mean of synchronization. The proposed method has been implemented and tested. Initial results show the success of the proposed method.",
"title": ""
},
{
"docid": "99ed46c953a7a00e6d9a5dbd214cae77",
"text": "A number of important problems in theoretical computer science and machine learning can be interpreted as recovering a certain basis. These include certain tensor decompositions, Independent Component Analysis (ICA), spectral clustering and Gaussian mixture learning. Each of these problems reduces to an instance of our general model, which we call a “Basis Encoding Function” (BEF). We show that learning a basis within this model can then be provably and efficiently achieved using a first order iteration algorithm (gradient iteration). Our algorithm goes beyond tensor methods, providing a function-based generalization for a number of existing methods including the classical matrix power method, the tensor power iteration as well as cumulant-based FastICA. Our framework also unifies the unusual phenomenon observed in these domains that they can be solved using efficient non-convex optimization. Specifically, we describe a class of BEFs such that their local maxima on the unit sphere are in one-to-one correspondence with the basis elements. This description relies on a certain “hidden convexity” property of these functions. We provide a complete theoretical analysis of gradient iteration even when the BEF is perturbed. We show convergence and complexity bounds polynomial in dimension and other relevant parameters, such as perturbation size. Our perturbation results can be considered as a non-linear version of the classical Davis-Kahan theorem for perturbations of eigenvectors of symmetric matrices. In addition we show that our algorithm exhibits fast (superlinear) convergence and relate the speed of convergence to the properties of the BEF. Moreover, the gradient iteration algorithm can be easily and efficiently implemented in practice. Finally we apply our framework by providing the first provable algorithm for recovery in a general perturbed ICA model. ar X iv :1 41 1. 14 20 v3 [ cs .L G ] 3 N ov 2 01 5",
"title": ""
},
{
"docid": "f48ee93659a25bee9a49e8be6c789987",
"text": "what design is from a theoretical point of view, which is a role of the descriptive model. However, descriptive models are not necessarily helpful in directly deriving either the architecture of intelligent CAD or the knowledge representation for intelligent CAD. For this purpose, we need a computable design process model that should coincide, at least to some extent, with a cognitive model that explains actual design activities. One of the major problems in developing so-called intelligent computer-aided design (CAD) systems (ten Hagen and Tomiyama 1987) is the representation of design knowledge, which is a two-part process: the representation of design objects and the representation of design processes. We believe that intelligent CAD systems will be fully realized only when these two types of representation are integrated. Progress has been made in the representation of design objects, as can be seen, for example, in geometric modeling; however, almost no significant results have been seen in the representation of design processes, which implies that we need a design theory to formalize them. According to Finger and Dixon (1989), design process models can be categorized into a descriptive model that explains how design is done, a cognitive model that explains the designer’s behavior, a prescriptive model that shows how design must be done, and a computable model that expresses a method by which a computer can accomplish a task. A design theory for intelligent CAD is not useful when it is merely descriptive or cognitive; it must also be computable. We need a general model of design Articles",
"title": ""
},
{
"docid": "490fe197e7ed6c658160c8a04ee1fc82",
"text": "Automatic concept learning from large scale imbalanced data sets is a key issue in video semantic analysis and retrieval, which means the number of negative examples is far more than that of positive examples for each concept in the training data. The existing methods adopt generally under-sampling for the majority negative examples or over-sampling for the minority positive examples to balance the class distribution on training data. The main drawbacks of these methods are: (1) As a key factor that affects greatly the performance, in most existing methods, the degree of re-sampling needs to be pre-fixed, which is not generally the optimal choice; (2) Many useful negative samples may be discarded in under-sampling. In addition, some works only focus on the improvement of the computational speed, rather than the accuracy. To address the above issues, we propose a new approach and algorithm named AdaOUBoost (Adaptive Over-sampling and Under-sampling Boost). The novelty of AdaOUBoost mainly lies in: adaptively over-sample the minority positive examples and under-sample the majority negative examples to form different sub-classifiers. And combine these sub-classifiers according to their accuracy to create a strong classifier, which aims to use fully the whole training data and improve the performance of the class-imbalance learning classifier. In AdaOUBoost, first, our clustering-based under-sampling method is employed to divide the majority negative examples into some disjoint subsets. Then, for each subset of negative examples, we utilize the borderline-SMOTE (synthetic minority over-sampling technique) algorithm to over-sample the positive examples with different size, train each sub-classifier using each of them, and get the classifier by fusing these sub-classifiers with different weights. Finally, we combine these classifiers in each subset of negative examples to create a strong classifier. We compare the performance between AdaOUBoost and the state-of-the-art methods on TRECVID 2008 benchmark with all 20 concepts, and the results show the AdaOUBoost can achieve the superior performance in large scale imbalanced data sets.",
"title": ""
},
{
"docid": "dfa62c69b1ab26e7e160100b69794674",
"text": "Canonical correlation analysis (CCA) is a well established technique for identifying linear relationships among two variable sets. Kernel CCA (KCCA) is the most notable nonlinear extension but it lacks interpretability and robustness against irrelevant features. The aim of this article is to introduce two nonlinear CCA extensions that rely on the recently proposed Hilbert-Schmidt independence criterion and the centered kernel target alignment. These extensions determine linear projections that provide maximally dependent projected data pairs. The paper demonstrates that the use of linear projections allows removing irrelevant features, whilst extracting combinations of strongly associated features. This is exemplified through a simulation and the analysis of recorded data that are available in the literature.",
"title": ""
},
{
"docid": "ca1ba40c90720275fdf5b749a9f8ed10",
"text": "In this technological world one of the general method for user to save their data is cloud. Most of the cloud storage company provides some storage space as free to its users. Both individuals and corporate are storing their files in the cloud infrastructure so it becomes a problem for a forensics analyst to perform evidence acquisition and examination. One reason that makes evidence acquisition more difficult is user data always saved in remote computer on cloud. Various cloud companies available in the market serving storage as one of their services and everyone delivering different kinds of features and facilities in the storage technology. One area of difficulty is the acquisition of evidential data associated to a cybercrime stored in a different cloud company service. Due to lack of understanding about the location of evidence data regarding which place it is saved could also affect an analytical process and it take a long time to speak with all cloud service companies to find whether data is saved within their cloud. By analyzing two cloud service companies (IDrive and Mega cloud drive) this study elaborates the various steps involved in the activity of obtaining evidence on a user account through a browser and then via cloud software application on a Windows 7 machine. This paper will detail findings for both the Mega cloud drive and IDrive client software, to find the different evidence that IDrive and the mega cloud drive leaves behind on a user computer. By establishing the artifacts on a user machine will give an overall idea regarding kind of evidence residue in user computer for investigators. Key evidences discovered on this investigation comprises of RAM memory captures, registry files application logs, file time and date values and browser artifacts are acquired from these two cloud companies on a user windows machine.",
"title": ""
},
{
"docid": "8ab80b9f51166e7b5cc1b60da443bc6b",
"text": "How to represent a map of the environment is a key question of robotics. In this paper, we focus on suggesting a representation well-suited for online map building from vision-based data and online planning in 3D. We propose to combine a commonly-used representation in computer graphics and surface reconstruction, projective Truncated Signed Distance Field (TSDF), with a representation frequently used for collision checking and collision costs in planning, Euclidean Signed Distance Field (ESDF), and validate this combined approach in simulation. We argue that this type of map is better-suited for robotic applications than existing representations.",
"title": ""
},
{
"docid": "28f6751a043201fd8313944b4f79101f",
"text": "FLLL 2 Preface This is a printed collection of the contents of the lecture \" Genetic Algorithms: Theory and Applications \" which I gave first in the winter semester 1999/2000 at the Johannes Kepler University in Linz. The reader should be aware that this manuscript is subject to further reconsideration and improvement. Corrections, complaints, and suggestions are cordially welcome. The sources were manifold: Chapters 1 and 2 were written originally for these lecture notes. All examples were implemented from scratch. The third chapter is a distillation of the books of Goldberg [13] and Hoffmann [15] and a handwritten manuscript of the preceding lecture on genetic algorithms which was given by Andreas Stöckl in 1993 at the Johannes Kepler University. Chapters 4, 5, and 7 contain recent adaptations of previously published material from my own master thesis and a series of lectures which was given by Francisco Herrera and myself at the Second Summer School on Advanced Control at the Slovak Technical University, Bratislava, in summer 1997 [4]. Chapter 6 was written originally, however, strongly influenced by A. Geyer-Schulz's works and H. Hörner's paper on his C++ GP kernel [18]. I would like to thank all the students attending the first GA lecture in Winter 1999/2000, for remaining loyal throughout the whole term and for contributing much to these lecture notes with their vivid, interesting, and stimulating questions, objections, and discussions. Last but not least, I want to express my sincere gratitude to Sabine Lumpi and Susanne Saminger for support in organizational matters, and Pe-ter Bauer for proofreading .",
"title": ""
},
{
"docid": "324c0fe0d57734b54dd03e468b7b4603",
"text": "This paper studies the use of received signal strength indicators (RSSI) applied to fingerprinting method in a Bluetooth network for indoor positioning. A Bayesian fusion (BF) method is proposed to combine the statistical information from the RSSI measurements and the prior information from a motion model. Indoor field tests are carried out to verify the effectiveness of the method. Test results show that the proposed BF algorithm achieves a horizontal positioning accuracy of about 4.7 m on the average, which is about 6 and 7 % improvement when compared with Bayesian static estimation and a point Kalman filter method, respectively.",
"title": ""
},
{
"docid": "7cfc2866218223ba6bd56eb1f10ce29f",
"text": "This paper deals with prediction of anopheles number, the main vector of malaria risk, using environmental and climate variables. The variables selection is based on an automatic machine learning method using regression trees, and random forests combined with stratified two levels cross validation. The minimum threshold of variables importance is accessed using the quadratic distance of variables importance while the optimal subset of selected variables is used to perform predictions. Finally the results revealed to be qualitatively better, at the selection, the prediction, and the CPU time point of view than those obtained by GLM-Lasso method.",
"title": ""
},
{
"docid": "2d55c21d2d222501e85595b6b35a956f",
"text": "OBJECTIVE\nAlthough the prevalence of children with pervasive developmental disorders (PDD) has increased, empirical data about the role and practices of occupational therapists have not been reported in the literature. This descriptive study investigated the practice of occupational therapists with children with PDD.\n\n\nMETHOD\nA survey was mailed to 500 occupational therapists in the Sensory Integration Special Interest Section or School System Special Interest Section of the American Occupational Therapy Association in eastern and midwestern United States. The valid return rate was 58% (292 respondents). The survey used Likert scale items to measure frequency of performance problems observed in children with PDD, performance areas addressed in intervention, perceived improvement in performance, and frequency of use of and competency in intervention approaches.\n\n\nRESULTS\nThe respondents primarily worked in schools and reported that in the past 5 years they had served an increasing number of children with PDD. Most respondents provided direct services and appeared to use holistic approaches in which they addressed multiple performance domains. They applied sensory integration and environmental modification approaches most frequently and believed that they were most competent in using these approaches. Respondents who reported more frequent use of and more competence in sensory integration approaches perceived more improvement in children's sensory processing. Respondents who reported more frequent use of and more competence in child-centered play perceived more improvement in children's sensory integration and play skills.",
"title": ""
},
{
"docid": "d9bd41c14c5e37ad08fc4811bb943089",
"text": "With the increased global use of online media platforms, there are more opportunities than ever to misuse those platforms or perpetrate fraud. One such fraud is within the music industry, where perpetrators create automated programs, streaming songs to generate revenue or increase popularity of an artist. With growing annual revenue of the digital music industry, there are significant financial incentives for perpetrators with fraud in mind. The focus of the study is extracting user behavioral patterns and utilising them to train and compare multiple supervised classification method to detect fraud. The machine learning algorithms examined are Logistic Regression, Support Vector Machines, Random Forest and Artificial Neural Networks. The study compares performance of these algorithms trained on imbalanced datasets carrying different fractions of fraud. The trained models are evaluated using the Precision Recall Area Under the Curve (PR AUC) and a F1-score. Results show that the algorithms achieve similar performance when trained on balanced and imbalanced datasets. It also shows that Random Forest outperforms the other methods for all datasets tested in this experiment.",
"title": ""
},
{
"docid": "6dbfefb384a3dbd28beee2d0daebae52",
"text": "Many NLP applications require disambiguating polysemous words. Existing methods that learn polysemous word vector representations involve first detecting various senses and optimizing the sensespecific embeddings separately, which are invariably more involved than single sense learning methods such as word2vec. Evaluating these methods is also problematic, as rigorous quantitative evaluations in this space is limited, especially when compared with single-sense embeddings. In this paper, we propose a simple method to learn a word representation, given any context. Our method only requires learning the usual single sense representation, and coefficients that can be learnt via a single pass over the data. We propose several new test sets for evaluating word sense induction, relevance detection, and contextual word similarity, significantly supplementing the currently available tests. Results on these and other tests show that while our method is embarrassingly simple, it achieves excellent results when compared to the state of the art models for unsupervised polysemous word representation learning. Our code and data are at https://github.com/dingwc/",
"title": ""
},
{
"docid": "97af4f8e35a7d773bb85969dd027800b",
"text": "For an intelligent transportation system (ITS), traffic incident detection is one of the most important issues, especially for urban area which is full of signaled intersections. In this paper, we propose a novel traffic incident detection method based on the image signal processing and hidden Markov model (HMM) classifier. First, a traffic surveillance system was set up at a typical intersection of china, traffic videos were recorded and image sequences were extracted for image database forming. Second, compressed features were generated through several image processing steps, image difference with FFT was used to improve the recognition rate. Finally, HMM was used for classification of traffic signal logics (East-West, West-East, South-North, North-South) and accident of crash, the total correct rate is 74% and incident recognition rate is 84%. We believe, with more types of incident adding to the database, our detection algorithm could serve well for the traffic surveillance system.",
"title": ""
}
] | scidocsrr |
62b8ef39d2ec05c9aee2b4445c1e5c4e | A Large-Displacement 3-DOF Flexure Parallel Mechanism with Decoupled Kinematics Structure | [
{
"docid": "f7f90e224c71091cc3e6356ab1ec0ea5",
"text": "A new two-degrees-of-freedom (2-DOF) compliant parallel micromanipulator (CPM) utilizing flexure joints has been proposed for two-dimensional (2-D) nanomanipulation in this paper. The system is developed by a careful design and proper selection of electrical and mechanical components. Based upon the developed PRB model, both the position and velocity kinematic modelings have been performed in details, and the CPM's workspace area is determined analytically in view of the physical constraints imposed by pizeo-actuators and flexure hinges. Moreover, in order to achieve a maximum workspace subjected to the given dexterity indices, kinematic optimization of the design parameters has been carried out, which leads to a manipulator satisfying the requirement of this work. Simulation results reveal that the designed CPM can perform a high dexterous manipulation within its workspace.",
"title": ""
}
] | [
{
"docid": "816575ea7f7903784abba96180190ea3",
"text": "The decision tree output of Quinlan's ID3 algorithm is one of its major weaknesses. Not only can it be incomprehensible and difficult to manipulate, but its use in expert systems frequently demands irrelevant information to be supplied. This report argues that the problem lies in the induction algorithm itself and can only be remedied by radically altering the underlying strategy. It describes a new algorithm, PRISM which, although based on ID3, uses a different induction strategy to induce rules which are modular, thus avoiding many of the problems associated with decision trees.",
"title": ""
},
{
"docid": "59daeea2c602a1b1d64bae95185f9505",
"text": "Traumatic brain injury (TBI) triggers endoplasmic reticulum (ER) stress and impairs autophagic clearance of damaged organelles and toxic macromolecules. In this study, we investigated the effects of the post-TBI administration of docosahexaenoic acid (DHA) on improving hippocampal autophagy flux and cognitive functions of rats. TBI was induced by cortical contusion injury in Sprague–Dawley rats, which received DHA (16 mg/kg in DMSO, intraperitoneal administration) or vehicle DMSO (1 ml/kg) with an initial dose within 15 min after the injury, followed by a daily dose for 3 or 7 days. First, RT-qPCR reveals that TBI induced a significant elevation in expression of autophagy-related genes in the hippocampus, including SQSTM1/p62 (sequestosome 1), lysosomal-associated membrane proteins 1 and 2 (Lamp1 and Lamp2), and cathepsin D (Ctsd). Upregulation of the corresponding autophagy-related proteins was detected by immunoblotting and immunostaining. In contrast, the DHA-treated rats did not exhibit the TBI-induced autophagy biogenesis and showed restored CTSD protein expression and activity. T2-weighted images and diffusion tensor imaging (DTI) of ex vivo brains showed that DHA reduced both gray matter and white matter damages in cortical and hippocampal tissues. DHA-treated animals performed better than the vehicle control group on the Morris water maze test. Taken together, these findings suggest that TBI triggers sustained stimulation of autophagy biogenesis, autophagy flux, and lysosomal functions in the hippocampus. Swift post-injury DHA administration restores hippocampal lysosomal biogenesis and function, demonstrating its therapeutic potential.",
"title": ""
},
{
"docid": "3732f96144d7f28c88670dd63aff63a1",
"text": "The problem of defining and classifying power system stability has been addressed by several previous CIGRE and IEEE Task Force reports. These earlier efforts, however, do not completely reflect current industry needs, experiences and understanding. In particular, the definitions are not precise and the classifications do not encompass all practical instability scenarios. This report developed by a Task Force, set up jointly by the CIGRE Study Committee 38 and the IEEE Power System Dynamic Performance Committee, addresses the issue of stability definition and classification in power systems from a fundamental viewpoint and closely examines the practical ramifications. The report aims to define power system stability more precisely, provide a systematic basis for its classification, and discuss linkages to related issues such as power system reliability and security.",
"title": ""
},
{
"docid": "50d0b1e141bcea869352c9b96b0b2ad5",
"text": "In this paper we present the features of a Question/Answering (Q/A) system that had unparalleled performance in the TREC-9 evaluations. We explain the accuracy of our system through the unique characteristics of its architecture: (1) usage of a wide-coverage answer type taxonomy; (2) repeated passage retrieval; (3) lexico-semantic feedback loops; (4) extraction of the answers based on machine learning techniques; and (5) answer caching. Experimental results show the effects of each feature on the overall performance of the Q/A system and lead to general conclusions about Q/A from large text collections.",
"title": ""
},
{
"docid": "b9400c6d317f60dc324877d3a739fd17",
"text": "The present article presents a tutorial on how to estimate and interpret various effect sizes. The 5th edition of the Publication Manual of the American Psychological Association (2001) described the failure to report effect sizes as a “defect” (p. 5), and 23 journals have published author guidelines requiring effect size reporting. Although dozens of effect size statistics have been available for some time, many researchers were trained at a time when effect sizes were not emphasized, or perhaps even taught. Consequently, some readers may appreciate a review of how to estimate and interpret various effect sizes. In addition to the tutorial, the authors recommend effect size interpretations that emphasize direct and explicit comparisons of effects in a new study with those reported in the prior related literature, with a focus on evaluating result replicability.",
"title": ""
},
{
"docid": "d1c2936521b0a3270163ea4d9123e4da",
"text": "Large-scale instance-level image retrieval aims at retrieving specific instances of objects or scenes. Simultaneously retrieving multiple objects in a test image adds to the difficulty of the problem, especially if the objects are visually similar. This paper presents an efficient approach for per-exemplar multi-label image classification, which targets the recognition and localization of products in retail store images. We achieve runtime efficiency through the use of discriminative random forests, deformable dense pixel matching and genetic algorithm optimization. Cross-dataset recognition is performed, where our training images are taken in ideal conditions with only one single training image per product label, while the evaluation set is taken using a mobile phone in real-life scenarios in completely different conditions. In addition, we provide a large novel dataset and labeling tools for products image search, to motivate further research efforts on multi-label retail products image classification. The proposed approach achieves promising results in terms of both accuracy and runtime efficiency on 680 annotated images of our dataset, and 885 test images of GroZi-120 dataset. We make our dataset of 8350 different product images and the 680 test images from retail stores with complete annotations available to the wider community.",
"title": ""
},
{
"docid": "5db123f7b584b268f908186c67d3edcb",
"text": "From the point of view of a programmer, the robopsychology is a synonym for the activity is done by developers to implement their machine learning applications. This robopsychological approach raises some fundamental theoretical questions of machine learning. Our discussion of these questions is constrained to Turing machines. Alan Turing had given an algorithm (aka the Turing Machine) to describe algorithms. If it has been applied to describe itself then this brings us to Turing’s notion of the universal machine. In the present paper, we investigate algorithms to write algorithms. From a pedagogy point of view, this way of writing programs can be considered as a combination of learning by listening and learning by doing due to it is based on applying agent technology and machine learning. As the main result we introduce the problem of learning and then we show that it cannot easily be handled in reality therefore it is reasonable to use machine learning algorithm for learning Turing machines.",
"title": ""
},
{
"docid": "fc3aeb32f617f7a186d41d56b559a2aa",
"text": "Existing neural relation extraction (NRE) models rely on distant supervision and suffer from wrong labeling problems. In this paper, we propose a novel adversarial training mechanism over instances for relation extraction to alleviate the noise issue. As compared with previous denoising methods, our proposed method can better discriminate those informative instances from noisy ones. Our method is also efficient and flexible to be applied to various NRE architectures. As shown in the experiments on a large-scale benchmark dataset in relation extraction, our denoising method can effectively filter out noisy instances and achieve significant improvements as compared with the state-of-theart models.",
"title": ""
},
{
"docid": "66d5101d55595754add37e9e50952056",
"text": "The cognitive neural prosthetic (CNP) is a very versatile method for assisting paralyzed patients and patients with amputations. The CNP records the cognitive state of the subject, rather than signals strictly related to motor execution or sensation. We review a number of high-level cortical signals and their application for CNPs, including intention, motor imagery, decision making, forward estimation, executive function, attention, learning, and multi-effector movement planning. CNPs are defined by the cognitive function they extract, not the cortical region from which the signals are recorded. However, some cortical areas may be better than others for particular applications. Signals can also be extracted in parallel from multiple cortical areas using multiple implants, which in many circumstances can increase the range of applications of CNPs. The CNP approach relies on scientific understanding of the neural processes involved in cognition, and many of the decoding algorithms it uses also have parallels to underlying neural circuit functions. 169 A nn u. R ev . P sy ch ol . 2 01 0. 61 :1 69 -1 90 . D ow nl oa de d fr om a rj ou rn al s. an nu al re vi ew s. or g by C al if or ni a In st itu te o f T ec hn ol og y on 0 1/ 03 /1 0. F or p er so na l u se o nl y. ANRV398-PS61-07 ARI 17 November 2009 19:51 Cognitive neural prosthetics (CNPs): instruments that consist of an array of electrodes, a decoding algorithm, and an external device controlled by the processed cognitive signal Decoding algorithms: computer algorithms that interpret neural signals for the purposes of understanding their function or for providing control signals to machines",
"title": ""
},
{
"docid": "b43c4d5d97120963a3ea84a01d029819",
"text": "Research into the translation of the output of automatic speech recognition (ASR) systems is hindered by the dearth of datasets developed for that explicit purpose. For SpanishEnglish translation, in particular, most parallel data available exists only in vastly different domains and registers. In order to support research on cross-lingual speech applications, we introduce the Fisher and Callhome Spanish-English Speech Translation Corpus, supplementing existing LDC audio and transcripts with (a) ASR 1-best, lattice, and oracle output produced by the Kaldi recognition system and (b) English translations obtained on Amazon’s Mechanical Turk. The result is a four-way parallel dataset of Spanish audio, transcriptions, ASR lattices, and English translations of approximately 38 hours of speech, with defined training, development, and held-out test sets. We conduct baseline machine translation experiments using models trained on the provided training data, and validate the dataset by corroborating a number of known results in the field, including the utility of in-domain (information, conversational) training data, increased performance translating lattices (instead of recognizer 1-best output), and the relationship between word error rate and BLEU score.",
"title": ""
},
{
"docid": "1b347401820c826db444cc3580bde210",
"text": "Utilization of Natural Fibers in Plastic Composites: Problems and Opportunities Roger M. Rowell, Anand R, Sanadi, Daniel F. Caulfield and Rodney E. Jacobson Forest Products Laboratory, ESDA, One Gifford Pinchot Drive, Madison, WI 53705 Department of Forestry, 1630 Linden Drive, University of Wisconsin, WI 53706 recycled. Results suggest that agro-based fibers are a viable alternative to inorganic/material based reinforcing fibers in commodity fiber-thermoplastic composite materials as long as the right processing conditions are used and for applications where higher water absorption may be so critical. These renewable fibers hav low densities and high specific properties and their non-abrasive nature permits a high volume of filling in the composite. Kenaf fivers, for example, have excellent specific properties and have potential to be outstanding reinforcing fillers in plastics. In our experiments, several types of natural fibers were blended with polyprolylene(PP) and then injection molded, with the fiber weight fractions varying to 60%. A compatibilizer or a coupling agent was used to improve the interaction and adhesion between the non-polar matrix and the polar lignocellulosic fibers. The specific tensile and flexural moduli of a 50% by weight (39% by volume) of kenaf-PP composites compares favorably with 40% by weight of glass fiber (19% by volume)-PP injection molded composites. Furthermore, prelimimary results sugget that natural fiber-PP composites can be regrounded and",
"title": ""
},
{
"docid": "701ddde2a7ff66c6767a2978ce7293f2",
"text": "Epigenetics is the study of heritable changesin gene expression that does not involve changes to theunderlying DNA sequence, i.e. a change in phenotype notinvolved by a change in genotype. At least three mainfactor seems responsible for epigenetic change including DNAmethylation, histone modification and non-coding RNA, eachone sharing having the same property to affect the dynamicof the chromatin structure by acting on Nucleosomes position. A nucleosome is a DNA-histone complex, where around150 base pairs of double-stranded DNA is wrapped. Therole of nucleosomes is to pack the DNA into the nucleusof the Eukaryote cells, to form the Chromatin. Nucleosomepositioning plays an important role in gene regulation andseveral studies shows that distinct DNA sequence featureshave been identified to be associated with nucleosomepresence. Starting from this suggestion, the identificationof nucleosomes on a genomic scale has been successfullyperformed by DNA sequence features representation andclassical supervised classification methods such as SupportVector Machines, Logistic regression and so on. Taking inconsideration the successful application of the deep neuralnetworks on several challenging classification problems, inthis paper we want to study how deep learning network canhelp in the identification of nucleosomes.",
"title": ""
},
{
"docid": "e4ce5d47a095fcdadbe5c16bb90445d4",
"text": "Artificial neural network (ANN) has been widely applied in flood forecasting and got good results. However, it can still not go beyond one or two hidden layers for the problematic non-convex optimization. This paper proposes a deep learning approach by integrating stacked autoencoders (SAE) and back propagation neural networks (BPNN) for the prediction of stream flow, which simultaneously takes advantages of the powerful feature representation capability of SAE and superior predicting capacity of BPNN. To further improve the non-linearity simulation capability, we first classify all the data into several categories by the K-means clustering. Then, multiple SAE-BP modules are adopted to simulate their corresponding categories of data. The proposed approach is respectively compared with the support-vector-machine (SVM) model, the BP neural network model, the RBF neural network model and extreme learning machine (ELM) model. The experimental results show that the SAE-BP integrated algorithm performs much better than other benchmarks.",
"title": ""
},
{
"docid": "348f9c689c579cf07085b6e263c53ff5",
"text": "Over recent years, interest has been growing in Bitcoin, an innovation which has the potential to play an important role in e-commerce and beyond. The aim of our paper is to provide a comprehensive empirical study of the payment and investment features of Bitcoin and their implications for the conduct of ecommerce. Since network externality theory suggests that the value of a network and its take-up are interlinked, we investigate both adoption and price formation. We discover that Bitcoin returns are driven primarily by its popularity, the sentiment expressed in newspaper reports on the cryptocurrency, and total number of transactions. The paper also reports on the first global survey of merchants who have adopted this technology and model the share of sales paid for with this alternative currency, using both ordinary and Tobit regressions. Our analysis examines how country, customer and company-specific characteristics interact with the proportion of sales attributed to Bitcoin. We find that company features, use of other payment methods, customers’ knowledge about Bitcoin, as well as the size of both the official and unofficial economy are significant determinants. The results presented allow a better understanding of the practical and theoretical ramifications of this innovation.",
"title": ""
},
{
"docid": "1c079b53b0967144a183f65a16e10158",
"text": "Android has provided dynamic code loading (DCL) since API level one. DCL allows an app developer to load additional code at runtime. DCL raises numerous challenges with regards to security and accountability analysis of apps. While previous studies have investigated DCL on Android, in this paper we formulate and answer three critical questions that are missing from previous studies: (1) Where does the loaded code come from (remotely fetched or locally packaged), and who is the responsible entity to invoke its functionality? (2) In what ways is DCL utilized to harden mobile apps, specifically, application obfuscation? (3) What are the security risks and implications that can be found from DCL in off-the-shelf apps? We design and implement DYDROID, a system which uses both dynamic and static analysis to analyze dynamically loaded code. Dynamic analysis is used to automatically exercise apps, capture DCL behavior, and intercept the loaded code. Static analysis is used to investigate malicious behavior and privacy leakage in that dynamically loaded code. We have used DYDROID to analyze over 46K apps with little manual intervention, allowing us to conduct a large-scale measurement to investigate five aspects of DCL, such as source identification, malware detection, vulnerability analysis, obfuscation analysis, and privacy tracking analysis. We have several interesting findings. (1) 27 apps are found to violate the content policy of Google Play by executing code downloaded from remote servers. (2) We determine the distribution, pros/cons, and implications of several common obfuscation methods, including DEX encryption/loading. (3) DCL’s stealthiness enables it to be a channel to deploy malware, and we find 87 apps loading malicious binaries which are not detected by existing antivirus tools. (4) We found 14 apps that are vulnerable to code injection attacks due to dynamically loading code which is writable by other apps. (5) DCL is mainly used by third-party SDKs, meaning that app developers may not know what sort of sensitive functionality is injected into their apps.",
"title": ""
},
{
"docid": "f5658fe48ecc31e72fbfbcb12f843a44",
"text": "PURPOSE OF REVIEW\nThe current review discusses the integration of guideline and evidence-based palliative care into heart failure end-of-life (EOL) care.\n\n\nRECENT FINDINGS\nNorth American and European heart failure societies recommend the integration of palliative care into heart failure programs. Advance care planning, shared decision-making, routine measurement of symptoms and quality of life and specialist palliative care at heart failure EOL are identified as key components to an effective heart failure palliative care program. There is limited evidence to support the effectiveness of the individual elements. However, results from the palliative care in heart failure trial suggest an integrated heart failure palliative care program can significantly improve quality of life for heart failure patients at EOL.\n\n\nSUMMARY\nIntegration of a palliative approach to heart failure EOL care helps to ensure patients receive the care that is congruent with their values, wishes and preferences. Specialist palliative care referrals are limited to those who are truly at heart failure EOL.",
"title": ""
},
{
"docid": "c88f3c3b6bf8ad80b20216caf1a7cad6",
"text": "This study examined the effects of heavy resistance training on physiological acute exercise-induced fatigue (5 × 10 RM leg press) changes after two loading protocols with the same relative intensity (%) (5 × 10 RMRel) and the same absolute load (kg) (5 × 10 RMAbs) as in pretraining in men (n = 12). Exercise-induced neuromuscular (maximal strength and muscle power output), acute cytokine and hormonal adaptations (i.e., total and free testosterone, cortisol, growth hormone (GH), insulin-like growth factor-1 (IGF-1), IGF binding protein-3 (IGFBP-3), interleukin-1 receptor antagonist (IL-1ra), IL-1β, IL-6, and IL-10 and metabolic responses (i.e., blood lactate) were measured before and after exercise. The resistance training induced similar acute responses in serum cortisol concentration but increased responses in anabolic hormones of FT and GH, as well as inflammation-responsive cytokine IL-6 and the anti-inflammatory cytokine IL-10, when the same relative load was used. This response was balanced by a higher release of pro-inflammatory cytokines IL-1β and cytokine inhibitors (IL-1ra) when both the same relative and absolute load was used after training. This enhanced hormonal and cytokine response to strength exercise at a given relative exercise intensity after strength training occurred with greater accumulated fatigue and metabolic demand (i.e., blood lactate accumulation). The magnitude of metabolic demand or the fatigue experienced during the resistance exercise session influences the hormonal and cytokine response patterns. Similar relative intensities may elicit not only higher exercise-induced fatigue but also an increased acute hormonal and cytokine response during the initial phase of a resistance training period.",
"title": ""
},
{
"docid": "f4535d47191caaa1e830e5d8fae6e1ba",
"text": "Automated Lymph Node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast of surrounding structures in Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely distributed locations. State-of-the-art studies show the performance range of 52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this paper, we first operate a preliminary candidate generation stage, towards -100% sensitivity at the cost of high FP levels (-40 per patient), to harvest volumes of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by resampling 2D reformatted orthogonal views N times, via scale, random translations, and rotations with respect to the VOI centroid coordinates. These random views are then used to train a deep Convolutional Neural Network (CNN) classifier. In testing, the CNN is employed to assign LN probabilities for all N random views that can be simply averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in mediastinum and abdomen respectively, which drastically improves over the previous state-of-the-art work.",
"title": ""
},
{
"docid": "da9a6e165744245fd19ab788790c37c9",
"text": "Worldwide medicinal use of cannabis is rapidly escalating, despite limited evidence of its efficacy from preclinical and clinical studies. Here we show that cannabidiol (CBD) effectively reduced seizures and autistic-like social deficits in a well-validated mouse genetic model of Dravet syndrome (DS), a severe childhood epilepsy disorder caused by loss-of-function mutations in the brain voltage-gated sodium channel NaV1.1. The duration and severity of thermally induced seizures and the frequency of spontaneous seizures were substantially decreased. Treatment with lower doses of CBD also improved autistic-like social interaction deficits in DS mice. Phenotypic rescue was associated with restoration of the excitability of inhibitory interneurons in the hippocampal dentate gyrus, an important area for seizure propagation. Reduced excitability of dentate granule neurons in response to strong depolarizing stimuli was also observed. The beneficial effects of CBD on inhibitory neurotransmission were mimicked and occluded by an antagonist of GPR55, suggesting that therapeutic effects of CBD are mediated through this lipid-activated G protein-coupled receptor. Our results provide critical preclinical evidence supporting treatment of epilepsy and autistic-like behaviors linked to DS with CBD. We also introduce antagonism of GPR55 as a potential therapeutic approach by illustrating its beneficial effects in DS mice. Our study provides essential preclinical evidence needed to build a sound scientific basis for increased medicinal use of CBD.",
"title": ""
},
{
"docid": "d6cb714b47b056e1aea8ef0682f4ae51",
"text": "Arti cial neural networks are being used with increasing frequency for high dimensional problems of regression or classi cation. This article provides a tutorial overview of neural networks, focusing on back propagation networks as a method for approximating nonlinear multivariable functions. We explain, from a statistician's vantage point, why neural networks might be attractive and how they compare to other modern regression techniques.",
"title": ""
}
] | scidocsrr |
e095a3f8b3f574aa8111915f4094dc1a | Securing Embedded User Interfaces: Android and Beyond | [
{
"docid": "b7758121f5c24dd87e6c5fd795140066",
"text": "Conflicts between security and usability goals can be avoided by considering the goals together throughout an iterative design process. A successful design involves addressing users' expectations and inferring authorization based on their acts of designation.",
"title": ""
}
] | [
{
"docid": "3f5083aca7cb8952ba5bf421cb34fab6",
"text": "Thyroid gland is butterfly shaped organ which consists of two cone lobes and belongs to the endocrine system. It lies in front of the neck below the adams apple. Thyroid disorders are some kind of abnormalities in thyroid gland which can give rise to nodules like hypothyroidism, hyperthyroidism, goiter, benign and malignant etc. Ultrasound (US) is one among the hugely used modality to detect the thyroid disorders because it has some benefits over other techniques like non-invasiveness, low cost, free of ionizing radiations etc. This paper provides a concise overview about segmentation of thyroid nodules and importance of neural networks comparative to other techniques.",
"title": ""
},
{
"docid": "62bf93deeb73fab74004cb3ced106bac",
"text": "Since the publication of the Design Patterns book, a large number of object-oriented design patterns have been identified and codified. As part of the pattern form, objectoriented design patterns must indicate their relationships with other patterns, but these relationships are typically described very briefly, and different collections of patterns describe different relationships in different ways. In this paper we describe and classify the common relationships between object oriented design patterns. Practitioners can use these relationships to help them identity those patterns which may be applicable to a particular problem, and pattern writers can use these relationships to help them integrate new patterns into the body of the patterns literature.",
"title": ""
},
{
"docid": "8a13bb1aa34da7284fc1777e2d23ca5e",
"text": "By using a sparse representation or low-rank representation of data, the graph-based subspace clustering has recently attracted considerable attention in computer vision, given its capability and efficiency in clustering data. However, the graph weights built using the representation coefficients are not the exact ones as the traditional definition is in a deterministic way. The two steps of representation and clustering are conducted in an independent manner, thus an overall optimal result cannot be guaranteed. Furthermore, it is unclear how the clustering performance will be affected by using this graph. For example, the graph parameters, i.e., the weights on edges, have to be artificially pre-specified while it is very difficult to choose the optimum. To this end, in this paper, a novel subspace clustering via learning an adaptive low-rank graph affinity matrix is proposed, where the affinity matrix and the representation coefficients are learned in a unified framework. As such, the pre-computed graph regularizer is effectively obviated and better performance can be achieved. Experimental results on several famous databases demonstrate that the proposed method performs better against the state-of-the-art approaches, in clustering.",
"title": ""
},
{
"docid": "e2f57214cd2ec7b109563d60d354a70f",
"text": "Despite the recent successes in machine learning, there remain many open challenges. Arguably one of the most important and interesting open research problems is that of data efficiency. Supervised machine learning models, and especially deep neural networks, are notoriously data hungry, often requiring millions of labeled examples to achieve desired performance. However, labeled data is often expensive or difficult to obtain, hindering advances in interesting and important domains. What avenues might we pursue to increase the data efficiency of machine learning models? One approach is semi-supervised learning. In contrast to labeled data, unlabeled data is often easy and inexpensive to obtain. Semi-supervised learning is concerned with leveraging unlabeled data to improve performance in supervised tasks. Another approach is active learning: in the presence of a labeling mechanism (oracle), how can we choose examples to be labeled in a way that maximizes the gain in performance? In this thesis we are concerned with developing models that enable us to improve data efficiency of powerful models by jointly pursuing both of these approaches. Deep generative models parameterized by neural networks have emerged recently as powerful and flexible tools for unsupervised learning. They are especially useful for modeling high-dimensional and complex data. We propose a deep generative model with a discriminative component. By including the discriminative component in the model, after training is complete the model is used for classification rather than variational approximations. The model further includes stochastic inputs of arbitrary dimension for increased flexibility and expressiveness. We leverage the stochastic layer to learn representations of the data which naturally accommodate semi-supervised learning. We develop an efficient Gibbs sampling procedure to marginalize the stochastic inputs while inferring labels. We extend the model to include uncertainty in the weights, allowing us to explicitly capture model uncertainty, and demonstrate how this allows us to use the model for active learning as well as semi-supervised learning. I would like to dedicate this thesis to my loving wife, parents, and sister . . .",
"title": ""
},
{
"docid": "e442b7944062f6201e779aa1e7d6c247",
"text": "We present pigeo, a Python geolocation prediction tool that predicts a location for a given text input or Twitter user. We discuss the design, implementation and application of pigeo, and empirically evaluate it. pigeo is able to geolocate informal text and is a very useful tool for users who require a free and easy-to-use, yet accurate geolocation service based on pre-trained models. Additionally, users can train their own models easily using pigeo’s API.",
"title": ""
},
{
"docid": "93325e6f1c13889fb2573f4631d021a5",
"text": "The difference between a computer game and a simulator can be a small one both require the same capabilities from the computer: realistic graphics, behavior consistent with the laws of physics, a variety of scenarios where difficulties can emerge, and some assessment technique to inform users of performance. Computer games are a multi-billion dollar industry in the United States, and as the production costs and complexity of games have increased, so has the effort to make their creation easier. Commercial software products have been developed to greatly simpl ify the game-making process, allowing developers to focus on content rather than on programming. This paper investigates Unity3D game creation software for making threedimensional engine-room simulators. Unity3D is arguably the best software product for game creation, and has been used for numerous popular and successful commercial games. Maritime universities could greatly benefit from making custom simulators to fit specific applications and requirements, as well as from reducing the cost of purchasing simulators. We use Unity3D to make a three-dimensional steam turbine simulator that achieves a high degree of realism. The user can walk around the turbine, open and close valves, activate pumps, and run the turbine. Turbine operating parameters such as RPM, condenser vacuum, lube oil temperature. and governor status are monitored. In addition, the program keeps a log of any errors made by the operator. We find that with the use of Unity3D, students and faculty are able to make custom three-dimensional ship and engine room simulators that can be used as training and evaluation tools.",
"title": ""
},
{
"docid": "1a66727305984ae359648e4bd3e75ba2",
"text": "Self-organizing models constitute valuable tools for data visualization, clustering, and data mining. Here, we focus on extensions of basic vector-based models by recursive computation in such a way that sequential and tree-structured data can be processed directly. The aim of this article is to give a unified review of important models recently proposed in literature, to investigate fundamental mathematical properties of these models, and to compare the approaches by experiments. We first review several models proposed in literature from a unifying perspective, thereby making use of an underlying general framework which also includes supervised recurrent and recursive models as special cases. We shortly discuss how the models can be related to different neuron lattices. Then, we investigate theoretical properties of the models in detail: we explicitly formalize how structures are internally stored in different context models and which similarity measures are induced by the recursive mapping onto the structures. We assess the representational capabilities of the models, and we shortly discuss the issues of topology preservation and noise tolerance. The models are compared in an experiment with time series data. Finally, we add an experiment for one context model for tree-structured data to demonstrate the capability to process complex structures.",
"title": ""
},
{
"docid": "f64390896e5529f676484b9b0f4eab84",
"text": "Identifying the object that attracts human visual attention is an essential function for automatic services in smart environments. However, existing solutions can compute the gaze direction without providing the distance to the target. In addition, most of them rely on special devices or infrastructure support. This paper explores the possibility of using a smartphone to detect the visual attention of a user. By applying the proposed VADS system, acquiring the location of the intended object only requires one simple action: gazing at the intended object and holding up the smartphone so that the object as well as user's face can be simultaneously captured by the front and rear cameras. We extend the current advances of computer vision to develop efficient algorithms to obtain the distance between the camera and user, the user's gaze direction, and the object's direction from camera. The object's location can then be computed by solving a trigonometric problem. VADS has been prototyped on commercial off-the-shelf (COTS) devices. Extensive evaluation results show that VADS achieves low error (about 1.5° in angle and 0.15m in distance for objects within 12m) as well as short latency. We believe that VADS enables a large variety of applications in smart environments.",
"title": ""
},
{
"docid": "8eb907b00933dfa59c95b919dd0579e9",
"text": "Human eye gaze is a strong candidate to create a new application area based on human-computer interaction. To implement a really practical gaze-based interaction system, gaze detection must be realized without placing any restriction on the user's behavior or comfort. This paper describes a gaze tracking system that offers freehead, simple personal calibration. It does not require the user wear anything on her head, and she can move her head freely. Personal calibration takes only a very short time; the user is asked to look at two markers on the screen. An experiment shows that the accuracy of the implemented system is about 1.0 degrees (view angle).",
"title": ""
},
{
"docid": "73d9e6a019b45639927752bdc4070876",
"text": "An increasingly important challenge in data analytics is dirty data in the form of missing, duplicate, incorrect, or inconsistent values. In the SampleClean project, we have developed a new suite of algorithms to estimate the results of different types of analytic queries after applying data cleaning only to a sample. First, this article describes methods for computing statistically bounded estimates of SUM, COUNT, and AVG queries from samples of data corrupted with duplications and incorrect values. Some types of data error, such as duplication, can affect sampling probabilities so results have to be re-weighted to compensate for biases. Then it presents an application of these query processing and data cleaning methods to materialized views maintenance. The view cleaning algorithm applies hashing to efficiently maintain a uniform sample of rows in a materialized view, and then dirty data query processing techniques to correct stale query results. Finally, the article describes a gradient-descent algorithm that extends this idea to the increasingly common Machine Learning-based analytics.",
"title": ""
},
{
"docid": "a8a4bad208ee585ae4b4a0b3c5afe97a",
"text": "English-speaking children with specific language impairment (SLI) are known to have particular difficulty with the acquisition of grammatical morphemes that carry tense and agreement features, such as the past tense -ed and third-person singular present -s. In this study, an Extended Optional Infinitive (EOI) account of SLI is evaluated. In this account, -ed, -s, BE, and DO are regarded as finiteness markers. This model predicts that finiteness markers are omitted for an extended period of time for nonimpaired children, and that this period will be extended for a longer time in children with SLI. At the same time, it predicts that if finiteness markers are present, they will be used correctly. These predictions are tested in this study. Subjects were 18 5-year-old children with SLI with expressive and receptive language deficits and two comparison groups of children developing language normally: 22 CA-equivalent (5N) and 20 younger, MLU-equivalent children (3N). It was found that the children with SLI used nonfinite forms of lexical verbs, or omitted BE and DO, more frequently than children in the 5N and 3N groups. At the same time, like the normally developing children, when the children with SLI marked finiteness, they did so appropriately. Most strikingly, the SLI group was highly accurate in marking agreement on BE and DO forms. The findings are discussed in terms of the predictions of the EOI model, in comparison to other models of the grammatical limitations of children with SLI.",
"title": ""
},
{
"docid": "b2a9264030e56595024ce0e02da6c73f",
"text": "Traditional citation analysis has been widely applied to detect patterns of scientific collaboration, map the landscapes of scholarly disciplines, assess the impact of research outputs, and observe knowledge transfer across domains. It is, however, limited, as it assumes all citations are of similar value and weights each equally. Content-based citation analysis (CCA) addresses a citation’s value by interpreting each one based on its context at both the syntactic and semantic levels. This paper provides a comprehensive overview of CAA research in terms of its theoretical foundations, methodical approaches, and example applications. In addition, we highlight how increased computational capabilities and publicly available full-text resources have opened this area of research to vast possibilities, which enable deeper citation analysis, more accurate citation prediction, and increased knowledge discovery.",
"title": ""
},
{
"docid": "725bfdbd65a62d3d7ac50fee087d752f",
"text": "BACKGROUND\nIndividuals with autism spectrum disorders (ASDs) often display symptoms from other diagnostic categories. Studies of clinical and psychosocial outcome in adult patients with ASDs without concomitant intellectual disability are few. The objective of this paper is to describe the clinical psychiatric presentation and important outcome measures of a large group of normal-intelligence adult patients with ASDs.\n\n\nMETHODS\nAutistic symptomatology according to the DSM-IV-criteria and the Gillberg & Gillberg research criteria, patterns of comorbid psychopathology and psychosocial outcome were assessed in 122 consecutively referred adults with normal intelligence ASDs. The subjects consisted of 5 patients with autistic disorder (AD), 67 with Asperger's disorder (AS) and 50 with pervasive developmental disorder not otherwise specified (PDD NOS). This study group consists of subjects pooled from two studies with highly similar protocols, all seen on an outpatient basis by one of three clinicians.\n\n\nRESULTS\nCore autistic symptoms were highly prevalent in all ASD subgroups. Though AD subjects had the most pervasive problems, restrictions in non-verbal communication were common across all three subgroups and, contrary to current DSM criteria, so were verbal communication deficits. Lifetime psychiatric axis I comorbidity was very common, most notably mood and anxiety disorders, but also ADHD and psychotic disorders. The frequency of these diagnoses did not differ between the ASD subgroups or between males and females. Antisocial personality disorder and substance abuse were more common in the PDD NOS group. Of all subjects, few led an independent life and very few had ever had a long-term relationship. Female subjects more often reported having been bullied at school than male subjects.\n\n\nCONCLUSION\nASDs are clinical syndromes characterized by impaired social interaction and non-verbal communication in adulthood as well as in childhood. They also carry a high risk for co-existing mental health problems from a broad spectrum of disorders and for unfavourable psychosocial life circumstances. For the next revision of DSM, our findings especially stress the importance of careful examination of the exclusion criterion for adult patients with ASDs.",
"title": ""
},
{
"docid": "63405ca71cf052b0011106e5fda6a9ea",
"text": "Device-to-Device (D2D) communication has emerged as a promising technology for optimizing spectral efficiency in future cellular networks. D2D takes advantage of the proximity of communicating devices for efficient utilization of available resources, improving data rates, reducing latency, and increasing system capacity. The research community is actively investigating the D2D paradigm to realize its full potential and enable its smooth integration into the future cellular system architecture. Existing surveys on this paradigm largely focus on interference and resource management. We review recently proposed solutions in over explored and under explored areas in D2D. These solutions include protocols, algorithms, and architectures in D2D. Furthermore, we provide new insights on open issues in these areas. Finally, we discuss potential future research directions.",
"title": ""
},
{
"docid": "5d48cd6c8cc00aec5f7f299c346405c9",
"text": ".................................................................................................................................... iii Acknowledgments..................................................................................................................... iv Table of",
"title": ""
},
{
"docid": "d5d2e1feeb2d0bf2af49e1d044c9e26a",
"text": "ISSN: 2167-0811 (Print) 2167-082X (Online) Journal homepage: http://www.tandfonline.com/loi/rdij20 Algorithmic Transparency in the News Media Nicholas Diakopoulos & Michael Koliska To cite this article: Nicholas Diakopoulos & Michael Koliska (2016): Algorithmic Transparency in the News Media, Digital Journalism, DOI: 10.1080/21670811.2016.1208053 To link to this article: http://dx.doi.org/10.1080/21670811.2016.1208053",
"title": ""
},
{
"docid": "5a2bf6b24abcbad24f4c01847b66dd2e",
"text": "Sparse representations of text such as bag-ofwords models or extended explicit semantic analysis (ESA) representations are commonly used in many NLP applications. However, for short texts, the similarity between two such sparse vectors is not accurate due to the small term overlap. While there have been multiple proposals for dense representations of words, measuring similarity between short texts (sentences, snippets, paragraphs) requires combining these token level similarities. In this paper, we propose to combine ESA representations and word2vec representations as a way to generate denser representations and, consequently, a better similarity measure between short texts. We study three densification mechanisms that involve aligning sparse representation via many-to-many, many-to-one, and oneto-one mappings. We then show the effectiveness of these mechanisms on measuring similarity between short texts.",
"title": ""
},
{
"docid": "c8f39a710ca3362a4d892879f371b318",
"text": "While sentiment and emotion analysis has received a considerable amount of research attention, the notion of understanding and detecting the intensity of emotions is relatively less explored. This paper describes a system developed for predicting emotion intensity in tweets. Given a Twitter message, CrystalFeel uses features derived from parts-of-speech, ngrams, word embedding, and multiple affective lexicons including Opinion Lexicon, SentiStrength, AFFIN, NRC Emotion & Hash Emotion, and our in-house developed EI Lexicons to predict the degree of the intensity associated with fear, anger, sadness, and joy in the tweet. We found that including the affective lexicons-based features allowed the system to obtain strong prediction performance, while revealing interesting emotion word-level and message-level associations. On gold test data, CrystalFeel obtained Pearson correlations of .717 on average emotion intensity and of .816 on sentiment intensity.",
"title": ""
},
{
"docid": "7a3053844afda6f06785058f1dda4648",
"text": "Mutation analysis evaluates a testing technique by measur- ing how well it detects seeded faults (mutants). Mutation analysis is hampered by inherent scalability problems — a test suite is executed for each of a large number of mutants. Despite numerous optimizations presented in the literature, this scalability issue remains, and this is one of the reasons why mutation analysis is hardly used in practice. Whereas most previous optimizations attempted to stati- cally reduce the number of executions or their computational overhead, this paper exploits information available only at run time to further reduce the number of executions. First, state infection conditions can reveal — with a single test execution of the unmutated program — which mutants would lead to a different state, thus avoiding unnecessary test executions. Second, determining whether an infected execution state propagates can further reduce the number of executions. Mutants that are embedded in compound expressions may infect the state locally without affecting the outcome of the compound expression. Third, those mutants that do infect the state can be partitioned based on the resulting infected state — if two mutants lead to the same infected state, only one needs to be executed as the result of the other can be inferred. We have implemented these optimizations in the Major mu- tation framework and empirically evaluated them on 14 open source programs. The optimizations reduced the mutation analysis time by 40% on average.",
"title": ""
}
] | scidocsrr |
2196d51908364187a9c56b0f73884c8c | A fully-adaptive wideband 0.5–32.75Gb/s FPGA transceiver in 16nm FinFET CMOS technology | [
{
"docid": "09af9b0987537e54b7456fb36407ffe3",
"text": "The introduction of high-speed backplane transceivers inside FPGAs has addressed critical issues such as the ease in scalability of performance, high availability, flexible architectures, the use of standards, and rapid time to market. These have been crucial to address the ever-increasing demand for bandwidth in communication and storage systems [1-3], requiring novel techniques in receiver (RX) and clocking circuits.",
"title": ""
}
] | [
{
"docid": "c246f445b8341d2ae400a1fba2f64205",
"text": "This paper presents a novel design of cylindrical modified Luneberg lens antenna at millimeter-wave (mm-wave) frequencies in which no dielectric is needed as lens material. The cylindrical modified Luneberg lens consists of two air-filled, almost-parallel plates whose spacing continuously varies with the radius to simulate the general Luneberg's Law. A planar antipodal linearly-tapered slot antenna (ALTSA) is placed between the parallel plates at the focal position of the lens as a feed antenna. A combined ray-optics/diffraction method and CST-MWS are used to analyze and design this lens antenna. Measured results of a fabricated cylindrical modified Luneberg lens with a diameter of 100 mm show good agreement with theoretical predictions. At the design frequency of 30 GHz, the measured 3-dB E- and H-plane beamwidths are 8.6° and 68°, respectively. The first sidelobe level in the E-plane is -20 dB, and the cross-polarization is -28 dB below peak. The measured aperture efficiency is 68% at 30 GHz, and varies between 50% and 71% over the tested frequency band of 29-32 GHz. Due to its rotational symmetry, this lens can be used to launch multiple beams by implementing an arc array of planar ALTSA elements at the periphery of the lens. A 21-element antenna array with a -3-D dB beam crossover and a scan angle of 180° is demonstrated. The measured overall scan coverage is up to ±80° with gain drop less than -3 dB.",
"title": ""
},
{
"docid": "7d44a9227848baaf54b9bfb736727551",
"text": "Introduction: The causal relation between tongue thrust swallowing or habit and development of anterior open bite continues to be made in clinical orthodontics yet studies suggest a lack of evidence to support a cause and effect. Treatment continues to be directed towards closing the anterior open bite frequently with surgical intervention to reposition the maxilla and mandible. This case report illustrates a highly successful non-surgical orthodontic treatment without extractions.",
"title": ""
},
{
"docid": "09ee1b6d80facc1c21248e855f17a17d",
"text": "AIM\nTo examine the relationship between calf circumference and muscle mass, and to evaluate the suitability of calf circumference as a surrogate marker of muscle mass for the diagnosis of sarcopenia among middle-aged and older Japanese men and women.\n\n\nMETHODS\nA total of 526 adults aged 40-89 years participated in the present cross-sectional study. The maximum calf circumference was measured in a standing position. Appendicular skeletal muscle mass was measured using dual-energy X-ray absorptiometry, and the skeletal muscle index was calculated as appendicular skeletal muscle mass divided by the square of the height (kg/m(2)). The cut-off values for sarcopenia were defined as a skeletal muscle index of less than -2 standard deviations of the mean value for Japanese young adults, as defined previously.\n\n\nRESULTS\nCalf circumference was positively correlated with appendicular skeletal muscle (r = 0.81 in men, r = 0.73 in women) and skeletal muscle index (r = 0.80 in men, r = 0.69 in women). In receiver operating characteristic analysis, the optimal calf circumference cut-off values for predicting sarcopenia were 34 cm (sensitivity 88%, specificity 91%) in men and 33 cm (sensitivity 76%, specificity 73%) in women.\n\n\nCONCLUSIONS\nCalf circumference was positively correlated with appendicular skeletal muscle mass and skeletal muscle index, and could be used as a surrogate marker of muscle mass for diagnosing sarcopenia. The suggested cut-off values of calf circumference for predicting low muscle mass are <34 cm in men and <33 cm in women.",
"title": ""
},
{
"docid": "abd5a7566cefd263be3c082b4974c1e6",
"text": "Interconnect architectures which leverage high-bandwidth optical channels offer a promising solution to address the increasing chip-to-chip I/O bandwidth demands. This paper describes a dense, high-speed, and low-power CMOS optical interconnect transceiver architecture. Vertical-cavity surface-emitting laser (VCSEL) data rate is extended for a given average current and corresponding reliability level with a four-tap current summing FIR transmitter. A low-voltage integrating and double-sampling optical receiver front-end provides adequate sensitivity in a power efficient manner by avoiding linear high-gain elements common in conventional transimpedance-amplifier (TIA) receivers. Clock recovery is performed with a dual-loop architecture which employs baud-rate phase detection and feedback interpolation to achieve reduced power consumption, while high-precision phase spacing is ensured at both the transmitter and receiver through adjustable delay clock buffers. A prototype chip fabricated in 1 V 90 nm CMOS achieves 16 Gb/s operation while consuming 129 mW and occupying 0.105 mm2.",
"title": ""
},
{
"docid": "2059db0707ffc28fd62b7387ba6d09ae",
"text": "Embedded quantization is a mechanism employed by many lossy image codecs to progressively refine the distortion of a (transformed) image. Currently, the most common approach to do so in the context of wavelet-based image coding is to couple uniform scalar deadzone quantization (USDQ) with bitplane coding (BPC). USDQ+BPC is convenient for its practicality and has proved to achieve competitive coding performance. But the quantizer established by this scheme does not allow major variations. This paper introduces a multistage quantization scheme named general embedded quantization (GEQ) that provides more flexibility to the quantizer. GEQ schemes can be devised for specific decoding rates achieving optimal coding performance. Practical approaches of GEQ schemes achieve coding performance similar to that of USDQ+BPC while requiring fewer quantization stages. The performance achieved by GEQ is evaluated in this paper through experimental results carried out in the framework of modern image coding systems.",
"title": ""
},
{
"docid": "c2177b7e3cdca3800b3d465229835949",
"text": "BACKGROUND\nIn 2010, the World Health Organization published benchmarks for training in osteopathy in which osteopathic visceral techniques are included. The purpose of this study was to identify and critically appraise the scientific literature concerning the reliability of diagnosis and the clinical efficacy of techniques used in visceral osteopathy.\n\n\nMETHODS\nDatabases MEDLINE, OSTMED.DR, the Cochrane Library, Osteopathic Research Web, Google Scholar, Journal of American Osteopathic Association (JAOA) website, International Journal of Osteopathic Medicine (IJOM) website, and the catalog of Académie d'ostéopathie de France website were searched through December 2017. Only inter-rater reliability studies including at least two raters or the intra-rater reliability studies including at least two assessments by the same rater were included. For efficacy studies, only randomized-controlled-trials (RCT) or crossover studies on unhealthy subjects (any condition, duration and outcome) were included. Risk of bias was determined using a modified version of the quality appraisal tool for studies of diagnostic reliability (QAREL) in reliability studies. For the efficacy studies, the Cochrane risk of bias tool was used to assess their methodological design. Two authors performed data extraction and analysis.\n\n\nRESULTS\nEight reliability studies and six efficacy studies were included. The analysis of reliability studies shows that the diagnostic techniques used in visceral osteopathy are unreliable. Regarding efficacy studies, the least biased study shows no significant difference for the main outcome. The main risks of bias found in the included studies were due to the absence of blinding of the examiners, an unsuitable statistical method or an absence of primary study outcome.\n\n\nCONCLUSIONS\nThe results of the systematic review lead us to conclude that well-conducted and sound evidence on the reliability and the efficacy of techniques in visceral osteopathy is absent.\n\n\nTRIAL REGISTRATION\nThe review is registered PROSPERO 12th of December 2016. Registration number is CRD4201605286 .",
"title": ""
},
{
"docid": "1675208fd7adefb20784a7708d655763",
"text": "The number of crime incidents that is reported per day in India is increasing dramatically. The criminals today use various advanced technologies and commit crimes in really tactful ways. This makes crime investigation a more complicated process. Thus the police officers have to perform a lot of manual tasks to get a thread for investigation. This paper deals with the study of data mining based systems for analyzing crime information and thus automates the crime investigation procedure of the police officers. The majority of these frameworks utilize a blend of data mining methods such as clustering and classification for the effective investigation of the criminal acts.",
"title": ""
},
{
"docid": "976507b0b89c2202ab603ccedae253f5",
"text": "We present a natural language generator based on the sequence-to-sequence approach that can be trained to produce natural language strings as well as deep syntax dependency trees from input dialogue acts, and we use it to directly compare two-step generation with separate sentence planning and surface realization stages to a joint, one-step approach. We were able to train both setups successfully using very little training data. The joint setup offers better performance, surpassing state-of-the-art with regards to ngram-based scores while providing more relevant outputs.",
"title": ""
},
{
"docid": "fdb88cbc66d6eccb76cfbecdaf596c77",
"text": "Recent studies show that more than 86% of Internet paths allow well-designed TCP extensions, meaning that it is still possible to deploy transport layer improvements despite the existence of middleboxes in the network. Hence, the blame for the slow evolution of protocols (with extensions taking many years to nbecome widely used) should be placed on end systems.\n In this paper, we revisit the case for moving protocols stacks up into user space in order to ease the deployment of new protocols, extensions, or performance optimizations. We present MultiStack, operating system support for user-level protocol stacks. MultiStack runs within commodity operating systems, can concurrently host a large number of isolated stacks, has a fall-back path to the legacy host stack, and is able to process packets at rates of 10Gb/s.\n We validate our design by showing that our mux/demux layer can validate and switch packets at line rate (up to 14.88 Mpps) on a 10 Gbit port using 1-2 cores, and that a proof-of-concept HTTP server running over a basic userspace TCP outperforms by 18-90% both the same server and nginx running over the kernel's stack.",
"title": ""
},
{
"docid": "28f8be68a0fe4762af272a0e11d53f7d",
"text": "In this article, we address the cross-domain (i.e., street and shop) clothing retrieval problem and investigate its real-world applications for online clothing shopping. It is a challenging problem due to the large discrepancy between street and shop domain images. We focus on learning an effective feature-embedding model to generate robust and discriminative feature representation across domains. Existing triplet embedding models achieve promising results by finding an embedding metric in which the distance between negative pairs is larger than the distance between positive pairs plus a margin. However, existing methods do not address the challenges in the cross-domain clothing retrieval scenario sufficiently. First, the intradomain and cross-domain data relationships need to be considered simultaneously. Second, the number of matched and nonmatched cross-domain pairs are unbalanced. To address these challenges, we propose a deep cross-triplet embedding algorithm together with a cross-triplet sampling strategy. The extensive experimental evaluations demonstrate the effectiveness of the proposed algorithms well. Furthermore, we investigate two novel online shopping applications, clothing trying on and accessories recommendation, based on a unified cross-domain clothing retrieval framework.",
"title": ""
},
{
"docid": "5816f70a7f4d7d0beb6e0653db962df3",
"text": "Packaging appearance is extremely important in cigarette manufacturing. Typically, there are two types of cigarette packaging defects: (1) cigarette laying defects such as incorrect cigarette numbers and irregular layout; (2) tin paper handle defects such as folded paper handles. In this paper, an automated vision-based defect inspection system is designed for cigarettes packaged in tin containers. The first type of defects is inspected by counting the number of cigarettes in a tin container. First k-means clustering is performed to segment cigarette regions. After noise filtering, valid cigarette regions are identified by estimating individual cigarette area using linear regression. The k clustering centers and area estimation function are learned off-line on training images. The second kind of defect is detected by checking the segmented paper handle region. Experimental results on 500 test images demonstrate the effectiveness of the proposed inspection system. The proposed method also contributes to the general detection and classification system such as identifying mitosis in early diagnosis of cervical cancer.",
"title": ""
},
{
"docid": "37a6f3773aebf46cc40266b8bb5692af",
"text": "The theory of myofascial pain syndrome (MPS) caused by trigger points (TrPs) seeks to explain the phenomena of muscle pain and tenderness in the absence of evidence for local nociception. Although it lacks external validity, many practitioners have uncritically accepted the diagnosis of MPS and its system of treatment. Furthermore, rheumatologists have implicated TrPs in the pathogenesis of chronic widespread pain (FM syndrome). We have critically examined the evidence for the existence of myofascial TrPs as putative pathological entities and for the vicious cycles that are said to maintain them. We find that both are inventions that have no scientific basis, whether from experimental approaches that interrogate the suspect tissue or empirical approaches that assess the outcome of treatments predicated on presumed pathology. Therefore, the theory of MPS caused by TrPs has been refuted. This is not to deny the existence of the clinical phenomena themselves, for which scientifically sound and logically plausible explanations based on known neurophysiological phenomena can be advanced.",
"title": ""
},
{
"docid": "8a339bdfd3966e56b0132ca82c2eb824",
"text": "This paper introduces a novel spectral framework for solving Markov decision processes (MDPs) by jointly learning representations and optimal policies. The major components of the framework described in this paper include: (i) A general scheme for constructing representations or basis functions by diagonalizing symmetric diffusion operators (ii) A specific instantiation of this approach where global basis functions called proto-value functions (PVFs) are formed using the eigenvectors of the graph Laplacian on an undirected graph formed from state transitions induced by the MDP (iii) A three-phased procedure called representation policy iteration comprising of a sample collection phase, a representation learning phase that constructs basis functions from samples, and a final parameter estimation phase that determines an (approximately) optimal policy within the (linear) subspace spanned by the (current) basis functions. (iv) A specific instantiation of the RPI framework using least-squares policy iteration (LSPI) as the parameter estimation method (v) Several strategies for scaling the proposed approach to large discrete and continuous state spaces, including the Nyström extension for out-of-sample interpolation of eigenfunctions, and the use of Kronecker sum factorization to construct compact eigenfunctions in product spaces such as factored MDPs (vi) Finally, a series of illustrative discrete and continuous control tasks, which both illustrate the concepts and provide a benchmark for evaluating the proposed approach. Many challenges remain to be addressed in scaling the proposed framework to large MDPs, and several elaboration of the proposed framework are briefly summarized at the end.",
"title": ""
},
{
"docid": "0da299fb53db5980a10e0ae8699d2209",
"text": "Modern heuristics or metaheuristics are optimization algorithms that have been increasingly used during the last decades to support complex decision-making in a number of fields, such as logistics and transportation, telecommunication networks, bioinformatics, finance, and the like. The continuous increase in computing power, together with advancements in metaheuristics frameworks and parallelization strategies, are empowering these types of algorithms as one of the best alternatives to solve rich and real-life combinatorial optimization problems that arise in a number of financial and banking activities. This article reviews some of the works related to the use of metaheuristics in solving both classical and emergent problems in the finance arena. A non-exhaustive list of examples includes rich portfolio optimization, index tracking, enhanced indexation, credit risk, stock investments, financial project scheduling, option pricing, feature selection, bankruptcy and financial distress prediction, and credit risk assessment. This article also discusses some open opportunities for researchers in the field, and forecast the evolution of metaheuristics to include real-life uncertainty conditions into the optimization problems being considered.",
"title": ""
},
{
"docid": "f2b3643ca7a9a1759f038f15847d7617",
"text": "Despite significant advances in image segmentation techniques, evaluation of these techniques thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images and is otherwise left to subjective evaluation by the reader. Little effort has been spent on the design of perceptually correct measures to compare an automatic segmentation of an image to a set of hand-segmented examples of the same image. This paper demonstrates how a modification of the Rand index, the Normalized Probabilistic Rand (NPR) index, meets the requirements of largescale performance evaluation of image segmentation. We show that the measure has a clear probabilistic interpretation as the maximum likelihood estimator of an underlying Gibbs model, can be correctly normalized to account for the inherent similarity in a set of ground truth images, and can be computed efficiently for large datasets. Results are presented on images from the publicly available Berkeley Segmentation dataset.",
"title": ""
},
{
"docid": "679e7b448f0b3bc2f1713cdb852ac6b2",
"text": "There are many advantages of using high frequency PWM (in the range of 50 to 100 kHz) in motor drive applications. High motor efficiency, fast control response, lower motor torque ripple, close to ideal sinusoidal motor current waveform, smaller filter size, lower cost filter, etc. are a few of the advantages. However, higher frequency PWM is also associated with severe voltage reflection and motor insulation breakdown issues at the motor terminals. If standard Si IGBT based inverters are employed, losses in the switches make it difficult to overcome significant drop in efficiency of converting electrical power to mechanical power. Work on SiC and GaN based inverter has progressed and variable frequency drives (VFDs) can now be operated efficiently at carrier frequencies in the 50 to 200 kHz range, using these devices. Using soft magnetic material, the overall efficiency of filtering can be improved. The switching characteristics of SiC and GaN devices are such that even at high switching frequency, the turn on and turn off losses are minimal. Hence, there is not much penalty in increasing the carrier frequency of the VFD. Losses in AC motors due to PWM waveform are significantly reduced. All the above features put together improves system efficiency. This paper presents results obtained on using a 6-in-1 GaN module for VFD application, operating at a carrier frequency of 100 kHz with an output sine wave filter. Experimental results show the improvement in motor efficiency and system efficiency on using a GaN based VFD in comparison to the standard Si IGBT based VFD.",
"title": ""
},
{
"docid": "72e6d897e8852fca481d39237cf04e36",
"text": "CONTEXT\nPrimary care physicians report high levels of distress, which is linked to burnout, attrition, and poorer quality of care. Programs to reduce burnout before it results in impairment are rare; data on these programs are scarce.\n\n\nOBJECTIVE\nTo determine whether an intensive educational program in mindfulness, communication, and self-awareness is associated with improvement in primary care physicians' well-being, psychological distress, burnout, and capacity for relating to patients.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nBefore-and-after study of 70 primary care physicians in Rochester, New York, in a continuing medical education (CME) course in 2007-2008. The course included mindfulness meditation, self-awareness exercises, narratives about meaningful clinical experiences, appreciative interviews, didactic material, and discussion. An 8-week intensive phase (2.5 h/wk, 7-hour retreat) was followed by a 10-month maintenance phase (2.5 h/mo).\n\n\nMAIN OUTCOME MEASURES\nMindfulness (2 subscales), burnout (3 subscales), empathy (3 subscales), psychosocial orientation, personality (5 factors), and mood (6 subscales) measured at baseline and at 2, 12, and 15 months.\n\n\nRESULTS\nOver the course of the program and follow-up, participants demonstrated improvements in mindfulness (raw score, 45.2 to 54.1; raw score change [Delta], 8.9; 95% confidence interval [CI], 7.0 to 10.8); burnout (emotional exhaustion, 26.8 to 20.0; Delta = -6.8; 95% CI, -4.8 to -8.8; depersonalization, 8.4 to 5.9; Delta = -2.5; 95% CI, -1.4 to -3.6; and personal accomplishment, 40.2 to 42.6; Delta = 2.4; 95% CI, 1.2 to 3.6); empathy (116.6 to 121.2; Delta = 4.6; 95% CI, 2.2 to 7.0); physician belief scale (76.7 to 72.6; Delta = -4.1; 95% CI, -1.8 to -6.4); total mood disturbance (33.2 to 16.1; Delta = -17.1; 95% CI, -11 to -23.2), and personality (conscientiousness, 6.5 to 6.8; Delta = 0.3; 95% CI, 0.1 to 5 and emotional stability, 6.1 to 6.6; Delta = 0.5; 95% CI, 0.3 to 0.7). Improvements in mindfulness were correlated with improvements in total mood disturbance (r = -0.39, P < .001), perspective taking subscale of physician empathy (r = 0.31, P < .001), burnout (emotional exhaustion and personal accomplishment subscales, r = -0.32 and 0.33, respectively; P < .001), and personality factors (conscientiousness and emotional stability, r = 0.29 and 0.25, respectively; P < .001).\n\n\nCONCLUSIONS\nParticipation in a mindful communication program was associated with short-term and sustained improvements in well-being and attitudes associated with patient-centered care. Because before-and-after designs limit inferences about intervention effects, these findings warrant randomized trials involving a variety of practicing physicians.",
"title": ""
},
{
"docid": "11d1a8d8cd9fdabfbdc77d4a0accf007",
"text": "Blockchain technology like Bitcoin is a rapidly growing field of research which has found a wide array of applications. However, the power consumption of the mining process in the Bitcoin blockchain alone is estimated to be at least as high as the electricity consumption of Ireland which constitutes a serious liability to the widespread adoption of blockchain technology. We propose a novel instantiation of a proof of human-work which is a cryptographic proof that an amount of human work has been exercised, and show its use in the mining process of a blockchain. Next to our instantiation there is only one other instantiation known which relies on indistinguishability obfuscation, a cryptographic primitive whose existence is only conjectured. In contrast, our construction is based on the cryptographic principle of multiparty computation (which we use in a black box manner) and thus is the first known feasible proof of human-work scheme. Our blockchain mining algorithm called uMine, can be regarded as an alternative energy-efficient approach to mining.",
"title": ""
},
{
"docid": "b418470025d74d745e75225861a1ed7e",
"text": "The brain which is composed of more than 100 billion nerve cells is a sophisticated biochemical factory. For many years, neurologists, psychotherapists, researchers, and other health care professionals have studied the human brain. With the development of computer and information technology, it makes brain complex spectrum analysis to be possible and opens a highlight field for the study of brain science. In the present work, observation and exploring study of the activities of brain under brainwave music stimulus are systemically made by experimental and spectrum analysis technology. From our results, the power of the 10.5Hz brainwave appears in the experimental figures, it was proved that upper alpha band is entrained under the special brainwave music. According to the Mozart effect and the analysis of improving memory performance, the results confirm that upper alpha band is indeed related to the improvement of learning efficiency.",
"title": ""
},
{
"docid": "6416eb9235954730b8788b7b744d9e5b",
"text": "This paper presents a machine learning based handover management scheme for LTE to improve the Quality of Experience (QoE) of the user in the presence of obstacles. We show that, in this scenario, a state-of-the-art handover algorithm is unable to select the appropriate target cell for handover, since it always selects the target cell with the strongest signal without taking into account the perceived QoE of the user after the handover. In contrast, our scheme learns from past experience how the QoE of the user is affected when the handover was done to a certain eNB. Our performance evaluation shows that the proposed scheme substantially improves the number of completed downloads and the average download time compared to state-of-the-art. Furthermore, its performance is close to an optimal approach in the coverage region affected by an obstacle.",
"title": ""
}
] | scidocsrr |
e5acf9f83c5142fe6b9a57179ce7787b | Friending your way up the ladder: Connecting massive multiplayer online game behaviors with offline leadership | [
{
"docid": "90e76229ff20e253d8d28e09aad432dc",
"text": "Playing online games is experience-oriented but few studies have explored the user’s initial (trial) reaction to game playing and how this further influences a player’s behavior. Drawing upon the Uses and Gratifications theory, we investigated players’ multiple gratifications for playing (i.e. achievement, enjoyment and social interaction) and their experience with the service mechanisms offered after they had played an online game. This study explores the important antecedents of players’ proactive ‘‘stickiness” to a specific online game and examines the relationships among these antecedents. The results show that both the gratifications and service mechanisms significantly affect a player’s continued motivation to play, which is crucial to a player’s proactive stickiness to an online game. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "b8be5a7904829b247436fa9c544110a6",
"text": "Realization of Randomness had always been a controversial concept with great importance both from theoretical and practical Perspectives. This realization has been revolutionized in the light of recent studies especially in the realms of Chaos Theory, Algorithmic Information Theory and Emergent behavior in complex systems. We briefly discuss different definitions of Randomness and also different methods for generating it. The connection between all these approaches and the notion of Normality as the necessary condition of being unpredictable would be discussed. Then a complex-system-based Random Number Generator would be introduced. We will analyze its paradoxical features (Conservative Nature and reversibility in spite of having considerable variation) by using information theoretic measures in connection with other measures. The evolution of this Random Generator is equivalent to the evolution of its probabilistic description in terms of probability distribution over blocks of different lengths. By getting the aid of simulations we will show the ability of this system to preserve normality during the process of coarse graining. Keywords—Random number generators; entropy; correlation information; elementary cellular automata; reversibility",
"title": ""
},
{
"docid": "6c4b027910830aea8e679720232cacf4",
"text": "In this paper we introduce a new, high-quality, dataset of images containing fruits. We also present the results of some numerical experiment for training a neural network to detect fruits. We discuss the reason why we chose to use fruits in this project by proposing a few applications that could use such classifier.",
"title": ""
},
{
"docid": "d04a6ca9c09b8c10daf64c9f7830c992",
"text": "Slave servo clocks have an essential role in hardware and software synchronization techniques based on Precision Time Protocol (PTP). The objective of servo clocks is to remove the drift between slave and master nodes, while keeping the output timing jitter within given uncertainty boundaries. Up to now, no univocal criteria exist for servo clock design. In fact, the relationship between controller design, performances and uncertainty sources is quite evanescent. In this paper, we propose a quite simple, but exhaustive linear model, which is expected to be used in the design of enhanced servo clock architectures.",
"title": ""
},
{
"docid": "bc4b1b48794f9db934c705ef3821cdcf",
"text": "Expanding access to financial services holds the promise to help reduce poverty and spur economic development. But, as a practical matter, commercial banks have faced challenges expanding access to poor and low-income households in developing economies, and nonprofits have had limited reach. We review recent innovations that are improving the quantity and quality of financial access. They are taking possibilities well beyond early models centered on providing “microcredit” for small business investment. We focus on new credit mechanisms and devices that help households manage cash flows, save, and cope with risk. Our eye is on contract designs, product innovations, regulatory policy, and ultimately economic and social impacts. We relate the innovations and empirical evidence to theoretical ideas, drawing links in particular to new work in behavioral economics and to randomized evaluation methods.",
"title": ""
},
{
"docid": "c26eabb377db5f1033ec6d354d890a6f",
"text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.",
"title": ""
},
{
"docid": "8e6debae3b3d3394e87e671a14f8819e",
"text": "Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.",
"title": ""
},
{
"docid": "c1ca7ef76472258c6359111dd4d014d5",
"text": "Online forums contain huge amounts of valuable user-generated content. In current forum systems, users have to passively wait for other users to visit the forum systems and read/answer their questions. The user experience for question answering suffers from this arrangement. In this paper, we address the problem of \"pushing\" the right questions to the right persons, the objective being to obtain quick, high-quality answers, thus improving user satisfaction. We propose a framework for the efficient and effective routing of a given question to the top-k potential experts (users) in a forum, by utilizing both the content and structures of the forum system. First, we compute the expertise of users according to the content of the forum system—-this is to estimate the probability of a user being an expert for a given question based on the previous question answering of the user. Specifically, we design three models for this task, including a profile-based model, a thread-based model, and a cluster-based model. Second, we re-rank the user expertise measured in probability by utilizing the structural relations among users in a forum system. The results of the two steps can be integrated naturally in a probabilistic model that computes a final ranking score for each user. Experimental results show that the proposals are very promising.",
"title": ""
},
{
"docid": "15a56973f3751dbc069fe62cd076682c",
"text": "The software QBlade under General Public License is used for analysis and design of wind turbines. QBlade uses the Blade Element Momentum (BEM) method for the simulation of wind turbines and it is integrated with the XFOIL airfoil design and analysis. It is possible to predict wind turbine performance with it. Nowadays, Computational Fluid Dynamics (CFD) is used for optimization and design of turbine application. In this study, Horizontal wind turbine with a rotor diameter of 2 m, was designed and objected to performance analysis by QBlade and Ansys-Fluent. The graphic of the power coefficient vs. tip speed ratio (TSR) was obtained for each result. When the results are compared, the good agreement has been seen.",
"title": ""
},
{
"docid": "13150a58d86b796213501d26e4b41e5b",
"text": "In this work, CoMoO4@NiMoO4·xH2O core-shell heterostructure electrode is directly grown on carbon fabric (CF) via a feasible hydrothermal procedure with CoMoO4 nanowires (NWs) as the core and NiMoO4 nanosheets (NSs) as the shell. This core-shell heterostructure could provide fast ion and electron transfer, a large number of active sites, and good strain accommodation. As a result, the CoMoO4@NiMoO4·xH2O electrode yields high-capacitance performance with a high specific capacitance of 1582 F g-1, good cycling stability with the capacitance retention of 97.1% after 3000 cycles and good rate capability. The electrode also shows excellent mechanical flexibility. Also, a flexible Fe2O3 nanorods/CF electrode with enhanced electrochemical performance was prepared. A solid-state asymmetric supercapacitor device is successfully fabricated by using flexible CoMoO4@NiMoO4·xH2O as the positive electrode and Fe2O3 as the negative electrode. The asymmetric supercapacitor with a maximum voltage of 1.6 V demonstrates high specific energy (41.8 Wh kg-1 at 700 W kg-1), high power density (12000 W kg-1 at 26.7 Wh kg-1), and excellent cycle ability with the capacitance retention of 89.3% after 5000 cycles (at the current density of 3A g-1).",
"title": ""
},
{
"docid": "17642e2f5ac7d6594df72deacab332fb",
"text": "Paraphrase patterns are semantically equivalent patterns, which are useful in both paraphrase recognition and generation. This paper presents a pivot approach for extracting paraphrase patterns from bilingual parallel corpora, whereby the paraphrase patterns in English are extracted using the patterns in another language as pivots. We make use of log-linear models for computing the paraphrase likelihood between pattern pairs and exploit feature functions based on maximum likelihood estimation (MLE), lexical weighting (LW), and monolingual word alignment (MWA). Using the presented method, we extract more than 1 million pairs of paraphrase patterns from about 2 million pairs of bilingual parallel sentences. The precision of the extracted paraphrase patterns is above 78%. Experimental results show that the presented method significantly outperforms a well-known method called discovery of inference rules from text (DIRT). Additionally, the log-linear model with the proposed feature functions are effective. The extracted paraphrase patterns are fully analyzed. Especially, we found that the extracted paraphrase patterns can be classified into five types, which are useful in multiple natural language processing (NLP) applications.",
"title": ""
},
{
"docid": "b0c62e2049ea4f8ada0d506e06adb4bb",
"text": "In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches.",
"title": ""
},
{
"docid": "9c8f6dddcb9bb099eea4433534cb40da",
"text": "There has been an increasing interest in the applications of polarimctric n~icrowavc radiometers for ocean wind remote sensing. Aircraft and spaceborne radiometers have found significant wind direction signals in sea surface brightness temperatures, in addition to their sensitivities on wind speeds. However, it is not yet understood what physical scattering mechanisms produce the observed wind direction dependence. To this encl, polari]nctric microwave emissions from wind-generated sea surfaces are investigated with a polarimctric two-scale scattering model of sea surfaces, which relates the directional wind-wave spectrum to passive microwave signatures of sea surfaces. T)leoretical azimuthal modulations are found to agree well with experimental observations foI all Stokes paranletcrs from nearnadir to 65° incidence angles. The up/downwind asymmetries of brightness temperatures are interpreted usiIlg the hydrodynamic modulation. The contributions of Bragg scattering by short waves, geometric optics scattering by long waves and sea foam are examined. The geometric optics scattering mechanism underestimates the directicmal signals in the first three Stokes paranletcrs, and most importantly it predicts no signals in the fourth Stokes parameter (V), in disagreement with experimental datfi. In contrast, the Bragg scattering and and contributes to most of the wind direction signals from the two-scale model correctly predicts the phase changes of tl}e up/crosswind asymmetries in 7j U from middle to high incidence angles. The accuracy of the Bragg scattering theory for radiometric emission from water ripples is corroborated by the numerical Monte Carlo simulation of rough surface scattering. ‘I’his theoretical interpretation indicates the potential use of ]Jolarimctric brightness temperatures for retrieving the directional wave spectrum of capillary waves.",
"title": ""
},
{
"docid": "5d417375c4ce7c47a90808971f215c91",
"text": "While the RGB2GRAY conversion with fixed parameters is a classical and widely used tool for image decolorization, recent studies showed that adapting weighting parameters in a two-order multivariance polynomial model has great potential to improve the conversion ability. In this paper, by viewing the two-order model as the sum of three subspaces, it is observed that the first subspace in the two-order model has the dominating importance and the second and the third subspace can be seen as refinement. Therefore, we present a semiparametric strategy to take advantage of both the RGB2GRAY and the two-order models. In the proposed method, the RGB2GRAY result on the first subspace is treated as an immediate grayed image, and then the parameters in the second and the third subspace are optimized. Experimental results show that the proposed approach is comparable to other state-of-the-art algorithms in both quantitative evaluation and visual quality, especially for images with abundant colors and patterns. This algorithm also exhibits good resistance to noise. In addition, instead of the color contrast preserving ratio using the first-order gradient for decolorization quality metric, the color contrast correlation preserving ratio utilizing the second-order gradient is calculated as a new perceptual quality metric.",
"title": ""
},
{
"docid": "60d21d395c472eb36bdfd014c53d918a",
"text": "We introduce a fully differentiable approximation to higher-order inference for coreference resolution. Our approach uses the antecedent distribution from a span-ranking architecture as an attention mechanism to iteratively refine span representations. This enables the model to softly consider multiple hops in the predicted clusters. To alleviate the computational cost of this iterative process, we introduce a coarse-to-fine approach that incorporates a less accurate but more efficient bilinear factor, enabling more aggressive pruning without hurting accuracy. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the English OntoNotes benchmark, while being far more computationally efficient.",
"title": ""
},
{
"docid": "4fa7f7f723c2f2eee4c0e2c294273c74",
"text": "Tracking human vital signs of breathing and heart rates during sleep is important as it can help to assess the general physical health of a person and provide useful clues for diagnosing possible diseases. Traditional approaches (e.g., Polysomnography (PSG)) are limited to clinic usage. Recent radio frequency (RF) based approaches require specialized devices or dedicated wireless sensors and are only able to track breathing rate. In this work, we propose to track the vital signs of both breathing rate and heart rate during sleep by using off-the-shelf WiFi without any wearable or dedicated devices. Our system re-uses existing WiFi network and exploits the fine-grained channel information to capture the minute movements caused by breathing and heart beats. Our system thus has the potential to be widely deployed and perform continuous long-term monitoring. The developed algorithm makes use of the channel information in both time and frequency domain to estimate breathing and heart rates, and it works well when either individual or two persons are in bed. Our extensive experiments demonstrate that our system can accurately capture vital signs during sleep under realistic settings, and achieve comparable or even better performance comparing to traditional and existing approaches, which is a strong indication of providing non-invasive, continuous fine-grained vital signs monitoring without any additional cost.",
"title": ""
},
{
"docid": "e6d5781d32e76d9c5f7c4ea985568986",
"text": "We present a baseline convolutional neural network (CNN) structure and image preprocessing methodology to improve facial expression recognition algorithm using CNN. To analyze the most efficient network structure, we investigated four network structures that are known to show good performance in facial expression recognition. Moreover, we also investigated the effect of input image preprocessing methods. Five types of data input (raw, histogram equalization, isotropic smoothing, diffusion-based normalization, difference of Gaussian) were tested, and the accuracy was compared. We trained 20 different CNN models (4 networks × 5 data input types) and verified the performance of each network with test images from five different databases. The experiment result showed that a three-layer structure consisting of a simple convolutional and a max pooling layer with histogram equalization image input was the most efficient. We describe the detailed training procedure and analyze the result of the test accuracy based on considerable observation.",
"title": ""
},
{
"docid": "1e2a64369279d178ee280ed7e2c0f540",
"text": "We describe what is to our knowledge a novel technique for phase unwrapping. Several algorithms based on unwrapping the most-reliable pixels first have been proposed. These were restricted to continuous paths and were subject to difficulties in defining a starting pixel. The technique described here uses a different type of reliability function and does not follow a continuous path to perform the unwrapping operation. The technique is explained in detail and illustrated with a number of examples.",
"title": ""
},
{
"docid": "ed9d72566cdf3e353bf4b1e589bf85eb",
"text": "In the last few years progress has been made in understanding basic mechanisms involved in damage to the inner ear and various potential therapeutic approaches have been developed. It was shown that hair cell loss mediated by noise or toxic drugs may be prevented by antioxidants, inhibitors of intracellular stress pathways and neurotrophic factors/neurotransmission blockers. Moreover, there is hope that once hair cells are lost, their regeneration can be induced or that stem cells can be used to build up new hair cells. However, although tremendous progress has been made, most of the concepts discussed in this review are still in the \"animal stage\" and it is difficult to predict which approach will finally enter clinical practice. In my opinion it is highly probable that some concepts of hair cell protection will enter clinical practice first, while others, such as the use of stem cells to restore hearing, are still far from clinical utility.",
"title": ""
},
{
"docid": "f6227013273d148321cab1eef83c40e5",
"text": "The advanced features of 5G mobile wireless network systems yield new security requirements and challenges. This paper presents a comprehensive study on the security of 5G wireless network systems compared with the traditional cellular networks. The paper starts with a review on 5G wireless networks particularities as well as on the new requirements and motivations of 5G wireless security. The potential attacks and security services are summarized with the consideration of new service requirements and new use cases in 5G wireless networks. The recent development and the existing schemes for the 5G wireless security are presented based on the corresponding security services, including authentication, availability, data confidentiality, key management, and privacy. This paper further discusses the new security features involving different technologies applied to 5G, such as heterogeneous networks, device-to-device communications, massive multiple-input multiple-output, software-defined networks, and Internet of Things. Motivated by these security research and development activities, we propose a new 5G wireless security architecture, based on which the analysis of identity management and flexible authentication is provided. As a case study, we explore a handover procedure as well as a signaling load scheme to show the advantages of the proposed security architecture. The challenges and future directions of 5G wireless security are finally summarized.",
"title": ""
},
{
"docid": "c612ee4ad1b4daa030e86a59543ca53b",
"text": "The dominant approach for many NLP tasks are recurrent neura l networks, in particular LSTMs, and convolutional neural networks. However , these architectures are rather shallow in comparison to the deep convolutional n etworks which are very successful in computer vision. We present a new archite ctur for text processing which operates directly on the character level and uses o nly small convolutions and pooling operations. We are able to show that the performa nce of this model increases with the depth: using up to 29 convolutional layer s, we report significant improvements over the state-of-the-art on several public t ext classification tasks. To the best of our knowledge, this is the first time that very de ep convolutional nets have been applied to NLP.",
"title": ""
}
] | scidocsrr |
fa767556951ac4811dafd085c16dd885 | A dual-band unidirectional coplanar antenna for 2.4–5-GHz wireless applications | [
{
"docid": "9998497c000fa194bf414604ff0d69b2",
"text": "By embedding shorting vias, a dual-feed and dual-band L-probe patch antenna, with flexible frequency ratio and relatively small lateral size, is proposed. Dual resonant frequency bands are produced by two radiating patches located in different layers, with the lower patch supported by shorting vias. The measured impedance bandwidths, determined by 10 dB return loss, of the two operating bands reach 26.6% and 42.2%, respectively. Also the radiation patterns are stable over both operating bands. Simulation results are compared well with experiments. This antenna is highly suitable to be used as a base station antenna for multiband operation.",
"title": ""
},
{
"docid": "feabda74915bd452c8d7b386147ff03e",
"text": "A novel uni-planar dual-band monopole antenna capable of generating two wide bands for 2.4/5 GHz WLAN operation is presented. The antenna has a simple structure consisting of a driven strip and a coupled shorted strip. The antenna occupies a small area of 6 times 20 mm2 on an FR4 substrate. The small area allows the antenna to be easily employed in the narrow space between the top edge of the display panel and the casing of the laptop computer to operate as an internal antenna. It is believed that the size of the antenna is about the smallest among the existing uni-planar internal laptop antennas for 2.4/5 GHz WLAN operation.",
"title": ""
}
] | [
{
"docid": "2313822a08269b3dd125190c4874b808",
"text": "General-purpose knowledge bases are increasingly growing in terms of depth (content) and width (coverage). Moreover, algorithms for entity linking and entity retrieval have improved tremendously in the past years. These developments give rise to a new line of research that exploits and combines these developments for the purposes of text-centric information retrieval applications. This tutorial focuses on a) how to retrieve a set of entities for an ad-hoc query, or more broadly, assessing relevance of KB elements for the information need, b) how to annotate text with such elements, and c) how to use this information to assess the relevance of text. We discuss different kinds of information available in a knowledge graph and how to leverage each most effectively.\n We start the tutorial with a brief overview of different types of knowledge bases, their structure and information contained in popular general-purpose and domain-specific knowledge bases. In particular, we focus on the representation of entity-centric information in the knowledge base through names, terms, relations, and type taxonomies. Next, we will provide a recap on ad-hoc object retrieval from knowledge graphs as well as entity linking and retrieval. This is essential technology, which the remainder of the tutorial builds on. Next we will cover essential components within successful entity linking systems, including the collection of entity name information and techniques for disambiguation with contextual entity mentions. We will present the details of four previously proposed systems that successfully leverage knowledge bases to improve ad-hoc document retrieval. These systems combine the notion of entity retrieval and semantic search on one hand, with text retrieval models and entity linking on the other. Finally, we also touch on entity aspects and links in the knowledge graph as it can help to understand the entities' context.\n This tutorial is the first to compile, summarize, and disseminate progress in this emerging area and we provide both an overview of state-of-the-art methods and outline open research problems to encourage new contributions.",
"title": ""
},
{
"docid": "dc237f28c6e1af3d32b94f2f76070b31",
"text": "INTRODUCTION\nA \"giant\" lipoma is defined as a tumor having dimensions greater than 10 cm. Giant lipomas are rare and giant breast lipomas are exceptionally uncommon. Only six cases have been described in world literature till date. Herein we describe a case of giant breast lipoma and discuss its surgical management.\n\n\nCASE REPORT\nA 43-year-old lady presented with left sided unilateral gigantomastia. Clinical examination, radiology and histopathology diagnosed lipoma. Excision of the tumor was planned, together with correction of the breast deformity by reduction mammoplasty using McKissok technique. A tumor measuring 19 cm × 16 cm × 10 cm and weighing 1647 grams was removed. The nipple areola complex was set by infolding of the vertical pedicles and the lateral and medial flaps were approximated to create the final breast contour. The patient is doing well on follow up.\n\n\nDISCUSSION\nGiant lipomas are rare and of them, giant breast lipomas are extremely uncommon. They can grow to immense proportions and cause significant aesthetic and functional problems. The treatment is excision. But reconstruction of the breast is almost always necessary to achieve a symmetric breast in terms of volume, shape, projection and nipple areola complex symmetry compared to the normal opposite breast. Few authors have used various mammoplasty techniques for reconstruction of the breast after giant lipoma excision. Our case has the following unique features: (i) It is the third largest breast lipoma described in the literature till date, weighing 1647 grams; (ii) The Mckissock technique has been used for parenchymal reshaping which has not been previously described for giant breast lipoma.\n\n\nCONCLUSION\nThis case demonstrates that reduction mammoplasty after giant lipoma removal is highly rewarding, resulting in a smaller-sized breast that is aesthetically more pleasing, has better symmetry with the contralateral breast, and provides relief from functional mass deficit.",
"title": ""
},
{
"docid": "d9188a0e02399e6e5a18f0b34443f0ce",
"text": "Recent advances in the statistical theory of hierarchical linear models should enable important breakthroughs in the measurement of psychological change and the study of correlates of change. A two-stage model of change is proposed here. At the first, or within-subject stage, an individual's status on some trait is modeled as a function of an individual growth trajectory plus random error. At the second, or between-subjects stage, the parameters of the individual growth trajectories vary as a function of differences between subjects in background characteristics, instructional experiences, and possibly experimental treatments. This two-stage conceptualization, illustrated with data on Head Start children, allows investigators to model individual change, predict future development, assess the quality of measurement instruments for distinguishing among growth trajectories, and to study systematic variation in growth trajectories as a function of background characteristics and experimental treatments.",
"title": ""
},
{
"docid": "659736f536f23c030f6c9cd86df88d1d",
"text": "Studies of human addicts and behavioural studies in rodent models of addiction indicate that key behavioural abnormalities associated with addiction are extremely long lived. So, chronic drug exposure causes stable changes in the brain at the molecular and cellular levels that underlie these behavioural abnormalities. There has been considerable progress in identifying the mechanisms that contribute to long-lived neural and behavioural plasticity related to addiction, including drug-induced changes in gene transcription, in RNA and protein processing, and in synaptic structure. Although the specific changes identified so far are not sufficiently long lasting to account for the nearly permanent changes in behaviour associated with addiction, recent work has pointed to the types of mechanism that could be involved.",
"title": ""
},
{
"docid": "6dd39d60e6cf733692c87126bdb31e24",
"text": "Computerized microscopy image analysis plays an important role in computer aided diagnosis and prognosis. Machine learning techniques have powered many aspects of medical investigation and clinical practice. Recently, deep learning is emerging as a leading machine learning tool in computer vision and has attracted considerable attention in biomedical image analysis. In this paper, we provide a snapshot of this fast-growing field, specifically for microscopy image analysis. We briefly introduce the popular deep neural networks and summarize current deep learning achievements in various tasks, such as detection, segmentation, and classification in microscopy image analysis. In particular, we explain the architectures and the principles of convolutional neural networks, fully convolutional networks, recurrent neural networks, stacked autoencoders, and deep belief networks, and interpret their formulations or modelings for specific tasks on various microscopy images. In addition, we discuss the open challenges and the potential trends of future research in microscopy image analysis using deep learning.",
"title": ""
},
{
"docid": "8e9c75f7971d75ed72b97756356e3c2c",
"text": "We present the results from the third shared task on multimodal machine translation. In this task a source sentence in English is supplemented by an image and participating systems are required to generate a translation for such a sentence into German, French or Czech. The image can be used in addition to (or instead of) the source sentence. This year the task was extended with a third target language (Czech) and a new test set. In addition, a variant of this task was introduced with its own test set where the source sentence is given in multiple languages: English, French and German, and participating systems are required to generate a translation in Czech. Seven teams submitted 45 different systems to the two variants of the task. Compared to last year, the performance of the multimodal submissions improved, but text-only systems remain competitive.",
"title": ""
},
{
"docid": "e9dc414cc8d29e9a8a66196423041788",
"text": "In this paper, we propose a rotated bounding box based convolutional neural network (RBox-CNN) for arbitrary-oriented ship detection. RBox-CNN is an end-to-end model based on Faster R-CNN. The region proposal network generates proposals as the rotated bounding box, and then the rotation region-of-interest (RRoI) pooling layer is applied to extract region features corresponding the proposals. In addition, the diagonal region-of-interest (DRoI) pooling layer is applied simultaneously to extract context features and alleviate the problem of misalignment in RRoI pooling layer. To stably predict locations with the angle, we apply the regression of distance's projection in width/height. Experiments on HRSC2016 show that our model achieves state-of-the-art detection accuracy on ship detection. Furthermore, RBox-CNN achieves a significant improvement on DOTA for oriented general object detection in remote sensing images.",
"title": ""
},
{
"docid": "51d534721e7003cf191189be37342394",
"text": "This paper addresses the problem of automatic player identification in broadcast sports videos filmed with a single side-view medium distance camera. Player identification in this setting is a challenging task because visual cues such as faces and jersey numbers are not clearly visible. Thus, this task requires sophisticated approaches to capture distinctive features from players to distinguish them. To this end, we use Convolutional Neural Networks (CNN) features extracted at multiple scales and encode them with an advanced pooling, called Fisher vector. We leverage it for exploring representations that have sufficient discriminatory power and ability to magnify subtle differences. We also analyze the distinguishing parts of the players and present a part based pooling approach to use these distinctive feature points. The resulting player representation is able to identify players even in difficult scenes. It achieves state-of-the-art results up to 96% on NBA basketball clips.",
"title": ""
},
{
"docid": "c67b6ea4909f47f814760e7ccd38426f",
"text": "Firewalls are core elements in network security. However, managing firewall rules, especially for enterprise networks, has become complex and error-prone. Firewall filtering rules have to be carefully written and organized in order to correctly implement the security policy. In addition, inserting or modifying a filtering rule requires thorough analysis of the relationship between this rule and other rules in order to determine the proper order of this rule and commit the updates. In this paper we present a set of techniques and algorithms that provide automatic discovery of firewall policy anomalies to reveal rule conflicts and potential problems in legacy firewalls, and anomaly-free policy editing for rule insertion, removal, and modification. This is implemented in a user-friendly tool called ¿Firewall Policy Advisor.¿ The Firewall Policy Advisor significantly simplifies the management of any generic firewall policy written as filtering rules, while minimizing network vulnerability due to firewall rule misconfiguration.",
"title": ""
},
{
"docid": "8cd52cdc44c18214c471716745e3c00f",
"text": "The design of electric vehicles require a complete paradigm shift in terms of embedded systems architectures and software design techniques that are followed within the conventional automotive systems domain. It is increasingly being realized that the evolutionary approach of replacing the engine of a car by an electric engine will not be able to address issues like acceptable vehicle range, battery lifetime performance, battery management techniques, costs and weight, which are the core issues for the success of electric vehicles. While battery technology has crucial importance in the domain of electric vehicles, how these batteries are used and managed pose new problems in the area of embedded systems architecture and software for electric vehicles. At the same time, the communication and computation design challenges in electric vehicles also have to be addressed appropriately. This paper discusses some of these research challenges.",
"title": ""
},
{
"docid": "9c17325056a96f5086d324936c9f06ce",
"text": "Fingertip suction is investigated using a compliant, underactuated, tendon-driven hand designed for underwater mobile manipulation. Tendon routing and joint stiffnesses are designed to provide ease of closure while maintaining finger rigidity, allowing the hand to pinch small objects, as well as secure large objects, without diminishing strength. While the hand is designed to grasp a range of objects, the addition of light suction flow to the fingertips is especially effective for small, low-friction (slippery) objects. Numerical simulations confirm that changing suction parameters can increase the object acquisition region, providing guidelines for future versions of the hand.",
"title": ""
},
{
"docid": "924e10782437c323b8421b156db50584",
"text": "Ontology Learning greatly facilitates the construction of ontologies by the ontology engineer. The notion of ontology learning that we propose here includes a number of complementary disciplines that feed on different types of unstructured and semi-structured data in order to support a semi-automatic, cooperative ontology engineering process. Our ontology learning framework proceeds through ontology import, extraction, pruning, and refinement, giving the ontology engineer a wealth of coordinated tools for ontology modelling. Besides of the general architecture, we show in this paper some exemplary techniques in the ontology learning cycle that we have implemented in our ontology learning environment, KAON Text-To-Onto.",
"title": ""
},
{
"docid": "8987e2effbb7038bfbbcdc44ba15e8ee",
"text": "Model neurons composed of hundreds of compartments are currently used for studying phenomena at the level of the single cell. Large network simulations require a simplified model of a single neuron that retains the electrotonic and synaptic integrative properties of the real cell. We introduce a method for reducing the number of compartments of neocortical pyramidal neuron models (from 400 to 8-9 compartments) through a simple collapsing method based on conserving the axial resistance rather than on the surface area of the dendritic tree. The reduced models retain the general morphology of the pyramidal cells on which they are based, allowing accurate positioning of synaptic inputs and ionic conductances on individual model cells, as well as construction of spatially accurate network models. The reduced models run significantly faster than the full models, yet faithfully reproduce their electrical responses.",
"title": ""
},
{
"docid": "4504134d0965b077b2462540a9b6950a",
"text": "This paper presents a novel dataset for training end-to-end task oriented conversational agents. The dataset contains conversations between an operator – a task expert, and a client who seeks information about the task. Along with the conversation transcriptions, we record database API calls performed by the operator, which capture a distilled meaning of the user query. We expect that the easy-to-get supervision of database calls will allow us to train end-to-end dialogue agents with significantly less training data. The dataset is collected using crowdsourcing and the conversations cover the well-known restaurant domain. Quality of the data is enforced by mutual control among contributors. The dataset is available for download under the Creative Commons 4.0 BY-SA license.",
"title": ""
},
{
"docid": "6c4816e8c94988e9305c4361b0e7b7da",
"text": "Automotive embedded applications like the engine management system are composed of multiple functional components that are tightly coupled via numerous communication dependencies and intensive data sharing, while also having real-time requirements. In order to cope with complexity, especially in multi-core settings, various communication mechanisms are used to ensure data consistency and temporal determinism along functional cause-effect chains. However, existing timing analysis methods generally only support very basic communication models that need to be extended to handle the analysis of industry grade problems which involve more complex communication semantics. In this work, we give an overview of communication semantics used in the automotive industry and the different constraints to be considered in the design process. We also propose a method for model transformation to increase the expressiveness of current timing analysis methods enabling them to work with more complex communication semantics. We demonstrate this transformation approach for concrete implementations of two communication semantics, namely, implicit and LET communication. We discuss the impact on end-to-end latencies and communication overheads based on a full blown engine management system. 1998 ACM Subject Classification C.3 Real-Time and Embedded Systems, D.4.4 Communications Management",
"title": ""
},
{
"docid": "a7e2c35ea12a06dbd31f839297efc535",
"text": "Lane classification is a fundamental problem for autonomous driving and map-aided localization. Many existing algorithms rely on special designed 1D or 2D filters to extract features of lane markings from either color images or LiDAR data. However, these handcrafted features could not be robust under various driving and lighting conditions.\n In this paper, we propose a novel algorithm to fuse color images and LiDAR data together. Our algorithm consists of two stages. In the first stage, we segment road surfaces and register LiDAR data with the corresponding color images. In the second stage, we train convolutional neural networks (CNNs) to classify image patches into lane markings and non-markings. Comparing with the algorithms based on handcrafted features, our algorithm learns a set of kernels to extract and integrate features from two different modalities. The pixel-level classification rate in our experiments shows that our algorithm is robust to different conditions such as shadows and occlusions.",
"title": ""
},
{
"docid": "1b37c9f413f1c12d80f5995a40df4684",
"text": "Various orodispersible drug formulations have been recently introduced into the market. Oral lyophilisates and orodispersible granules, tablets or films have enriched the therapeutic options. In particular, the paediatric and geriatric population may profit from the advantages like convenient administration, lack of swallowing, ease of use. Until now, only a few novel products made it to the market as the development and production usually is more expensive than for conventional oral drug dosage forms like tablets or capsules. The review reports the recent advances, existing and upcoming products, and the significance of formulating patient-friendly oral dosage forms. The preparation of the medicines can be performed both in pharmaceutical industry and in community pharmacies. Recent advances, e.g. drug printing technologies, may facilitate this process for community or hospital pharmacies. Still, regulatory guidelines and pharmacopoeial monographs lack appropriate methods, specifications and global harmonization to foster the development of innovative orodispersible drug dosage forms.",
"title": ""
},
{
"docid": "bf9ed2160f4f3132206c1651dadb592e",
"text": "In this paper, we present a probabilistic multi-task learning approach for visual saliency estimation in video. In our approach, the problem of visual saliency estimation is modeled by simultaneously considering the stimulus-driven and task-related factors in a probabilistic framework. In this framework, a stimulus-driven component simulates the low-level processes in human vision system using multi-scale wavelet decomposition and unbiased feature competition; while a task-related component simulates the high-level processes to bias the competition of the input features. Different from existing approaches, we propose a multi-task learning algorithm to learn the task-related “stimulus-saliency” mapping functions for each scene. The algorithm also learns various fusion strategies, which are used to integrate the stimulus-driven and task-related components to obtain the visual saliency. Extensive experiments were carried out on two public eye-fixation datasets and one regional saliency dataset. Experimental results show that our approach outperforms eight state-of-the-art approaches remarkably.",
"title": ""
},
{
"docid": "7c7beabf8bcaa2af706b6c1fd92ee8dd",
"text": "In this paper, two main contributions are presented to manage the power flow between a 11 wind turbine and a solar power system. The first one is to use the fuzzy logic controller as an 12 objective to find the maximum power point tracking, applied to a hybrid wind-solar system, at fixed 13 atmospheric conditions. The second one is to response to real-time control system constraints and 14 to improve the generating system performance. For this, a hardware implementation of the 15 proposed algorithm is performed using the Xilinx system generator. The experimental results show 16 that the suggested system presents high accuracy and acceptable execution time performances. The 17 proposed model and its control strategy offer a proper tool for optimizing the hybrid power system 18 performance which we can use in smart house applications. 19",
"title": ""
}
] | scidocsrr |
5d16913ebe6d8c77e5afb7e6b29ed012 | Train Model using Network Architecture and Log Records Trained Architecture N 2 . Predict Performance of Untrained Architecture | [
{
"docid": "1862f864cc1e24346c063ebc8a9e6a59",
"text": "We focus on knowledge base construction (KBC) from richly formatted data. In contrast to KBC from text or tabular data, KBC from richly formatted data aims to extract relations conveyed jointly via textual, structural, tabular, and visual expressions. We introduce Fonduer, a machine-learning-based KBC system for richly formatted data. Fonduer presents a new data model that accounts for three challenging characteristics of richly formatted data: (1) prevalent document-level relations, (2) multimodality, and (3) data variety. Fonduer uses a new deep-learning model to automatically capture the representation (i.e., features) needed to learn how to extract relations from richly formatted data. Finally, Fonduer provides a new programming model that enables users to convert domain expertise, based on multiple modalities of information, to meaningful signals of supervision for training a KBC system. Fonduer-based KBC systems are in production for a range of use cases, including at a major online retailer. We compare Fonduer against state-of-the-art KBC approaches in four different domains. We show that Fonduer achieves an average improvement of 41 F1 points on the quality of the output knowledge base---and in some cases produces up to 1.87x the number of correct entries---compared to expert-curated public knowledge bases. We also conduct a user study to assess the usability of Fonduer's new programming model. We show that after using Fonduer for only 30 minutes, non-domain experts are able to design KBC systems that achieve on average 23 F1 points higher quality than traditional machine-learning-based KBC approaches.",
"title": ""
},
{
"docid": "a026cb81bddfa946159d02b5bb2e341d",
"text": "In this paper we are concerned with the practical issues of working with data sets common to finance, statistics, and other related fields. pandas is a new library which aims to facilitate working with these data sets and to provide a set of fundamental building blocks for implementing statistical models. We will discuss specific design issues encountered in the course of developing pandas with relevant examples and some comparisons with the R language. We conclude by discussing possible future directions for statistical computing and data analysis using Python.",
"title": ""
},
{
"docid": "7c950863f51cbce128a37e50d78ec25f",
"text": "We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches. We also present results using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the search time from 36 hours down to 1 hour.",
"title": ""
}
] | [
{
"docid": "b4cc3716abcb57b45a12c31daab8a89f",
"text": "The original ImageNet dataset is a popular large-scale benchmark for training Deep Neural Networks. Since the cost of performing experiments (e.g, algorithm design, architecture search, and hyperparameter tuning) on the original dataset might be prohibitive, we propose to consider a downsampled version of ImageNet. In contrast to the CIFAR datasets and earlier downsampled versions of ImageNet, our proposed ImageNet32x32 (and its variants ImageNet64x64 and ImageNet16x16) contains exactly the same number of classes and images as ImageNet, with the only difference that the images are downsampled to 32×32 pixels per image (64×64 and 16×16 pixels for the variants, respectively). Experiments on these downsampled variants are dramatically faster than on the original ImageNet and the characteristics of the downsampled datasets with respect to optimal hyperparameters appear to remain similar. The proposed datasets and scripts to reproduce our results are available at http://image-net.org/download-images and https://github.com/PatrykChrabaszcz/Imagenet32_Scripts",
"title": ""
},
{
"docid": "f17f2e754149474ea879711dc5bcd087",
"text": "In grasping, shape adaptation between hand and object has a major influence on grasp success. In this paper, we present an approach to grasping unknown objects that explicitly considers the effect of shape adaptability to simplify perception. Shape adaptation also occurs between the hand and the environment, for example, when fingers slide across the surface of the table to pick up a small object. Our approach to grasping also considers environmental shape adaptability to select grasps with high probability of success. We validate the proposed shape-adaptability-aware grasping approach in 880 real-world grasping trials with 30 objects. Our experiments show that the explicit consideration of shape adaptability of the hand leads to robust grasping of unknown objects. Simple perception suffices to achieve this robust grasping behavior.",
"title": ""
},
{
"docid": "3ca057959a24245764953a6aa1b2ed84",
"text": "Distant supervision for relation extraction is an efficient method to scale relation extraction to very large corpora which contains thousands of relations. However, the existing approaches have flaws on selecting valid instances and lack of background knowledge about the entities. In this paper, we propose a sentence-level attention model to select the valid instances, which makes full use of the supervision information from knowledge bases. And we extract entity descriptions from Freebase and Wikipedia pages to supplement background knowledge for our task. The background knowledge not only provides more information for predicting relations, but also brings better entity representations for the attention module. We conduct three experiments on a widely used dataset and the experimental results show that our approach outperforms all the baseline systems significantly.",
"title": ""
},
{
"docid": "a62a23df11fd72522a3d9726b60d4497",
"text": "In this paper, a simple single-phase grid-connected photovoltaic (PV) inverter topology consisting of a boost section, a low-voltage single-phase inverter with an inductive filter, and a step-up transformer interfacing the grid is considered. Ideally, this topology will not inject any lower order harmonics into the grid due to high-frequency pulse width modulation operation. However, the nonideal factors in the system such as core saturation-induced distorted magnetizing current of the transformer and the dead time of the inverter, etc., contribute to a significant amount of lower order harmonics in the grid current. A novel design of inverter current control that mitigates lower order harmonics is presented in this paper. An adaptive harmonic compensation technique and its design are proposed for the lower order harmonic compensation. In addition, a proportional-resonant-integral (PRI) controller and its design are also proposed. This controller eliminates the dc component in the control system, which introduces even harmonics in the grid current in the topology considered. The dynamics of the system due to the interaction between the PRI controller and the adaptive compensation scheme is also analyzed. The complete design has been validated with experimental results and good agreement with theoretical analysis of the overall system is observed.",
"title": ""
},
{
"docid": "1d8e2c9bd9cfa2ce283e01cbbcd6ca83",
"text": "Deep neural networks (DNNs) are vulnerable to adversarial examples, perturbations to correctly classified examples which can cause the model to misclassify. In the image domain, these perturbations are often virtually indistinguishable to human perception, causing humans and state-of-the-art models to disagree. However, in the natural language domain, small perturbations are clearly perceptible, and the replacement of a single word can drastically alter the semantics of the document. Given these challenges, we use a black-box population-based optimization algorithm to generate semantically and syntactically similar adversarial examples that fool well-trained sentiment analysis and textual entailment models with success rates of 97% and 70%, respectively. We additionally demonstrate that 92.3% of the successful sentiment analysis adversarial examples are classified to their original label by 20 human annotators, and that the examples are perceptibly quite similar. Finally, we discuss an attempt to use adversarial training as a defense, but fail to yield improvement, demonstrating the strength and diversity of our adversarial examples. We hope our findings encourage researchers to pursue improving the robustness of DNNs in the natural language domain.",
"title": ""
},
{
"docid": "8a7a8de5cae191a4493e5a0e4f34bbf1",
"text": "B-spline surfaces, although widely used, are incapable of describing surfaces of arbitrary topology. It is not possible to model a general closed surface or a surface with handles as a single non-degenerate B-spline. In practice such surfaces are often needed. In this paper, we present generalizations of biquadratic and bicubic B-spline surfaces that are capable of capturing surfaces of arbitrary topology (although restrictions are placed on the connectivity of the control mesh). These results are obtained by relaxing the sufficient but not necessary smoothness constraints imposed by B-splines and through the use of an n-sided generalization of Bézier surfaces called S-patches.",
"title": ""
},
{
"docid": "87552ea79b92986de3ce5306ef0266bc",
"text": "This paper presents a novel secondary frequency and voltage control method for islanded microgrids based on distributed cooperative control. The proposed method utilizes a sparse communication network where each DG unit only requires local and its neighbors’ information to perform control actions. The frequency controller restores the system frequency to the nominal value while maintaining the equal generation cost increment value among DG units. The voltage controller simultaneously achieves the critical bus voltage restoration and accurate reactive power sharing. Subsequently, the case when the DG unit ac-side voltage reaches its limit value is discussed and a controller output limitation method is correspondingly provided to selectively realize the desired control objective. This paper also provides a small-signal dynamic model of the microgrid with the proposed controller to evaluate the system dynamic performance. Finally, simulation results on a microgrid test system are presented to validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "1cdee228f9813e4f33df1706ec4e7876",
"text": "Existing methods on sketch based image retrieval (SBIR) are usually based on the hand-crafted features whose ability of representation is limited. In this paper, we propose a sketch based image retrieval method via image-aided cross domain learning. First, the deep learning model is introduced to learn the discriminative features. However, it needs a large number of images to train the deep model, which is not suitable for the sketch images. Thus, we propose to extend the sketch training images via introducing the real images. Specifically, we initialize the deep models with extra image data, and then extract the generalized boundary from real images as the sketch approximation. The using of generalized boundary is under the assumption that their domain is similar with sketch domain. Finally, the neural network is fine-tuned with the sketch approximation data. Experimental results on Flicker15 show that the proposed method has a strong ability to link the associated image-sketch pairs and the results outperform state-of-the-arts methods.",
"title": ""
},
{
"docid": "dc817bc11276d76f8d97f67e4b1b2155",
"text": "Abstract A Security Operation Center (SOC) is made up of five distinct modules: event generators, event collectors, message database, analysis engines and reaction management software. The main problem encountered when building a SOC is the integration of all these modules, usually built as autonomous parts, while matching availability, integrity and security of data and their transmission channels. In this paper we will discuss the functional architecture needed to integrate those modules. Chapter one will introduce the concepts behind each module and briefly describe common problems encountered with each of them. In chapter two we will design the global architecture of the SOC. We will then focus on collection & analysis of data generated by sensors in chapters three and four. A short conclusion will describe further research & analysis to be performed in the field of SOC design.",
"title": ""
},
{
"docid": "ad389d8ee2c45746c3a44c7e0f86de40",
"text": "Deep Convolutional Neural Networks (CNN) have recently been shown to outperform previous state of the art approaches for image classification. Their success must in parts be attributed to the availability of large labeled training sets such as provided by the ImageNet benchmarking initiative. When training data is scarce, however, CNNs have proven to fail to learn descriptive features. Recent research shows that supervised pre-training on external data followed by domain-specific fine-tuning yields a significant performance boost when external data and target domain show similar visual characteristics. Transfer-learning from a base task to a highly dissimilar target task, however, has not yet been fully investigated. In this paper, we analyze the performance of different feature representations for classification of paintings into art epochs. Specifically, we evaluate the impact of training set sizes on CNNs trained with and without external data and compare the obtained models to linear models based on Improved Fisher Encodings. Our results underline the superior performance of fine-tuned CNNs but likewise propose Fisher Encodings in scenarios were training data is limited.",
"title": ""
},
{
"docid": "9c510d7ddeb964c5d762d63d9e284f44",
"text": "This paper explains the rationale for the development of reconfigurable manufacturing systems, which possess the advantages both of dedicated lines and of flexible systems. The paper defines the core characteristics and design principles of reconfigurable manufacturing systems (RMS) and describes the structure recommended for practical RMS with RMS core characteristics. After that, a rigorous mathematical method is introduced for designing RMS with this recommended structure. An example is provided to demonstrate how this RMS design method is used. The paper concludes with a discussion of reconfigurable assembly systems. © 2011 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1f3f352c7584fb6ec1924ca3621fb1fb",
"text": "The National Firearms Forensic Intelligence Database (NFFID (c) Crown Copyright 2003-2008) was developed by The Forensic Science Service (FSS) as an investigative tool for collating and comparing information from items submitted to the FSS to provide intelligence reports for the police and relevant government agencies. The purpose of these intelligence reports was to highlight current firearm and ammunition trends and their distribution within the country. This study reviews all the trends that have been highlighted by NFFID between September 2003 and September 2008. A total of 8887 guns of all types have been submitted to the FSS over the last 5 years, where an average of 21% of annual submissions are converted weapons. The makes, models, and modes of conversion of these weapons are described in detail. The number of trends identified by NFFID shows that this has been a valuable tool in the analysis of firearms-related crime.",
"title": ""
},
{
"docid": "66b9ad378e1444a6d5a1284a2a036296",
"text": "The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. Both systems were highly consistent for FACS action units (AUs) and showed similar effects for change over time in depression severity. When symptom severity was high, participants made fewer affiliative facial expressions (AUs 12 and 15) and more non-affiliative facial expressions (AU 14). Participants also exhibited diminished head motion (i.e., amplitude and velocity) when symptom severity was high. These results are consistent with the Social Withdrawal hypothesis: that depressed individuals use nonverbal behavior to maintain or increase interpersonal distance. As individuals recover, they send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and revealed the same pattern of findings suggests that automatic facial expression analysis may be ready to relieve the burden of manual coding in behavioral and clinical science.",
"title": ""
},
{
"docid": "f7d06c6f2313417fd2795ce4c4402f0e",
"text": "Decades of research suggest that similarity in demographics, values, activities, and attitudes predicts higher marital satisfaction. The present study examined the relationship between similarity in Big Five personality factors and initial levels and 12-year trajectories of marital satisfaction in long-term couples, who were in their 40s and 60s at the beginning of the study. Across the entire sample, greater overall personality similarity predicted more negative slopes in marital satisfaction trajectories. In addition, spousal similarity on Conscientiousness and Extraversion more strongly predicted negative marital satisfaction outcomes among the midlife sample than among the older sample. Results are discussed in terms of the different life tasks faced by young, midlife, and older adults, and the implications of these tasks for the \"ingredients\" of marital satisfaction.",
"title": ""
},
{
"docid": "9a1cb9ebd7bfb9eb10898602399cd304",
"text": "HBase is a distributed column-oriented database built on top of HDFS. HBase is the Hadoop application to use when you require real-time read/write random access to very large datasets. HBase is a scalable data store targeted at random read and write access of (fairly-) structured data. It's modeled after Google's Big table and targeted to support large tables, on the order of billions of rows and millions of columns. It uses HDFS as the underlying file system and is designed to be fully distributed and highly available. Version 0.20 introduces significant performance improvement. Base's Table Input Format is designed to allow a Map Reduce program to operate on data stored in an HBase table. Table Output Format is for writing Map Reduce outputs into an HBase table. HBase has different storage characteristics than HDFS, such as the ability to do row updates and column indexing, so we can expect to see these features used by Hive in future releases. It is already possible to access HBase tables from Hive. This paper includes the step by step introduction to the HBase, Identify differences between apache HBase and a traditional RDBMS, The Problem with Relational Database Systems, Relation between the Hadoop and HBase, How an Apache HBase table is physically stored on disk. Later part of this paper introduces Map Reduce, HBase table and how Apache HBase Cells stores data, what happens to data when it is deleted. Last part explains difference between Big Data and HBase, Conclusion followed with the References.",
"title": ""
},
{
"docid": "06614a4d74d2d059944b9487f2966ff4",
"text": "In web search, relevance ranking of popular pages is relatively easy, because of the inclusion of strong signals such as anchor text and search log data. In contrast, with less popular pages, relevance ranking becomes very challenging due to a lack of information. In this paper the former is referred to as head pages, and the latter tail pages. We address the challenge by learning a model that can extract search-focused key n-grams from web pages, and using the key n-grams for searches of the pages, particularly, the tail pages. To the best of our knowledge, this problem has not been previously studied. Our approach has four characteristics. First, key n-grams are search-focused in the sense that they are defined as those which can compose \"good queries\" for searching the page. Second, key n-grams are learned in a relative sense using learning to rank techniques. Third, key n-grams are learned using search log data, such that the characteristics of key n-grams in the search log data, particularly in the heads; can be applied to the other data, particularly to the tails. Fourth, the extracted key n-grams are used as features of the relevance ranking model also trained with learning to rank techniques. Experiments validate the effectiveness of the proposed approach with large-scale web search datasets. The results show that our approach can significantly improve relevance ranking performance on both heads and tails; and particularly tails, compared with baseline approaches. Characteristics of our approach have also been fully investigated through comprehensive experiments.",
"title": ""
},
{
"docid": "a6758cba3f52ca27de6434c02987be9e",
"text": "This article addresses one of the key challenges of engaging a massive ad hoc crowd by providing sustainable incentives. The incentive model is based on a context-aware cyber-physical spatio-temporal serious game with the help of a mobile crowd sensing mechanism. To this end, this article describes a framework that can create an ad hoc social network of millions of people and provide context-aware serious-game services as an incentive. While interacting with different services, the massive crowd shares a rich trail of geo-tagged multimedia data, which acts as a crowdsourcing eco-system. The incentive model has been tested on the mass crowd at the Hajj since 2014. From our observations, we conclude that the framework provides a sustainable incentive mechanism that can solve many real-life problems such as reaching a person in a crowd within the shortest possible time, isolating significant events, finding lost individuals, handling emergency situations, helping pilgrims to perform ritual events based on location and time, and sharing geo-tagged multimedia resources among a community of interest within the crowd. The framework allows an ad hoc social network to be formed within a very large crowd, a community of interests to be created for each person, and information to be shared with the right community of interests. We present the communication paradigm of the framework, the serious game incentive model, and cloud-based massive geo-tagged social network architecture.",
"title": ""
},
{
"docid": "fe1f07a8a39cdb5bcdd868d6fc9f89a0",
"text": "The design of a high-gain microstrip grid array antenna (MGAA) for 24-GHz automotive radar sensor applications is first presented. An amplitude tapering technique utilizing variable line width on the individual radiating element is then applied to lower sidelobe level. Next, the MGAA is simplified to a microstrip comb array antenna (MCAA). The MCAA shows broader impedance bandwidth and lower cross-polarization radiation as compared with those of the MGAA. The MCAA is designed not as a travelling-wave but a standing-wave antenna. As a result, the match load and the reflection-cancelling structure can be avoided, which is important, especially in the millimeter-wave frequencies. Finally, an emphasis is given to 45° linearly-polarized MCAA because the radiation with the orthogonal polarization from cars coming from the opposite direction does not affect the radar operation.",
"title": ""
},
{
"docid": "b6b5afb72393e89c211bac283e39d8a3",
"text": "In order to promote the use of mushrooms as source of nutrients and nutraceuticals, several experiments were performed in wild and commercial species. The analysis of nutrients included determination of proteins, fats, ash, and carbohydrates, particularly sugars by HPLC-RI. The analysis of nutraceuticals included determination of fatty acids by GC-FID, and other phytochemicals such as tocopherols, by HPLC-fluorescence, and phenolics, flavonoids, carotenoids and ascorbic acid, by spectrophotometer techniques. The antimicrobial properties of the mushrooms were also screened against fungi, Gram positive and Gram negative bacteria. The wild mushroom species proved to be less energetic than the commercial sp., containing higher contents of protein and lower fat concentrations. In general, commercial species seem to have higher concentrations of sugars, while wild sp. contained lower values of MUFA but also higher contents of PUFA. alpha-Tocopherol was detected in higher amounts in the wild species, while gamma-tocopherol was not found in these species. Wild mushrooms revealed a higher content of phenols but a lower content of ascorbic acid, than commercial mushrooms. There were no differences between the antimicrobial properties of wild and commercial species. The ongoing research will lead to a new generation of foods, and will certainly promote their nutritional and medicinal use.",
"title": ""
},
{
"docid": "857d8003dff05b8e1ba5eeb8f6b3c14e",
"text": "Traditional static spectrum allocation policies have been to grant each wireless service exclusive usage of certain frequency bands, leaving several spectrum bands unlicensed for industrial, scientific and medical purposes. The rapid proliferation of low-cost wireless applications in unlicensed spectrum bands has resulted in spectrum scarcity among those bands. Since most applications in Wireless Sensor Networks (WSNs) utilize the unlicensed spectrum, network-wide performance of WSNs will inevitably degrade as their popularity increases. Sharing of under-utilized licensed spectrum among unlicensed devices is a promising solution to the spectrum scarcity issue. Cognitive Radio (CR) is a new paradigm in wireless communication that allows sensor nodes as the unlicensed users or Secondary Users (SUs) to detect and use the under-utilized licensed spectrum temporarily. Given that the licensed or Primary Users (PUs) are oblivious to the presence of SUs, the SUs access the licensed spectrum opportunistically without interfering the PUs, while improving their own performance. In this paper, we propose an approach to build Cognitive Radio-based Wireless Sensor Networks (CR-WSNs). We believe that CR-WSN is the next-generation WSN. Realizing that both WSNs and CR present unique challenges to the design of CR-WSNs, we provide an overview and conceptual design of WSNs from the perspective of CR. The open issues are discussed to motivate new research interests in this field. We also present our method to achieving context-awareness and intelligence, which are the key components in CR networks, to address an open issue in CR-WSN.",
"title": ""
}
] | scidocsrr |
97a6ba2b4cfe9b96377e57559cc35430 | Orchestrating Caching, Transcoding and Request Routing for Adaptive Video Streaming Over ICN | [
{
"docid": "d0253bb3efe714e6a34e8dd5fc7dcf81",
"text": "ICN has received a lot of attention in recent years, and is a promising approach for the Future Internet design. As multimedia is the dominating traffic in today's and (most likely) the Future Internet, it is important to consider this type of data transmission in the context of ICN. In particular, the adaptive streaming of multimedia content is a promising approach for usage within ICN, as the client has full control over the streaming session and has the possibility to adapt the multimedia stream to its context (e.g. network conditions, device capabilities), which is compatible with the paradigms adopted by ICN. In this article we investigate the implementation of adaptive multimedia streaming within networks adopting the ICN approach. In particular, we present our approach based on the recently ratified ISO/IEC MPEG standard Dynamic Adaptive Streaming over HTTP and the ICN representative Content-Centric Networking, including baseline evaluations and open research challenges.",
"title": ""
}
] | [
{
"docid": "8b3042021e48c86873e00d646f65b052",
"text": "We derive a numerical method for Darcy flow, hence also for Poisson’s equation in first order form, based on discrete exterior calculus (DEC). Exterior calculus is a generalization of vector calculus to smooth manifolds and DEC is its discretization on simplicial complexes such as triangle and tetrahedral meshes. We start by rewriting the governing equations of Darcy flow using the language of exterior calculus. This yields a formulation in terms of flux differential form and pressure. The numerical method is then derived by using the framework provided by DEC for discretizing differential forms and operators that act on forms. We also develop a discretization for spatially dependent Hodge star that varies with the permeability of the medium. This also allows us to address discontinuous permeability. The matrix representation for our discrete non-homogeneous Hodge star is diagonal, with positive diagonal entries. The resulting linear system of equations for flux and pressure are saddle type, with a diagonal matrix as the top left block. Our method requires the use of meshes in which each simplex contains its circumcenter. The performance of the proposed numerical method is illustrated on many standard test problems. These include patch tests in two and three dimensions, comparison with analytically known solution in two dimensions, layered medium with alternating permeability values, and a test with a change in permeability along the flow direction. A short introduction to the relevant parts of smooth and discrete exterior calculus is included in this paper. We also include a discussion of the boundary condition in terms of exterior calculus.",
"title": ""
},
{
"docid": "80759a5c2e60b444ed96c9efd515cbdf",
"text": "The Web of Things is an active research field which aims at promoting the easy access and handling of smart things' digital representations through the adoption of Web standards and technologies. While huge research and development efforts have been spent on lower level networks and software technologies, it has been recognized that little experience exists instead in modeling and building applications for the Web of Things. Although several works have proposed Representational State Transfer (REST) inspired approaches for the Web of Things, a main limitation is that poor support is provided to web developers for speeding up the development of Web of Things applications while taking full advantage of REST benefits. In this paper, we propose a framework which supports developers in modeling smart things as web resources, exposing them through RESTful Application Programming Interfaces (APIs) and developing applications on top of them. The framework consists of a Web Resource information model, a middleware, and tools for developing and publishing smart things' digital representations on the Web. We discuss the framework compliance with REST guidelines and its major implementation choices. Finally, we report on our test activities carried out within the SmartSantander European Project to evaluate the use and proficiency of our framework in a smart city scenario.",
"title": ""
},
{
"docid": "e6d359934523ed73b2f9f2ac66fd6096",
"text": "We investigate a novel and important application domain for deep RL: network routing. The question of whether/when traditional network protocol design, which relies on the application of algorithmic insights by human experts, can be replaced by a data-driven approach has received much attention recently. We explore this question in the context of the, arguably, most fundamental networking task: routing. Can ideas and techniques from machine learning be leveraged to automatically generate “good” routing configurations? We observe that the routing domain poses significant challenges for data-driven network protocol design and report on preliminary results regarding the power of data-driven routing. Our results suggest that applying deep reinforcement learning to this context yields high performance and is thus a promising direction for further research. We outline a research agenda for data-driven routing.",
"title": ""
},
{
"docid": "ea5e08627706532504b9beb6f4dc6650",
"text": "This paper highlights the role that reinforcement learning can play in the optimization of treatment policies for chronic illnesses. Before applying any off-the-shelf reinforcement learning methods in this setting, we must first tackle a number of challenges. We outline some of these challenges and present methods for overcoming them. First, we describe a multiple imputation approach to overcome the problem of missing data. Second, we discuss the use of function approximation in the context of a highly variable observation set. Finally, we discuss approaches to summarizing the evidence in the data for recommending a particular action and quantifying the uncertainty around the Q-function of the recommended policy. We present the results of applying these methods to real clinical trial data of patients with schizophrenia.",
"title": ""
},
{
"docid": "d14812771115b4736c6d46aecadb2d8a",
"text": "This article reports on a helical spring-like piezoresistive graphene strain sensor formed within a microfluidic channel. The helical spring has a tubular hollow structure and is made of a thin graphene layer coated on the inner wall of the channel using an in situ microfluidic casting method. The helical shape allows the sensor to flexibly respond to both tensile and compressive strains in a wide dynamic detection range from 24 compressive strain to 20 tensile strain. Fabrication of the sensor involves embedding a helical thin metal wire with a plastic wrap into a precursor solution of an elastomeric polymer, forming a helical microfluidic channel by removing the wire from cured elastomer, followed by microfluidic casting of a graphene thin layer directly inside the helical channel. The wide dynamic range, in conjunction with mechanical flexibility and stretchability of the sensor, will enable practical wearable strain sensor applications where large strains are often involved.",
"title": ""
},
{
"docid": "59d7685a127b1fd98f2506c993d5ec6e",
"text": "Software defect prediction helps to optimize testing resources allocation by identifying defect-prone modules prior to testing. Most existing models build their prediction capability based on a set of historical data, presumably from the same or similar project settings as those under prediction. However, such historical data is not always available in practice. One potential way of predicting defects in projects without historical data is to learn predictors from data of other projects. This paper investigates defect predictions in the cross-project context focusing on the selection of training data. We conduct three large-scale experiments on 34 data sets obtained from 10 open source projects. Major conclusions from our experiments include: (1) in the best cases, training data from other projects can provide better prediction results than training data from the same project; (2) the prediction results obtained using training data from other projects meet our criteria for acceptance on the average level, defects in 18 out of 34 cases were predicted at a Recall greater than 70% and a Precision greater than 50%; (3) results of cross-project defect predictions are related with the distributional characteristics of data sets which are valuable for training data selection. We further propose an approach to automatically select suitable training data for projects without historical data. Prediction results provided by the training data selected by using our approach are comparable with those provided by training data from the same project.",
"title": ""
},
{
"docid": "7dfb6a3a619f7062452aa97aaa134c45",
"text": "Most companies favour the creation and nurturing of long-term relationships with customers because retaining customers is more profitable than acquiring new ones. Churn prediction is a predictive analytics technique to identify churning customers ahead of their departure and enable customer relationship managers to take action to keep them. This work evaluates the development of an expert system for churn prediction and prevention using a Hidden Markov model (HMM). A HMM is implemented on unique data from a mobile application and its predictive performance is compared to other algorithms that are commonly used for churn prediction: Logistic Regression, Neural Network and Support Vector Machine. Predictive performance of the HMM is not outperformed by the other algorithms. HMM has substantial advantages for use in expert systems though due to low storage and computational requirements and output of highly relevant customer motivational states. Generic session data of the mobile app is used to train and test the models which makes the system very easy to deploy and the findings applicable to the whole ecosystem of mobile apps distributed in Apple's App and Google's Play Store.",
"title": ""
},
{
"docid": "e62fd95ccd6c10960acc7358ad0a5071",
"text": "The view information of a chest X-ray (CXR), such as frontal or lateral, is valuable in computer aided diagnosis (CAD) of CXRs. For example, it helps for the selection of atlas models for automatic lung segmentation. However, very often, the image header does not provide such information. In this paper, we present a new method for classifying a CXR into two categories: frontal view vs. lateral view. The method consists of three major components: image pre-processing, feature extraction, and classification. The features we selected are image profile, body size ratio, pyramid of histograms of orientation gradients, and our newly developed contour-based shape descriptor. The method was tested on a large (more than 8,200 images) CXR dataset hosted by the National Library of Medicine. The very high classification accuracy (over 99% for 10-fold cross validation) demonstrates the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "aec82326c1fea34da9935731e4c476f4",
"text": "This paper presents a trajectory tracking control design which provides the essential spatial-temporal feedback control capability for fixed-wing unmanned aerial vehicles (UAVs) to execute a time critical mission reliably. In this design, a kinematic trajectory tracking control law and a control gain selection method are developed to allow the control law to be implemented on a fixed-wing UAV based on the platform's dynamic capability. The tracking control design assumes the command references of the heading and airspeed control systems are the accessible control inputs, and it does not impose restrictive model assumptions on the UAV's control systems. The control design is validated using a high-fidelity nonlinear six degrees of freedom (6DOF) model and the reported results suggest that the proposed tracking control design is able to track time-parameterized trajectories stably with robust control performance.",
"title": ""
},
{
"docid": "36fef38de53386e071ee2a1996aa733f",
"text": "Knowledge embedding, which projects triples in a given knowledge base to d-dimensional vectors, has attracted considerable research efforts recently. Most existing approaches treat the given knowledge base as a set of triplets, each of whose representation is then learned separately. However, as a fact, triples are connected and depend on each other. In this paper, we propose a graph aware knowledge embedding method (GAKE), which formulates knowledge base as a directed graph, and learns representations for any vertices or edges by leveraging the graph’s structural information. We introduce three types of graph context for embedding: neighbor context, path context, and edge context, each reflects properties of knowledge from different perspectives. We also design an attention mechanism to learn representative power of different vertices or edges. To validate our method, we conduct several experiments on two tasks. Experimental results suggest that our method outperforms several state-of-art knowledge embedding models.",
"title": ""
},
{
"docid": "bb5e00ac09e12f3cdb097c8d6cfde9a9",
"text": "3D biomaterial printing has emerged as a potentially revolutionary technology, promising to transform both research and medical therapeutics. Although there has been recent progress in the field, on-demand fabrication of functional and transplantable tissues and organs is still a distant reality. To advance to this point, there are two major technical challenges that must be overcome. The first is expanding upon the limited variety of available 3D printable biomaterials (biomaterial inks), which currently do not adequately represent the physical, chemical, and biological complexity and diversity of tissues and organs within the human body. Newly developed biomaterial inks and the resulting 3D printed constructs must meet numerous interdependent requirements, including those that lead to optimal printing, structural, and biological outcomes. The second challenge is developing and implementing comprehensive biomaterial ink and printed structure characterization combined with in vitro and in vivo tissueand organ-specific evaluation. This perspective outlines considerations for addressing these technical hurdles that, once overcome, will facilitate rapid advancement of 3D biomaterial printing as an indispensable tool for both investigating complex tissue and organ morphogenesis and for developing functional devices for a variety of diagnostic and regenerative medicine applications. PAPER 5 Contributed equally to this work. REcEivEd",
"title": ""
},
{
"docid": "b9bb07dd039c0542a7309f2291732f82",
"text": "Recent progress in acquiring shape from range data permits the acquisition of seamless million-polygon meshes from physical models. In this paper, we present an algorithm and system for converting dense irregular polygon meshes of arbitrary topology into tensor product B-spline surface patches with accompanying displacement maps. This choice of representation yields a coarse but efficient model suitable for animation and a fine but more expensive model suitable for rendering. The first step in our process consists of interactively painting patch boundaries over a rendering of the mesh. In many applications, interactive placement of patch boundaries is considered part of the creative process and is not amenable to automation. The next step is gridded resampling of each boundedsection of the mesh. Our resampling algorithm lays a grid of springs across the polygon mesh, then iterates between relaxing this grid and subdividing it. This grid provides a parameterization for the mesh section, which is initially unparameterized. Finally, we fit a tensor product B-spline surface to the grid. We also output a displacement map for each mesh section, which represents the error between our fitted surface and the spring grid. These displacement maps are images; hence this representation facilitates the use of image processing operators for manipulating the geometric detail of an object. They are also compatible with modern photo-realistic rendering systems. Our resampling and fitting steps are fast enough to surface a million polygon mesh in under 10 minutes important for an interactive system. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling —curve, surface and object representations; I.3.7[Computer Graphics]:Three-Dimensional Graphics and Realism—texture; J.6[Computer-Aided Engineering]:ComputerAided Design (CAD); G.1.2[Approximation]:Spline Approximation Additional",
"title": ""
},
{
"docid": "5625166c3e84059dd7b41d3c0e37e080",
"text": "External border surveillance is critical to the security of every state and the challenges it poses are changing and likely to intensify. Wireless sensor networks (WSN) are a low cost technology that provide an intelligence-led solution to effective continuous monitoring of large, busy, and complex landscapes. The linear network topology resulting from the structure of the monitored area raises challenges that have not been adequately addressed in the literature to date. In this paper, we identify an appropriate metric to measure the quality of WSN border crossing detection. Furthermore, we propose a method to calculate the required number of sensor nodes to deploy in order to achieve a specified level of coverage according to the chosen metric in a given belt region, while maintaining radio connectivity within the network. Then, we contribute a novel cross layer routing protocol, called levels division graph (LDG), designed specifically to address the communication needs and link reliability for topologically linear WSN applications. The performance of the proposed protocol is extensively evaluated in simulations using realistic conditions and parameters. LDG simulation results show significant performance gains when compared with its best rival in the literature, dynamic source routing (DSR). Compared with DSR, LDG improves the average end-to-end delays by up to 95%, packet delivery ratio by up to 20%, and throughput by up to 60%, while maintaining comparable performance in terms of normalized routing load and energy consumption.",
"title": ""
},
{
"docid": "535ebbee465f6a009a2a85c47115a51b",
"text": "Online social networks (OSNs) are increasingly threatened by social bots which are software-controlled OSN accounts that mimic human users with malicious intentions. A social botnet refers to a group of social bots under the control of a single botmaster, which collaborate to conduct malicious behavior while mimicking the interactions among normal OSN users to reduce their individual risk of being detected. We demonstrate the effectiveness and advantages of exploiting a social botnet for spam distribution and digital-influence manipulation through real experiments on Twitter and also trace-driven simulations. We also propose the corresponding countermeasures and evaluate their effectiveness. Our results can help understand the potentially detrimental effects of social botnets and help OSNs improve their bot(net) detection systems.",
"title": ""
},
{
"docid": "ff8f72d7afb43513c7a7a6b041a13040",
"text": "The paper first discusses the reasons why simplified solutions for the mechanical structure of fingers in robotic hands should be considered a worthy design goal. After a brief discussion about the mechanical solutions proposed so far for robotic fingers, a different design approach is proposed. It considers finger structures made of rigid links connected by flexural hinges, with joint actuation obtained by means of flexures that can be guided inside each finger according to different patterns. A simplified model of one of these structures is then presented, together with preliminary results of simulation, in order to evaluate the feasibility of the concept. Examples of technological implementation are finally presented and the perspective and problems of application are briefly discussed.",
"title": ""
},
{
"docid": "8aca909e0f83a8ac917a453fdcc73b6f",
"text": "Nearly half a century ago, military organizations introduced “Tempest” emission-security test standards to control information leakage from unintentional electromagnetic emanations of digital electronics. The nature of these emissions has changed with evolving technology; electromechanic devices have vanished and signal frequencies increased several orders of magnitude. Recently published eavesdropping attacks on modern flat-panel displays and cryptographic coprocessors demonstrate that the risk remains acute for applications with high protection requirements. The ultra-wideband signal processing technology needed for practical attacks finds already its way into consumer electronics. Current civilian RFI limits are entirely unsuited for emission security purposes. Only an openly available set of test standards based on published criteria will help civilian vendors and users to estimate and manage emission-security risks appropriately. This paper outlines a proposal and rationale for civilian electromagnetic emission-security limits. While the presented discussion aims specifically at far-field video eavesdropping in the VHF and UHF bands, the most easy to demonstrate risk, much of the presented approach for setting test limits could be adapted equally to address other RF emanation risks.",
"title": ""
},
{
"docid": "c47881213aa27d29d11579840f7ef1ae",
"text": "While patients with poor functional health literacy (FHL) have difficulties reading and comprehending written medical instructions, it is not known whether these patients also experience problems with other modes of communication, such as face-to-face encounters with primary care physicians. We enrolled 408 English- and Spanish-speaking diabetes patients to examine whether patients with inadequate FHL report worse communication than patients with adequate FHL. We assessed patients' experiences of communication using sub-scales from the Interpersonal Processes of Care in Diverse Populations instrument. In multivariate models, patients with inadequate FHL, compared to patients with adequate FHL, were more likely to report worse communication in the domains of general clarity (adjusted odds ratio [AOR] 6.29, P<0.01), explanation of condition (AOR 4.85, P=0.03), and explanation of processes of care (AOR 2.70, p=0.03). Poor FHL appears to be a marker for oral communication problems, particularly in the technical, explanatory domains of clinician-patient dialogue. Research is needed to identify strategies to improve communication for this group of patients.",
"title": ""
},
{
"docid": "f78fcf875104f8bab2fa465c414331c6",
"text": "In this paper, we present a systematic framework for recognizing realistic actions from videos “in the wild”. Such unconstrained videos are abundant in personal collections as well as on the Web. Recognizing action from such videos has not been addressed extensively, primarily due to the tremendous variations that result from camera motion, background clutter, changes in object appearance, and scale, etc. The main challenge is how to extract reliable and informative features from the unconstrained videos. We extract both motion and static features from the videos. Since the raw features of both types are dense yet noisy, we propose strategies to prune these features. We use motion statistics to acquire stable motion features and clean static features. Furthermore, PageRank is used to mine the most informative static features. In order to further construct compact yet discriminative visual vocabularies, a divisive information-theoretic algorithm is employed to group semantically related features. Finally, AdaBoost is chosen to integrate all the heterogeneous yet complementary features for recognition. We have tested the framework on the KTH dataset and our own dataset consisting of 11 categories of actions collected from YouTube and personal videos, and have obtained impressive results for action recognition and action localization.",
"title": ""
},
{
"docid": "a6ddbe0f834c38079282db91599e076d",
"text": "BACKGROUND\nThe efficacy of closure of a patent foramen ovale (PFO) in the prevention of recurrent stroke after cryptogenic stroke is uncertain. We investigated the effect of PFO closure combined with antiplatelet therapy versus antiplatelet therapy alone on the risks of recurrent stroke and new brain infarctions.\n\n\nMETHODS\nIn this multinational trial involving patients with a PFO who had had a cryptogenic stroke, we randomly assigned patients, in a 2:1 ratio, to undergo PFO closure plus antiplatelet therapy (PFO closure group) or to receive antiplatelet therapy alone (antiplatelet-only group). Imaging of the brain was performed at the baseline screening and at 24 months. The coprimary end points were freedom from clinical evidence of ischemic stroke (reported here as the percentage of patients who had a recurrence of stroke) through at least 24 months after randomization and the 24-month incidence of new brain infarction, which was a composite of clinical ischemic stroke or silent brain infarction detected on imaging.\n\n\nRESULTS\nWe enrolled 664 patients (mean age, 45.2 years), of whom 81% had moderate or large interatrial shunts. During a median follow-up of 3.2 years, clinical ischemic stroke occurred in 6 of 441 patients (1.4%) in the PFO closure group and in 12 of 223 patients (5.4%) in the antiplatelet-only group (hazard ratio, 0.23; 95% confidence interval [CI], 0.09 to 0.62; P=0.002). The incidence of new brain infarctions was significantly lower in the PFO closure group than in the antiplatelet-only group (22 patients [5.7%] vs. 20 patients [11.3%]; relative risk, 0.51; 95% CI, 0.29 to 0.91; P=0.04), but the incidence of silent brain infarction did not differ significantly between the study groups (P=0.97). Serious adverse events occurred in 23.1% of the patients in the PFO closure group and in 27.8% of the patients in the antiplatelet-only group (P=0.22). Serious device-related adverse events occurred in 6 patients (1.4%) in the PFO closure group, and atrial fibrillation occurred in 29 patients (6.6%) after PFO closure.\n\n\nCONCLUSIONS\nAmong patients with a PFO who had had a cryptogenic stroke, the risk of subsequent ischemic stroke was lower among those assigned to PFO closure combined with antiplatelet therapy than among those assigned to antiplatelet therapy alone; however, PFO closure was associated with higher rates of device complications and atrial fibrillation. (Funded by W.L. Gore and Associates; Gore REDUCE ClinicalTrials.gov number, NCT00738894 .).",
"title": ""
},
{
"docid": "140fd854c8564b75609f692229ac616e",
"text": "Modern search systems are based on dozens or even hundreds of ranking features. The dueling bandit gradient descent (DBGD) algorithm has been shown to effectively learn combinations of these features solely from user interactions. DBGD explores the search space by comparing a possibly improved ranker to the current production ranker. To this end, it uses interleaved comparison methods, which can infer with high sensitivity a preference between two rankings based only on interaction data. A limiting factor is that it can compare only to a single exploratory ranker. We propose an online learning to rank algorithm called multileave gradient descent (MGD) that extends DBGD to learn from so-called multileaved comparison methods that can compare a set of rankings instead of merely a pair. We show experimentally that MGD allows for better selection of candidates than DBGD without the need for more comparisons involving users. An important implication of our results is that orders of magnitude less user interaction data is required to find good rankers when multileaved comparisons are used within online learning to rank. Hence, fewer users need to be exposed to possibly inferior rankers and our method allows search engines to adapt more quickly to changes in user preferences.",
"title": ""
}
] | scidocsrr |
2a3d81dcfe9827429ff879c5242e12e5 | Vital Sign Monitoring Through the Back Using an UWB Impulse Radar With Body Coupled Antennas | [
{
"docid": "c70e11160c90bd67caa2294c499be711",
"text": "The vital sign monitoring through Impulse Radio Ultra-Wide Band (IR-UWB) radar provides continuous assessment of a patient's respiration and heart rates in a non-invasive manner. In this paper, IR UWB radar is used for monitoring respiration and the human heart rate. The breathing and heart rate frequencies are extracted from the signal reflected from the human body. A Kalman filter is applied to reduce the measurement noise from the vital signal. An algorithm is presented to separate the heart rate signal from the breathing harmonics. An auto-correlation based technique is applied for detecting random body movements (RBM) during the measurement process. Experiments were performed in different scenarios in order to show the validity of the algorithm. The vital signs were estimated for the signal reflected from the chest, as well as from the back side of the body in different experiments. The results from both scenarios are compared for respiration and heartbeat estimation accuracy.",
"title": ""
}
] | [
{
"docid": "1d7035cc5b85e13be6ff932d39740904",
"text": "This paper investigates an application of mobile sensing: detection of potholes on roads. We describe a system and an associated algorithm to monitor the pothole conditions on the road. This system, that we call the Pothole Detection System, uses Accelerometer Sensor of Android smartphone for detection of potholes and GPS for plotting the location of potholes on Google Maps. Using a simple machine-learning approach, we show that we are able to identify the potholes from accelerometer data. The pothole detection algorithm detects the potholes in real-time. A runtime graph has been shown with the help of a charting software library ‘AChartEngine’. Accelerometer data and pothole data can be mailed to any email address in the form of a ‘.csv’ file. While designing the pothole detection algorithm we have assumed some threshold values on x-axis and z-axis. These threshold values are justified using a neural network technique which confirms an accuracy of 90%-95%. The neural network has been implemented using a machine learning framework available for Android called ‘Encog’. We evaluate our system on the outputs obtained using two, three and four wheelers. Keywords— Machine Learning, Context, Android, Neural Networks, Pothole, Sensor",
"title": ""
},
{
"docid": "c55cf6c871a681cad112cb9c664a1928",
"text": "Splitting of the behavioural activity phase has been found in nocturnal rodents with suprachiasmatic nucleus (SCN) coupling disorder. A similar phenomenon was observed in the sleep phase in the diurnal human discussed here, suggesting that there are so-called evening and morning oscillators in the SCN of humans. The present case suffered from bipolar disorder refractory to various treatments, and various circadian rhythm sleep disorders, such as delayed sleep phase, polyphasic sleep, separation of the sleep bout resembling splitting and circabidian rhythm (48 h), were found during prolonged depressive episodes with hypersomnia. Separation of sleep into evening and morning components and delayed sleep-offset (24.69-h cycle) developed when lowering and stopping the dose of aripiprazole (APZ). However, resumption of APZ improved these symptoms in 2 weeks, accompanied by improvement in the patient's depressive state. Administration of APZ may improve various circadian rhythm sleep disorders, as well as improve and prevent manic-depressive episodes, via augmentation of coupling in the SCN network.",
"title": ""
},
{
"docid": "c83456247c28dd7824e9611f3c59167d",
"text": "In this paper, we present a carry skip adder (CSKA) structure that has a higher speed yet lower energy consumption compared with the conventional one. The speed enhancement is achieved by applying concatenation and incrementation schemes to improve the efficiency of the conventional CSKA (Conv-CSKA) structure. In addition, instead of utilizing multiplexer logic, the proposed structure makes use of AND-OR-Invert (AOI) and OR-AND-Invert (OAI) compound gates for the skip logic. The structure may be realized with both fixed stage size and variable stage size styles, wherein the latter further improves the speed and energy parameters of the adder. Finally, a hybrid variable latency extension of the proposed structure, which lowers the power consumption without considerably impacting the speed, is presented. This extension utilizes a modified parallel structure for increasing the slack time, and hence, enabling further voltage reduction. The proposed structures are assessed by comparing their speed, power, and energy parameters with those of other adders using a 45-nm static CMOS technology for a wide range of supply voltages. The results that are obtained using HSPICE simulations reveal, on average, 44% and 38% improvements in the delay and energy, respectively, compared with those of the Conv-CSKA. In addition, the power-delay product was the lowest among the structures considered in this paper, while its energy-delay product was almost the same as that of the Kogge-Stone parallel prefix adder with considerably smaller area and power consumption. Simulations on the proposed hybrid variable latency CSKA reveal reduction in the power consumption compared with the latest works in this field while having a reasonably high speed.",
"title": ""
},
{
"docid": "19443768282cf17805e70ac83288d303",
"text": "Interactive narrative is a form of storytelling in which users affect a dramatic storyline through actions by assuming the role of characters in a virtual world. This extended abstract outlines the SCHEHERAZADE-IF system, which uses crowdsourcing and artificial intelligence to automatically construct text-based interactive narrative experiences.",
"title": ""
},
{
"docid": "cace842a0c5507ae447e5009fb160592",
"text": "UNLABELLED\nDue to the localized surface plasmon (LSP) effect induced by Ag nanoparticles inside black silicon, the optical absorption of black silicon is enhanced dramatically in near-infrared range (1,100 to 2,500 nm). The black silicon with Ag nanoparticles shows much higher absorption than black silicon fabricated by chemical etching or reactive ion etching over ultraviolet to near-infrared (UV-VIS-NIR, 250 to 2,500 nm). The maximum absorption even increased up to 93.6% in the NIR range (820 to 2,500 nm). The high absorption in NIR range makes LSP-enhanced black silicon a potential material used for NIR-sensitive optoelectronic device.\n\n\nPACS\n78.67.Bf; 78.30.Fs; 78.40.-q; 42.70.Gi.",
"title": ""
},
{
"docid": "7db4066e2e6faabe0dfd815cd5b1d66e",
"text": "The observed poor quality of graduates of some Nigerian Universities in recent times has been partly traced to inadequacies of the National University Admission Examination System. In this study an Artificial Neural Network (ANN) model, for predicting the likely performance of a candidate being considered for admission into the university was developed and tested. Various factors that may likely influence the performance of a student were identified. Such factors as ordinary level subjects’ scores and subjects’ combination, matriculation examination scores, age on admission, parental background, types and location of secondary school attended and gender, among others, were then used as input variables for the ANN model. A model based on the Multilayer Perceptron Topology was developed and trained using data spanning five generations of graduates from an Engineering Department of University of Ibadan, Nigeria’s first University. Test data evaluation shows that the ANN model is able to correctly predict the performance of more than 70% of prospective students. (",
"title": ""
},
{
"docid": "f7d023abf0f651177497ae38d8494efc",
"text": "Developing Question Answering systems has been one of the important research issues because it requires insights from a variety of disciplines, including, Artificial Intelligence, Information Retrieval, Information Extraction, Natural Language Processing, and Psychology. In this paper we realize a formal model for a lightweight semantic–based open domain yes/no Arabic question answering system based on paragraph retrieval (with variable length). We propose a constrained semantic representation. Using an explicit unification framework based on semantic similarities and query expansion (synonyms and antonyms). This frequently improves the precision of the system. Employing the passage retrieval system achieves a better precision by retrieving more paragraphs that contain relevant answers to the question; It significantly reduces the amount of text to be processed by the system.",
"title": ""
},
{
"docid": "db5157c6682f281fb0f8ad1285646042",
"text": "There are currently very few practical methods for assessin g the quality of resources or the reliability of other entities in the o nline environment. This makes it difficult to make decisions about which resources ca n be relied upon and which entities it is safe to interact with. Trust and repu tation systems are aimed at solving this problem by enabling service consumers to eliably assess the quality of services and the reliability of entities befo r they decide to use a particular service or to interact with or depend on a given en tity. Such systems should also allow serious service providers and online play ers to correctly represent the reliability of themselves and the quality of thei r s rvices. In the case of reputation systems, the basic idea is to let parties rate e ch other, for example after the completion of a transaction, and use the aggreg ated ratings about a given party to derive its reputation score. In the case of tru st systems, the basic idea is to analyse and combine paths and networks of trust rel ationships in order to derive measures of trustworthiness of specific nodes. Rep utation scores and trust measures can assist other parties in deciding whether or not to transact with a given party in the future, and whether it is safe to depend on a given resource or entity. This represents an incentive for good behaviour and for offering reliable resources, which thereby tends to have a positive effect on t he quality of online markets and communities. This chapter describes the backgr ound, current status and future trend of online trust and reputation systems.",
"title": ""
},
{
"docid": "b9a1883e48cc1651d887124a2dee3831",
"text": "It is known that local filtering-based edge preserving smoothing techniques suffer from halo artifacts. In this paper, a weighted guided image filter (WGIF) is introduced by incorporating an edge-aware weighting into an existing guided image filter (GIF) to address the problem. The WGIF inherits advantages of both global and local smoothing filters in the sense that: 1) the complexity of the WGIF is O(N) for an image with N pixels, which is same as the GIF and 2) the WGIF can avoid halo artifacts like the existing global smoothing filters. The WGIF is applied for single image detail enhancement, single image haze removal, and fusion of differently exposed images. Experimental results show that the resultant algorithms produce images with better visual quality and at the same time halo artifacts can be reduced/avoided from appearing in the final images with negligible increment on running times.",
"title": ""
},
{
"docid": "2de8df231b5af77cfd141e26fb7a3ace",
"text": "A significant challenge for the practical application of reinforcement learning in the real world is the need to specify an oracle reward function that correctly defines a task. Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g. opening any type of door). Thus in practice, IRL must commonly be performed with only a limited set of demonstrations where it can be exceedingly difficult to unambiguously recover a reward function. In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a “prior” that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.",
"title": ""
},
{
"docid": "e2c2cdb5245b73b7511c434c4901fff8",
"text": "Adversarial machine learning in the context of image processing and related applications has received a large amount of attention. However, adversarial machine learning, especially adversarial deep learning, in the context of malware detection has received much less attention despite its apparent importance. In this paper, we present a framework for enhancing the robustness of Deep Neural Networks (DNNs) against adversarial malware samples, dubbed Hashing Transformation Deep Neural Networks (HashTran-DNN). The core idea is to use hash functions with a certain locality-preserving property to transform samples to enhance the robustness of DNNs in malware classification. The framework further uses a Denoising Auto-Encoder (DAE) regularizer to reconstruct the hash representations of samples, making the resulting DNN classifiers capable of attaining the locality information in the latent space. We experiment with two concrete instantiations of the HashTranDNN framework to classify Android malware. Experimental results show that four known attacks can render standard DNNs useless in classifying Android malware, that known defenses can at most defend three of the four attacks, and that HashTran-DNN can effectively defend against all of the four attacks.",
"title": ""
},
{
"docid": "5cc1058a0c88ff15e2992a4d83fdbe3f",
"text": "The paper presents a finite-element method-based design and analysis of interior permanent magnet synchronous motor with flux barriers (IPMSMFB). Various parameters of IPMSMFB rotor structure were taken into account at determination of a suitable rotor construction. On the basis of FEM analysis the rotor of IPMSMFB with three-flux barriers was built. Output torque capability and flux weakening performance of IPMSMFB were compared with performances of conventional interior permanent magnet synchronous motor (IPMSM), having the same rotor geometrical dimensions and the same stator construction. The predicted performance of conventional IPMSM and IPMSMFB was confirmed with the measurements over a wide-speed range of constant output power operation.",
"title": ""
},
{
"docid": "af19c558ac6b5b286bc89634a1f05e26",
"text": "The SIGIR 2016 workshop on Neural Information Retrieval (Neu-IR) took place on 21 July, 2016 in Pisa. The goal of the Neu-IR (pronounced \"New IR\") workshop was to serve as a forum for academic and industrial researchers, working at the intersection of information retrieval (IR) and machine learning, to present new work and early results, compare notes on neural network toolkits, share best practices, and discuss the main challenges facing this line of research. In total, 19 papers were presented, including oral and poster presentations. The workshop program also included a session on invited \"lightning talks\" to encourage participants to share personal insights and negative results with the community. The workshop was well-attended with more than 120 registrations.",
"title": ""
},
{
"docid": "39a394f6c7f42f3a5e1451b0337584ed",
"text": "Surveys throughout the world have shown consistently that persons over 65 are far less likely to be victims of crime than younger age groups. However, many elderly people are unduly fearful about crime which has an adverse effect on their quality of life. This Trends and Issues puts this matter into perspective, but also discusses the more covert phenomena of abuse and neglect of the elderly. Our senior citizens have earned the right to live in dignity and without fear: the community as a whole should contribute to this process. Duncan Chappell Director",
"title": ""
},
{
"docid": "42f176b03faacad53ccef0b7573afdc4",
"text": "Acquired upper extremity amputations beyond the finger can have substantial physical, psychological, social, and economic consequences for the patient. The hand surgeon is one of a team of specialists in the care of these patients, but the surgeon plays a critical role in the surgical management of these wounds. The execution of a successful amputation at each level of the limb allows maximum use of the residual extremity, with or without a prosthesis, and minimizes the known complications of these injuries. This article reviews current surgical options in performing and managing upper extremity amputations proximal to the finger.",
"title": ""
},
{
"docid": "7347c844cdc0b7e4b365dafcdc9f720c",
"text": "Recommender systems are widely used in online applications since they enable personalized service to the users. The underlying collaborative filtering techniques work on user’s data which are mostly privacy sensitive and can be misused by the service provider. To protect the privacy of the users, we propose to encrypt the privacy sensitive data and generate recommendations by processing them under encryption. With this approach, the service provider learns no information on any user’s preferences or the recommendations made. The proposed method is based on homomorphic encryption schemes and secure multiparty computation (MPC) techniques. The overhead of working in the encrypted domain is minimized by packing data as shown in the complexity analysis.",
"title": ""
},
{
"docid": "545f41e1c94a3198e75801da4c39b0da",
"text": "When attempting to improve the performance of a deep learning system, there are more or less three approaches one can take: the first is to improve the structure of the model, perhaps adding another layer, switching from simple recurrent units to LSTM cells [4], or–in the realm of NLP–taking advantage of syntactic parses (e.g. as in [13, et seq.]); another approach is to improve the initialization of the model, guaranteeing that the early-stage gradients have certain beneficial properties [3], or building in large amounts of sparsity [6], or taking advantage of principles of linear algebra [15]; the final approach is to try a more powerful learning algorithm, such as including a decaying sum over the previous gradients in the update [12], by dividing each parameter update by the L2 norm of the previous updates for that parameter [2], or even by foregoing first-order algorithms for more powerful but more computationally costly second order algorithms [9]. This paper has as its goal the third option—improving the quality of the final solution by using a faster, more powerful learning algorithm.",
"title": ""
},
{
"docid": "8c80129507b138d1254e39acfa9300fc",
"text": "Motivation\nText mining has become an important tool for biomedical research. The most fundamental text-mining task is the recognition of biomedical named entities (NER), such as genes, chemicals and diseases. Current NER methods rely on pre-defined features which try to capture the specific surface properties of entity types, properties of the typical local context, background knowledge, and linguistic information. State-of-the-art tools are entity-specific, as dictionaries and empirically optimal feature sets differ between entity types, which makes their development costly. Furthermore, features are often optimized for a specific gold standard corpus, which makes extrapolation of quality measures difficult.\n\n\nResults\nWe show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, and often by a large margin. To this end, we compared the performance of LSTM-CRF on 33 data sets covering five different entity classes with that of best-of-class NER tools and an entity-agnostic CRF implementation. On average, F1-score of LSTM-CRF is 5% above that of the baselines, mostly due to a sharp increase in recall.\n\n\nAvailability and implementation\nThe source code for LSTM-CRF is available at https://github.com/glample/tagger and the links to the corpora are available at https://corposaurus.github.io/corpora/ .\n\n\nContact\[email protected].",
"title": ""
},
{
"docid": "55eb5594f05319c157d71361880f1983",
"text": "Following the growing share of wind energy in electric power systems, several wind power forecasting techniques have been reported in the literature in recent years. In this paper, a wind power forecasting strategy composed of a feature selection component and a forecasting engine is proposed. The feature selection component applies an irrelevancy filter and a redundancy filter to the set of candidate inputs. The forecasting engine includes a new enhanced particle swarm optimization component and a hybrid neural network. The proposed wind power forecasting strategy is applied to real-life data from wind power producers in Alberta, Canada and Oklahoma, U.S. The presented numerical results demonstrate the efficiency of the proposed strategy, compared to some other existing wind power forecasting methods.",
"title": ""
},
{
"docid": "d7538c23aa43edce6cfde8f2125fd3bb",
"text": "We propose a holographic-laser-drawing volumetric display using a computer-generated hologram displayed on a liquid crystal spatial light modulator and multilayer fluorescent screen. The holographic-laser-drawing technique has enabled three things; (i) increasing the number of voxels of the volumetric graphics per unit time; (ii) increasing the total input energy to the volumetric display because the maximum energy incident at a point in the multilayer fluorescent screen is limited by the damage threshold; (iii) controlling the size, shape and spatial position of voxels. In this paper, we demonstrated (i) and (ii). The multilayer fluorescent screen was newly developed to display colored voxels. The thin layer construction of the multilayer fluorescent screen minimized the axial length of the voxels. A two-color volumetric display with blue-green voxels and red voxels were demonstrated.",
"title": ""
}
] | scidocsrr |
50217b0b862b3413a52784f3d2ebae5a | An Embedded System-on-Chip Architecture for Real-time Visual Detection and Matching | [
{
"docid": "c797b2a78ea6eb434159fd948c0a1bf0",
"text": "Feature extraction is an essential part in applications that require computer vision to recognize objects in an image processed. To extract the features robustly, feature extraction algorithms are often very demanding in computation so that the performance achieved by pure software is far from real-time. Among those feature extraction algorithms, scale-invariant feature transform (SIFT) has gained a lot of popularity recently. In this paper, we propose an all-hardware SIFT accelerator-the fastest of its kind to our knowledge. It consists of two interactive hardware components, one for key point identification, and the other for feature descriptor generation. We successfully developed a segment buffer scheme that could not only feed data to the computing modules in a data-streaming manner, but also reduce about 50% memory requirement than a previous work. With a parallel architecture incorporating a three-stage pipeline, the processing time of the key point identification is only 3.4 ms for one video graphics array (VGA) image. Taking also into account the feature descriptor generation part, the overall SIFT processing time for a VGA image can be kept within 33 ms (to support real-time operation) when the number of feature points to be extracted is fewer than 890.",
"title": ""
},
{
"docid": "90378605e6ee192cfedf60d226f8cacf",
"text": "Ever since the introduction of freely programmable hardware components into modern graphics hardware, graphics processing units (GPUs) have become increasingly popular for general purpose computations. Especially when applied to computer vision algorithms where a Single set of Instructions has to be executed on Multiple Data (SIMD), GPU-based algorithms can provide a major increase in processing speed compared to their CPU counterparts. This paper presents methods that take full advantage of modern graphics card hardware for real-time scale invariant feature detection and matching. The focus lies on the extraction of feature locations and the generation of feature descriptors from natural images. The generation of these feature-vectors is based on the Speeded Up Robust Features (SURF) method [1] due to its high stability against rotation, scale and changes in lighting condition of the processed images. With the presented methods feature detection and matching can be performed at framerates exceeding 100 frames per second for 640 times 480 images. The remaining time can then be spent on fast matching against large feature databases on the GPU while the CPU can be used for other tasks.",
"title": ""
}
] | [
{
"docid": "5d79d7e9498d7d41fbc7c70d94e6a9ae",
"text": "Reasoning about objects and their affordances is a fundamental problem for visual intelligence. Most of the previous work casts this problem as a classification task where separate classifiers are trained to label objects, recognize attributes, or assign affordances. In this work, we consider the problem of object affordance reasoning using a knowledge base representation. Diverse information of objects are first harvested from images and other meta-data sources. We then learn a knowledge base (KB) using a Markov Logic Network (MLN). Given the learned KB, we show that a diverse set of visual inference tasks can be done in this unified framework without training separate classifiers, including zeroshot affordance prediction and object recognition given human poses.",
"title": ""
},
{
"docid": "bf4776d6d01d63d3eb6dbeba693bf3de",
"text": "As the development of microprocessors, power electronic converters and electric motor drives, electric power steering (EPS) system which uses an electric motor came to use a few year ago. Electric power steering systems have many advantages over traditional hydraulic power steering systems in engine efficiency, space efficiency, and environmental compatibility. This paper deals with design and optimization of an interior permanent magnet (IPM) motor for power steering application. Simulated Annealing method is used for optimization. After optimization and finding motor parameters, An IPM motor and drive with mechanical parts of EPS system is simulated and performance evaluation of system is done.",
"title": ""
},
{
"docid": "0533a5382c58c8714f442784b5596258",
"text": "Using 2 phase-change memory (PCM) devices per synapse, a 3-layer perceptron network with 164,885 synapses is trained on a subset (5000 examples) of the MNIST database of handwritten digits using a backpropagation variant suitable for NVM+selector crossbar arrays, obtaining a training (generalization) accuracy of 82.2% (82.9%). Using a neural network (NN) simulator matched to the experimental demonstrator, extensive tolerancing is performed with respect to NVM variability, yield, and the stochasticity, linearity and asymmetry of NVM-conductance response.",
"title": ""
},
{
"docid": "5512bb4600d4cefa79508d75bc5c6898",
"text": "Spark, a subset of Ada for engineering safety and security-critical systems, is one of the best commercially available frameworks for formal-methodssupported development of critical software. Spark is designed for verification and includes a software contract language for specifying functional properties of procedures. Even though Spark and its static analysis components are beneficial and easy to use, its contract language is almost never used due to the burdens the associated tool support imposes on developers. Symbolic execution (SymExe) techniques have made significant strides in automating reasoning about deep semantic properties of source code. However, most work on SymExe has focused on bugfinding and test case generation as opposed to tasks that are more verificationoriented such as contract checking. In this paper, we present: (a) SymExe techniques for checking software contracts in embedded critical systems, and (b) Bakar Kiasan, a tool that implements these techniques in an integrated development environment for Spark. We describe a methodology for using Bakar Kiasan that provides significant increases in automation, usability, and functionality over existing Spark tools, and we present results from experiments on its application to industrial examples.",
"title": ""
},
{
"docid": "4791b04d1cafd0b4a59bbfbec50ace38",
"text": "The current paper proposes a slack-based version of the Super SBM, which is an alternative superefficiency model for the SBM proposed by Tone. Our two-stage approach provides the same superefficiency score as that obtained by the Super SBM model when the evaluated DMU is efficient and yields the same efficiency score as that obtained by the SBM model when the evaluated DMU is inefficient. The projection identified by the Super SBM model may not be strongly Pareto efficient; however, the projection identified from our approach is strongly Pareto efficient. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6c3a5cb13227b4f1333784784c1b3cb8",
"text": "This is the proposal for RumourEval-2019, which will run in early 2019 as part of that year’s SemEval event. Since the first RumourEval shared task in 2017, interest in automated claim validation has greatly increased, as the dangers of “fake news” have become a mainstream concern. Yet automated support for rumour checking remains in its infancy. For this reason, it is important that a shared task in this area continues to provide a focus for effort, which is likely to increase. We therefore propose a continuation in which the veracity of further rumours is determined, and as previously, supportive of this goal, tweets discussing them are classified according to the stance they take regarding the rumour. Scope is extended compared with the first RumourEval, in that the dataset is substantially expanded to include Reddit as well as Twitter data, and additional languages are also",
"title": ""
},
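The RumourEval proposal above involves classifying the stance that posts take toward a rumour. As a hedged baseline sketch (not the shared-task systems), the code below trains a TF-IDF plus logistic-regression stance classifier on a tiny invented set of labelled posts; the example texts and labels are made up and scikit-learn is assumed to be available.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled posts: stance toward a rumour (support / deny / query / comment).
texts = [
    "this is confirmed by the police",      # support
    "totally false, do not share this",     # deny
    "is there any source for this claim?",  # query
    "wow, what a day",                      # comment
    "the report backs this up",             # support
    "that never happened",                  # deny
]
labels = ["support", "deny", "query", "comment", "support", "deny"]

# Bag-of-ngrams features followed by a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["any evidence for that?", "officials have confirmed the story"]))
```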
{
"docid": "17c42570f165f885062aeafe2338778d",
"text": "Deep learning has made remarkable achievement in many fields. However, learning the parameters of neural networks usually demands a large amount of labeled data. The algorithms of deep learning, therefore, encounter difficulties when applied to supervised learning where only little data are available. This specific task is called few-shot learning. To address it, we propose a novel algorithm for fewshot learning using discrete geometry, in the sense that the samples in a class are modeled as a reduced simplex. The volume of the simplex is used for the measurement of class scatter. During testing, combined with the test sample and the points in the class, a new simplex is formed. Then the similarity between the test sample and the class can be quantized with the ratio of volumes of the new simplex to the original class simplex. Moreover, we present an approach to constructing simplices using local regions of feature maps yielded by convolutional neural networks. Experiments on Omniglot and miniImageNet verify the effectiveness of our simplex algorithm on few-shot learning.",
"title": ""
},
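The few-shot learning abstract above scores a query by how much the volume of a class simplex grows when the query is added. A minimal sketch of that geometric ingredient is shown below: the volume of the simplex spanned by k+1 points in R^d is computed from the Gram determinant of the edge vectors, V = sqrt(det(A^T A)) / k!, and the volume ratio is used as a rough similarity score. The feature vectors are random placeholders rather than CNN feature maps, and this is not the paper's full pipeline.

```python
import numpy as np
from math import factorial

def simplex_volume(points):
    """Volume of the simplex spanned by the rows of `points` (k+1 points in R^d)."""
    p = np.asarray(points, dtype=float)
    edges = p[1:] - p[0]                      # k edge vectors from the first vertex
    gram = edges @ edges.T                    # k x k Gram matrix
    k = edges.shape[0]
    return np.sqrt(max(np.linalg.det(gram), 0.0)) / factorial(k)

rng = np.random.default_rng(0)
class_feats = rng.normal(size=(5, 16))        # 5 support samples of one class (placeholder features)
query = rng.normal(size=(1, 16))              # one query sample

v_class = simplex_volume(class_feats)
v_with_query = simplex_volume(np.vstack([class_feats, query]))

# A smaller increase in volume after adding the query suggests it lies close to the class simplex.
print("volume ratio (class + query) / class:", v_with_query / v_class)
```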
{
"docid": "d3b0957b31f47620c0fa8e65a1cc086a",
"text": "In this paper, we propose series of algorithms for detecting change points in time-series data based on subspace identification, meaning a geometric approach for estimating linear state-space models behind time-series data. Our algorithms are derived from the principle that the subspace spanned by the columns of an observability matrix and the one spanned by the subsequences of time-series data are approximately equivalent. In this paper, we derive a batch-type algorithm applicable to ordinary time-series data, i.e. consisting of only output series, and then introduce the online version of the algorithm and the extension to be available with input-output time-series data. We illustrate the effectiveness of our algorithms with comparative experiments using some artificial and real datasets.",
"title": ""
},
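The record above detects change points by comparing the subspace spanned by past subsequences of a time series with that spanned by recent ones. The following is a hedged sketch of that idea rather than the authors' exact algorithm: it builds Hankel matrices of lagged subsequences before and after each candidate point, extracts the dominant left singular subspaces, and scores the change by the largest principal angle between them. The window sizes, rank, and the synthetic signal are arbitrary illustrative choices.

```python
import numpy as np

def hankel(x, lag):
    """Stack length-`lag` subsequences of x as columns of a Hankel matrix."""
    return np.column_stack([x[i:i + lag] for i in range(len(x) - lag + 1)])

def subspace_change_score(past, future, lag=10, rank=3):
    """Largest principal angle (radians) between dominant subspaces of two windows."""
    U1, _, _ = np.linalg.svd(hankel(past, lag), full_matrices=False)
    U2, _, _ = np.linalg.svd(hankel(future, lag), full_matrices=False)
    s = np.linalg.svd(U1[:, :rank].T @ U2[:, :rank], compute_uv=False)
    return float(np.arccos(np.clip(s.min(), -1.0, 1.0)))

# Synthetic signal with a frequency change at t = 200.
t = np.arange(400)
x = np.where(t < 200, np.sin(0.2 * t), np.sin(0.6 * t))
x = x + 0.05 * np.random.default_rng(0).normal(size=t.size)

window = 60
scores = [subspace_change_score(x[c - window:c], x[c:c + window])
          for c in range(window, len(x) - window)]
print("most likely change point near index:", window + int(np.argmax(scores)))
```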
{
"docid": "49791684a7a455acc9daa2ca69811e74",
"text": "This paper analyzes the basic method of digital video image processing, studies the vehicle license plate recognition system based on image processing in intelligent transport system, presents a character recognition approach based on neural network perceptron to solve the vehicle license plate recognition in real-time traffic flow. Experimental results show that the approach can achieve better positioning effect, has a certain robustness and timeliness.",
"title": ""
},
{
"docid": "d578c75d20e6747d0a381aee3a2c8f78",
"text": "As deep web grows at a very fast pace, there has been increased interest in techniques that help efficiently locate deep-web interfaces. However, due to the large volume of web resources and the dynamic nature of deep web, achieving wide coverage and high efficiency is a challenging issue. We propose a two-stage framework, namely SmartCrawler, for efficient harvesting deep web interfaces. In the first stage, SmartCrawler performs site-based searching for center pages with the help of search engines, avoiding visiting a large number of pages. To achieve more accurate results for a focused crawl, SmartCrawler ranks websites to prioritize highly relevant ones for a given topic. In the second stage, SmartCrawler achieves fast in-site searching by excavating most relevant links with an adaptive link-ranking. To eliminate bias on visiting some highly relevant links in hidden web directories, we design a link tree data structure to achieve wider coverage for a website. Our experimental results on a set of representative domains show the agility and accuracy of our proposed crawler framework, which efficiently retrieves deep-web interfaces from large-scale sites and achieves higher harvest rates than other crawlers.",
"title": ""
},
{
"docid": "9d9086fbdfa46ded883b14152df7f5a5",
"text": "This paper presents a low power continuous time 2nd order Low Pass Butterworth filter operating at power supply of 0.5V suitably designed for biomedical applications. A 3-dB bandwidth of 100 Hz using technology node of 0.18μm is achieved. The operational transconductance amplifier is a significant building block in continuous time filter design. To achieve necessary voltage headroom a pseudo-differential architecture is used to design bulk driven transconductor. In contrast, to the gate-driven OTA bulk-driven have the ability to operate over a wide input range. The output common mode voltage of the transconductor is set by a Common Mode Feedback (CMFB) circuit. The simulation results show that the filter has a peak-to-peak signal swing of 150mV (differential) for 1% THD, a dynamic range of 74.62 dB and consumes a total power of 0.225μW when operating at a supply voltage of 0.5V. The Figure of Merit (FOM) achieved by the filter is 0.055 fJ, lowest among similar low-voltage filters found in the literature.",
"title": ""
},
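The filter record above targets a second-order low-pass Butterworth response with a 100 Hz 3-dB bandwidth. For a quick behavioural reference only (a digital prototype, not the 0.5 V transistor-level OTA design described in the record), the snippet below derives the coefficients with SciPy and checks the magnitude at the corner frequency; the 1 kHz sampling rate is an assumption made purely for illustration.

```python
import numpy as np
from scipy import signal

fs = 1000.0                 # assumed sampling rate, Hz
fc = 100.0                  # 3-dB corner frequency, Hz

# 2nd-order low-pass Butterworth prototype.
b, a = signal.butter(N=2, Wn=fc, btype="low", fs=fs)

# Evaluate the magnitude response and confirm roughly -3 dB at the corner.
w, h = signal.freqz(b, a, worN=4096, fs=fs)
mag_db = 20 * np.log10(np.abs(h))
idx = np.argmin(np.abs(w - fc))
print("gain at %.0f Hz: %.2f dB" % (fc, mag_db[idx]))
```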
{
"docid": "49ca8739b6e28f0988b643fc97e7c6b1",
"text": "Stroke is a leading cause of severe physical disability, causing a range of impairments. Frequently stroke survivors are left with partial paralysis on one side of the body and movement can be severely restricted in the affected side’s hand and arm. We know that effective rehabilitation must be early, intensive and repetitive, which leads to the challenge of how to maintain motivation for people undergoing therapy. This paper discusses why games may be an effective way of addressing the problem of engagement in therapy and analyses which game design patterns may be important for rehabilitation. We present a number of serious games that our group has developed for upper limb rehabilitation. Results of an evaluation of the games are presented which indicate that they may be appropriate for people with stroke.",
"title": ""
},
{
"docid": "08c97484fe3784e2f1fd42606b915f83",
"text": "In the present study we manipulated the importance of performing two event-based prospective memory tasks. In Experiment 1, the event-based task was assumed to rely on relatively automatic processes, whereas in Experiment 2 the event-based task was assumed to rely on a more demanding monitoring process. In contrast to the first experiment, the second experiment showed that importance had a positive effect on prospective memory performance. In addition, the occurrence of an importance effect on prospective memory performance seemed to be mainly due to the features of the prospective memory task itself, and not to the characteristics of the ongoing tasks that only influenced the size of the importance effect. The results suggest that importance instructions may improve prospective memory if the prospective task requires the strategic allocation of attentional monitoring resources.",
"title": ""
},
{
"docid": "2ac1d3ce029f547213c122c0e84650b2",
"text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/class#fall2012/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. (5) Please indicate the submission time and number of late dates clearly in your submission. SCPD students: Please email your solutions to [email protected] with the subject line \" Problem Set 2 Submission \". The first page of your submission should be the homework routing form, which can be found on the SCPD website. Your submission (including the routing form) must be a single pdf file, or we may not be able to grade it. If you are writing your solutions out by hand, please write clearly and in a reasonably large font using a dark pen to improve legibility. 1. [15 points] Constructing kernels In class, we saw that by choosing a kernel K(x, z) = φ(x) T φ(z), we can implicitly map data to a high dimensional space, and have the SVM algorithm work in that space. One way to generate kernels is to explicitly define the mapping φ to a higher dimensional space, and then work out the corresponding K. However in this question we are interested in direct construction of kernels. I.e., suppose we have a function K(x, z) that we think gives an appropriate similarity measure for our learning problem, and we are considering plugging K into the SVM as the kernel function. However for K(x, z) to be a valid kernel, it must correspond to an inner product in some higher dimensional space resulting from some feature mapping φ. Mercer's theorem tells us that K(x, z) is a (Mercer) kernel if and only if for any finite set {x (1) ,. .. , x (m) }, the matrix K is symmetric and positive semidefinite, where the square matrix K ∈ R m×m is given by K ij = K(x (i) , x (j)). Now here comes the question: Let K 1 , K 2 be kernels …",
"title": ""
},
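The problem-set excerpt above works through Mercer's characterization: K is a valid kernel if and only if every kernel matrix it induces is symmetric positive semidefinite. The snippet below is a small numerical illustration, not a proof, that checks this property on random points for a couple of constructed kernels such as the sum and product of two base kernels; the third construction is included as a typical non-kernel counterexample.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                 # 20 random points in R^3

def gram(kernel, X):
    """Build the kernel (Gram) matrix for a finite point set."""
    return np.array([[kernel(x, z) for z in X] for x in X])

def is_psd(K, tol=1e-8):
    K = (K + K.T) / 2                        # symmetrize against round-off
    return bool(np.all(np.linalg.eigvalsh(K) >= -tol))

k1 = lambda x, z: float(x @ z)                              # linear kernel
k2 = lambda x, z: float(np.exp(-np.sum((x - z) ** 2)))      # RBF kernel

constructions = [
    ("k1 + k2", lambda x, z: k1(x, z) + k2(x, z)),
    ("k1 * k2", lambda x, z: k1(x, z) * k2(x, z)),
    ("k1 - 2*k2 (not a kernel in general)", lambda x, z: k1(x, z) - 2 * k2(x, z)),
]
for name, k in constructions:
    print(name, "-> PSD on this sample:", is_psd(gram(k, X)))
```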
{
"docid": "ca65a232e6b93f6372d1339a11ea63f4",
"text": "Over the past decade, information technology has dramatically changed the context in which economic transactions take place. Increasingly, transactions are computer-mediated, so that, relative to humanhuman interactions, human-computer interactions are gaining in relevance. Computer-mediated transactions, and in particular those related to the Internet, increase perceptions of uncertainty. Therefore, trust becomes a crucial factor in the reduction of these perceptions. To investigate this important construct, we studied individual trust behavior and the underlying brain mechanisms through a multi-round trust game. Participants acted in the role of an investor, playing against both humans and avatars. The behavioral results show that participants trusted avatars to a similar degree as they trusted humans. Participants also revealed similarity in learning an interaction partner’s trustworthiness, independent of whether the partner was human or avatar. However, the neuroimaging findings revealed differential responses within the brain network that is associated with theory of mind (mentalizing) depending on the interaction partner. Based on these results, the major conclusion of our study is that, in a situation of a computer with human-like characteristics (avatar), trust behavior in human-computer interaction resembles that of human-human interaction. On a deeper neurobiological level, our study reveals that thinking about an interaction partner’s trustworthiness activates the mentalizing network more strongly if the trustee is a human rather than an avatar. We discuss implications of these findings for future research.",
"title": ""
},
{
"docid": "fcc0032fac0a13f99cafd936aeada724",
"text": "This paper shows that several sorts of expressions cannot be interpreted metaphorically, including determiners, tenses, etc. Generally, functional categories cannot be interpreted metaphorically, while lexical categories can. This reveals a semantic property of functional categories, and it shows that metaphor can be used as a probe for investigating them. It also reveals an important linguistic constraint on metaphor. The paper argues this constraint applies to the interface between the cognitive systems for language and metaphor. However, the constraint does not completely prevent structural elements of language from being available to the metaphor system. The paper shows that linguistic structure within the lexicon, specifically, aspectual structure, is available to the metaphor system. This paper takes as its starting point an observation about which sorts of expressions can receive metaphorical interpretations. Surprisingly, there are a number of expressions that cannot be interpreted metaphorically. Quantifier expressions (i.e. determiners) provide a good example. Consider a richly metaphorical sentence like: (1) Read o’er the volume of young Paris’ face, And find delight writ there with beauty’s pen; Examine every married lineament (Romeo and Juliet I.3). Metaphor and Lexical Semantics 2 In appreciating Shakespeare’s lovely use of language, writ and pen are obviously understood metaphorically, and married lineament must be too. (The meanings listed in the Oxford English Dictionary for lineament include diagram, portion of a body, and portion of the face viewed with respect to its outline.) In spite of all this rich metaphor, every means simply every, in its usual literal form. Indeed, we cannot think of what a metaphorical interpretation of every would be. As we will see, this is not an isolated case: while many expressions can be interpreted metaphorically, there is a broad and important group of expressions that cannot. Much of this paper will be devoted to exploring the significance of this observation. It shows us something about metaphor. In particular, it shows that there is a non-trivial linguistic constraint on metaphor. This is a somewhat surprising result, as one of the leading ideas in the theory of metaphor is that metaphor comprehension is an aspect of our more general cognitive abilities, and not tied to the specific structure of language. The constraint on metaphor also shows us something about linguistic meaning. We will see that the class of expressions that fail to have metaphorical interpretations is a linguistically important one. Linguistic items are often grouped into two classes: lexical categories, including nouns, verbs, etc., and functional categories, including determiners (quantifier expressions), tenses, etc. Generally, we will see that lexical categories can have metaphorical interpretations, while functional ones cannot. This reveals something about the kinds of semantic properties these expressions can have. It also shows that we can use the availability of metaphorical interpretation as a kind of probe, to help distinguish these sorts of categories. Functional categories are often described as ‘structural elements’ of language. They are the ‘linguistic glue’ that holds sentences together, and so, their expressions are described as being semantically ‘thin’. Our metaphor probe will give some substance to this (often very rough-andready) idea. 
But it raises the question of whether all such structural elements in language—anything we can describe as ‘linguistic glue’— are invisible when it comes to metaphorical interpretation. We will see that this is not so. In particular, we will see that linguistic structure that can be found within lexical items may be available to metaphorical interpretation. This paper will show specifically that so-called aspecVol. 3: A Figure of Speech",
"title": ""
},
{
"docid": "22d8bfa59bb8e25daa5905dbb9e1deea",
"text": "BACKGROUND\nSubacromial impingement syndrome (SAIS) is a painful condition resulting from the entrapment of anatomical structures between the anteroinferior corner of the acromion and the greater tuberosity of the humerus.\n\n\nOBJECTIVE\nThe aim of this study was to evaluate the short-term effectiveness of high-intensity laser therapy (HILT) versus ultrasound (US) therapy in the treatment of SAIS.\n\n\nDESIGN\nThe study was designed as a randomized clinical trial.\n\n\nSETTING\nThe study was conducted in a university hospital.\n\n\nPATIENTS\nSeventy patients with SAIS were randomly assigned to a HILT group or a US therapy group.\n\n\nINTERVENTION\nStudy participants received 10 treatment sessions of HILT or US therapy over a period of 2 consecutive weeks.\n\n\nMEASUREMENTS\nOutcome measures were the Constant-Murley Scale (CMS), a visual analog scale (VAS), and the Simple Shoulder Test (SST).\n\n\nRESULTS\nFor the 70 study participants (42 women and 28 men; mean [SD] age=54.1 years [9.0]; mean [SD] VAS score at baseline=6.4 [1.7]), there were no between-group differences at baseline in VAS, CMS, and SST scores. At the end of the 2-week intervention, participants in the HILT group showed a significantly greater decrease in pain than participants in the US therapy group. Statistically significant differences in change in pain, articular movement, functionality, and muscle strength (force-generating capacity) (VAS, CMS, and SST scores) were observed after 10 treatment sessions from the baseline for participants in the HILT group compared with participants in the US therapy group. In particular, only the difference in change of VAS score between groups (1.65 points) surpassed the accepted minimal clinically important difference for this tool.\n\n\nLIMITATIONS\nThis study was limited by sample size, lack of a control or placebo group, and follow-up period.\n\n\nCONCLUSIONS\nParticipants diagnosed with SAIS showed greater reduction in pain and improvement in articular movement functionality and muscle strength of the affected shoulder after 10 treatment sessions of HILT than did participants receiving US therapy over a period of 2 consecutive weeks.",
"title": ""
},
{
"docid": "57f3b7130d41a176410015ca03b9c954",
"text": "Sudhausia aristotokia n. gen., n. sp. and S. crassa n. gen., n. sp. (Nematoda: Diplogastridae): viviparous new species with precocious gonad development Matthias HERRMANN 1, Erik J. RAGSDALE 1, Natsumi KANZAKI 2 and Ralf J. SOMMER 1,∗ 1 Max Planck Institute for Developmental Biology, Department of Evolutionary Biology, Spemannstraße 37, Tübingen, Germany 2 Forest Pathology Laboratory, Forestry and Forest Products Research Institute, 1 Matsunosato, Tsukuba, Ibaraki 305-8687, Japan",
"title": ""
},
{
"docid": "dab15cc440d17efc5b3d5b2454cac591",
"text": "The performance of a circular patch antenna with slotted ground plane for body centric communication mainly in the health care monitoring systems for Onbody application is researched. The CP antenna is intended for utilization in UWB, body centric communication applications i.e. in between 3.1 to 10.6 GHz. The proposed antenna is CP antenna of (30 x 30 x 1.6) mm. It is simulated via CST microwave studio suite. This CP antenna covers the entire ultra wide frequency range (3.9174-13.519) GHz (9.6016) GHz with the VSWR of (3.818 GHz13.268 GHz). Antenna’s group delay is to be observed as 3.5 ns. The simulated results of antenna are given in terms of , VSWR, group delay and radiation pattern. Keywords— UWB, Body Worn Antenna, BodyCentric Communication.",
"title": ""
}
] | scidocsrr |
f90906dea9c0ba01edc93f425e6c9b1d | Deep Learning for Automated Quality Assessment of Color Fundus Images in Diabetic Retinopathy Screening | [
{
"docid": "d622cf283f27a32b2846a304c0359c5f",
"text": "Reliable verification of image quality of retinal screening images is a prerequisite for the development of automatic screening systems for diabetic retinopathy. A system is presented that can automatically determine whether the quality of a retinal screening image is sufficient for automatic analysis. The system is based on the assumption that an image of sufficient quality should contain particular image structures according to a certain pre-defined distribution. We cluster filterbank response vectors to obtain a compact representation of the image structures found within an image. Using this compact representation together with raw histograms of the R, G, and B color planes, a statistical classifier is trained to distinguish normal from low quality images. The presented system does not require any previous segmentation of the image in contrast with previous work. The system was evaluated on a large, representative set of 1000 images obtained in a screening program. The proposed method, using different feature sets and classifiers, was compared with the ratings of a second human observer. The best system, based on a Support Vector Machine, has performance close to optimal with an area under the ROC curve of 0.9968.",
"title": ""
}
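The positive passage above assesses retinal image quality by clustering filter-bank responses into a compact histogram of image structures, combining it with raw color histograms, and training a classifier. The sketch below mirrors that pipeline in a heavily simplified form on synthetic grayscale images only, using a few Gaussian and derivative filters, k-means, and a linear SVM from scikit-learn; the filter choices, cluster count, and the fake "good" and "poor" images are placeholders, not the paper's configuration.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def filterbank_responses(img):
    """Stack a few simple blur/derivative responses per pixel (a stand-in for the paper's filter bank)."""
    feats = [ndimage.gaussian_filter(img, s) for s in (1, 2)]
    feats += [ndimage.gaussian_filter(img, 2, order=(0, 1)),   # x-derivative
              ndimage.gaussian_filter(img, 2, order=(1, 0))]   # y-derivative
    return np.stack([f.ravel() for f in feats], axis=1)        # (n_pixels, n_filters)

def image_descriptor(img, kmeans):
    words = kmeans.predict(filterbank_responses(img))
    hist, _ = np.histogram(words, bins=np.arange(kmeans.n_clusters + 1))
    return hist / hist.sum()

# Synthetic "good" images contain structure (edges); "poor" ones are nearly flat noise.
good = [ndimage.gaussian_filter(rng.normal(size=(32, 32)), 1)
        + np.sin(np.linspace(0, 6, 32))[None, :] for _ in range(10)]
poor = [0.05 * rng.normal(size=(32, 32)) for _ in range(10)]
images, labels = good + poor, [1] * 10 + [0] * 10

# Learn the visual vocabulary from pooled responses, then build per-image histograms.
pooled = np.vstack([filterbank_responses(im) for im in images])
kmeans = KMeans(n_clusters=8, n_init=5, random_state=0).fit(pooled)
descriptors = np.array([image_descriptor(im, kmeans) for im in images])

clf = SVC(kernel="linear").fit(descriptors, labels)
print("training accuracy of the toy quality classifier:", clf.score(descriptors, labels))
```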
] | [
{
"docid": "bcdf411d631f822e15a0b78396dc55e7",
"text": "Exercise-induced ST-segment elevation was correlated with myocardial perfusion abnormalities and coronary artery obstruction in 35 patients. Ten patients (group 1) developed exercise ST elevation in leads without Q waves on the resting ECG. The site of ST elevation corresponded to both a reversible perfusion defect and a severely obstructed coronary artery. Associated ST-segment depression in other leads occurred in seven patients, but only one had a second perfusion defect at the site of ST depression. In three of the 10 patients, abnormal left ventricular wall motion at the site of exercise-induced ST elevation was demonstrated by ventriculography. Twenty-five patients (group 2) developed exercise ST elevation in leads with Q waves on the resting ECG. The site ofST elevation corresponded to severe coronary artery stenosis and a thallium perfusion defect that persisted on the 4-hour scan (constant in 12 patients, decreased in 13). Associated ST depression in other leads occurred in 11 patients and eight (73%) had a second perfusion defect at the site of ST depression. In all 25 patients with previous transmural infarction, abnormal left ventricular wall motion at the site of the Q waves was shown by ventriculography. In patients without previous myocardial infarction, the site of exercise-induced ST-segment elevation indicates the site of severe transient myocardial ischemia, and associated ST depression is usually reciprocal. In patients with Q waves on the resting ECG, exercise ST elevation way be due to peri-infarctional ischemia, abnormal ventricular wall motion or both. Exercise ST-segment depression may be due to a second area of myocardial ischemia rather than being reciprocal to ST elevation.",
"title": ""
},
{
"docid": "65500c886a91a58ac95365c1e8539902",
"text": "This introductory overview tutorial on social network analysis (SNA) demonstrates through theory and practical case studies applications to research, particularly on social media, digital interaction and behavior records. NodeXL provides an entry point for non-programmers to access the concepts and core methods of SNA and allows anyone who can make a pie chart to now build, analyze and visualize complex networks.",
"title": ""
},
{
"docid": "488c52d028d18227f456cb3383784d05",
"text": "For smart grid execution, one of the most important requirements is fast, precise, and efficient synchronized measurements, which are possible by phasor measurement unit (PMU). To achieve fully observable network with the least number of PMUs, optimal placement of PMU (OPP) is crucial. In trying to achieve OPP, priority may be given at critical buses, generator buses, or buses that are meant for future extension. Also, different applications will have to be kept in view while prioritizing PMU placement. Hence, OPP with multiple solutions (MSs) can offer better flexibility for different placement strategies as it can meet the best solution based on the requirements. To provide MSs, an effective exponential binary particle swarm optimization (EBPSO) algorithm is developed. In this algorithm, a nonlinear inertia-weight-coefficient is used to improve the searching capability. To incorporate previous position of particle, two innovative mathematical equations that can update particle's position are formulated. For quick and reliable convergence, two useful filtration techniques that can facilitate MSs are applied. Single mutation operator is conditionally applied to avoid stagnation. The EBPSO algorithm is so developed that it can provide MSs for various practical contingencies, such as single PMU outage and single line outage for different systems.",
"title": ""
},
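The PMU abstract above searches for placements that make the grid fully observable with as few devices as possible. The code below is only a greedy set-cover baseline for that observability requirement on a small invented bus network (a PMU at a bus observes the bus and its neighbours); it is deliberately not the EBPSO algorithm from the record, and the topology is hypothetical.

```python
# Adjacency of a small hypothetical bus network (bus -> neighbouring buses).
adjacency = {
    1: {2, 3}, 2: {1, 4}, 3: {1, 4, 5},
    4: {2, 3, 6}, 5: {3, 7}, 6: {4, 7}, 7: {5, 6},
}

def observed_by(bus):
    """A PMU at `bus` observes the bus itself and all directly connected buses."""
    return {bus} | adjacency[bus]

def greedy_pmu_placement(adjacency):
    unobserved = set(adjacency)
    placement = []
    while unobserved:
        # Pick the bus whose PMU covers the most still-unobserved buses.
        best = max(adjacency, key=lambda b: len(observed_by(b) & unobserved))
        placement.append(best)
        unobserved -= observed_by(best)
    return placement

pmus = greedy_pmu_placement(adjacency)
print("greedy placement:", pmus)
print("fully observable:",
      set().union(*(observed_by(b) for b in pmus)) == set(adjacency))
```

A metaheuristic such as the binary PSO described in the record would instead search over candidate placements and typically returns multiple equally small solutions.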
{
"docid": "ec332042fb49c5628ea2398e185bb369",
"text": "This paper describes a least squares (LS) channel estimation scheme for multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems based on pilot tones. We first compute the mean square error (MSE) of the LS channel estimate. We then derive optimal pilot sequences and optimal placement of the pilot tones with respect to this MSE. It is shown that the optimal pilot sequences are equipowered, equispaced, and phase shift orthogonal. To reduce the training overhead, an LS channel estimation scheme over multiple OFDM symbols is also discussed. Moreover, to enhance channel estimation, a recursive LS (RLS) algorithm is proposed, for which we derive the optimal forgetting or tracking factor. This factor is found to be a function of both the noise variance and the channel Doppler spread. Through simulations, it is shown that the optimal pilot sequences derived in this paper outperform both the orthogonal and random pilot sequences. It is also shown that a considerable gain in signal-to-noise ratio (SNR) can be obtained by using the RLS algorithm, especially in slowly time-varying channels.",
"title": ""
},
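The abstract above builds on the least-squares channel estimate for pilot-based MIMO-OFDM. As a minimal numerical sketch of the LS step itself (not of the optimal pilot design or the RLS tracking discussed in the record), the code below estimates a flat MIMO channel from known pilot symbols by solving Y = H X + N in the least-squares sense; the dimensions, pilot alphabet, and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx, n_pilots = 2, 2, 16

# Known pilot matrix X (n_tx x n_pilots) and a random flat MIMO channel H (n_rx x n_tx).
X = (rng.choice([-1, 1], size=(n_tx, n_pilots))
     + 1j * rng.choice([-1, 1], size=(n_tx, n_pilots))) / np.sqrt(2)
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)

# Received pilots with additive noise.
noise = 0.05 * (rng.normal(size=(n_rx, n_pilots)) + 1j * rng.normal(size=(n_rx, n_pilots)))
Y = H @ X + noise

# Least-squares estimate: H_ls = Y X^H (X X^H)^(-1)
H_ls = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

mse = np.mean(np.abs(H - H_ls) ** 2)
print("per-entry MSE of the LS channel estimate: %.2e" % mse)
```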
{
"docid": "0e672586c4be2e07c3e794ed1bb3443d",
"text": "In this thesis, the multi-category dataset has been incorporated with the robust feature descriptor using the scale invariant feature transform (SIFT), SURF and FREAK along with the multi-category enabled support vector machine (mSVM). The multi-category support vector machine (mSVM) has been designed with the iterative phases to make it able to work with the multi-category dataset. The mSVM represents the training samples of main class as the primary class in every iterative phase and all other training samples are categorized as the secondary class for the support vector machine classification. The proposed model is made capable of working with the variations in the indoor scene image dataset, which are noticed in the form of the color, texture, light, image orientation, occlusion and color illuminations. Several experiments have been conducted over the proposed model for the performance evaluation of the indoor scene recognition system in the proposed model. The results of the proposed model have been obtained in the form of the various performance parameters of statistical errors, precision, recall, F1-measure and overall accuracy. The proposed model has clearly outperformed the existing models in the terms of the overall accuracy. The proposed model improvement has been recorded higher than ten percent for all of the evaluated parameters against the existing models based upon SURF, FREAK, etc.",
"title": ""
},
{
"docid": "cbcb20173f4e012253c51020932e75a6",
"text": "We investigate methods for combining multiple selfsupervised tasks—i.e., supervised tasks where data can be collected without manual labeling—in order to train a single visual representation. First, we provide an apples-toapples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for “harmonizing” network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks—even via a na¨ýve multihead architecture—always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.",
"title": ""
},
{
"docid": "019ee0840b91f97a3acc3411edadcade",
"text": "Despite the many solutions proposed by industry and the research community to address phishing attacks, this problem continues to cause enormous damage. Because of our inability to deter phishing attacks, the research community needs to develop new approaches to anti-phishing solutions. Most of today's anti-phishing technologies focus on automatically detecting and preventing phishing attacks. While automation makes anti-phishing tools user-friendly, automation also makes them suffer from false positives, false negatives, and various practical hurdles. As a result, attackers often find simple ways to escape automatic detection.\n This paper presents iTrustPage - an anti-phishing tool that does not rely completely on automation to detect phishing. Instead, iTrustPage relies on user input and external repositories of information to prevent users from filling out phishing Web forms. With iTrustPage, users help to decide whether or not a Web page is legitimate. Because iTrustPage is user-assisted, iTrustPage avoids the false positives and the false negatives associated with automatic phishing detection. We implemented iTrustPage as a downloadable extension to FireFox. After being featured on the Mozilla website for FireFox extensions, iTrustPage was downloaded by more than 5,000 users in a two week period. We present an analysis of our tool's effectiveness and ease of use based on our examination of usage logs collected from the 2,050 users who used iTrustPage for more than two weeks. Based on these logs, we find that iTrustPage disrupts users on fewer than 2% of the pages they visit, and the number of disruptions decreases over time.",
"title": ""
},
{
"docid": "3b45dbcb526574cc77f3a099b5a97cd9",
"text": "In this paper, we exploit a new multi-country historical dataset on public (government) debt to search for a systemic relationship between high public debt levels, growth and inflation. Our main result is that whereas the link between growth and debt seems relatively weak at “normal” debt levels, median growth rates for countries with public debt over roughly 90 percent of GDP are about one percent lower than otherwise; average (mean) growth rates are several percent lower. Surprisingly, the relationship between public debt and growth is remarkably similar across emerging markets and advanced economies. This is not the case for inflation. We find no systematic relationship between high debt levels and inflation for advanced economies as a group (albeit with individual country exceptions including the United States). By contrast, in emerging market countries, high public debt levels coincide with higher inflation. Our topic would seem to be a timely one. Public debt has been soaring in the wake of the recent global financial maelstrom, especially in the epicenter countries. This should not be surprising, given the experience of earlier severe financial crises. Outsized deficits and epic bank bailouts may be useful in fighting a downturn, but what is the long-run macroeconomic impact,",
"title": ""
},
{
"docid": "9cb13d599da25991d11d276aaa76a005",
"text": "We propose a quasi real-time method for discrimination of ventricular ectopic beats from both supraventricular and paced beats in the electrocardiogram (ECG). The heartbeat waveforms were evaluated within a fixed-length window around the fiducial points (100 ms before, 450 ms after). Our algorithm was designed to operate with minimal expert intervention and we define that the operator is required only to initially select up to three ‘normal’ heartbeats (the most frequently seen supraventricular or paced complexes). These were named original QRS templates and their copies were substituted continuously throughout the ECG analysis to capture slight variations in the heartbeat waveforms of the patient’s sustained rhythm. The method is based on matching of the evaluated heartbeat with the QRS templates by a complex set of ECG descriptors, including maximal cross-correlation, area difference and frequency spectrum difference. Temporal features were added by analyzing the R-R intervals. The classification criteria were trained by statistical assessment of the ECG descriptors calculated for all heartbeats in MIT-BIH Supraventricular Arrhythmia Database. The performance of the classifiers was tested on the independent MIT-BIH Arrhythmia Database. The achieved unbiased accuracy is represented by sensitivity of 98.4% and specificity of 98.86%, both being competitive to other published studies. The provided computationally efficient techniques enable the fast post-recording analysis of lengthy Holter-monitor ECG recordings, as well as they can serve as a quasi real-time detection method embedded into surface ECG monitors.",
"title": ""
},
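The ECG paper above matches each beat against a small set of 'normal' QRS templates using descriptors such as maximal cross-correlation. The snippet below shows only that matching primitive on synthetic waveforms: the normalized cross-correlation between a fixed-length beat window and a template, followed by a simple threshold decision. The waveforms and the 0.9 threshold are illustrative and are not the paper's tuned values or its full descriptor set.

```python
import numpy as np

def normalized_xcorr(a, b):
    """Peak normalized cross-correlation between two equal-length windows (close to 1 for similar shapes)."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.max(np.correlate(a, b, mode="full")) / len(a))

t = np.linspace(-1, 1, 200)
template = np.exp(-(t / 0.08) ** 2)                 # idealized narrow QRS-like pulse
normal_beat = (np.exp(-((t - 0.02) / 0.08) ** 2)
               + 0.02 * np.random.default_rng(0).normal(size=t.size))
ectopic_beat = np.exp(-(t / 0.35) ** 2)             # wider, morphologically different complex

for name, beat in [("normal-looking beat", normal_beat),
                   ("wide ectopic-looking beat", ectopic_beat)]:
    r = normalized_xcorr(beat, template)
    label = "matches template" if r > 0.9 else "does not match template"
    print("%s: correlation %.2f -> %s" % (name, r, label))
```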
{
"docid": "ba2a9451fa1f794c7a819acaa9bc5d82",
"text": "In this paper we briefly address DLR’s (German Aerospace Center) background in space robotics by hand of corresponding milestone projects including systems on the International Space Station. We then discuss the key technologies needed for the development of an artificial “robonaut” generation with mechatronic ultra-lightweight arms and multifingered hands. The third arm generation is nearly finished now, approaching the limits of what is technologically achievable today with respect to light weight and power losses. In a similar way DLR’s second generation of artificial four-fingered hands was a big step towards higher reliability, manipulability and overall",
"title": ""
},
{
"docid": "8a6955ee53b9920a7c192143557ddf44",
"text": "C utaneous metastases rarely develop in patients having cancer with solid tumors. The reported incidence of cutaneous metastases from a known primary malignancy ranges from 0.6% to 9%, usually appearing 2 to 3 years after the initial diagnosis.1-11 Skin metastases may represent the first sign of extranodal disease in 7.6% of patients with a primary oncologic diagnosis.1 Cutaneous metastases may also be the first sign of recurrent disease after treatment, with 75% of patients also having visceral metastases.2 Infrequently, cutaneous metastases may be seen as the primary manifestation of an undiagnosed malignancy.12 Prompt recognition of such tumors can be of great significance, affecting prognosis and management. The initial presentation of cutaneous metastases is frequently subtle and may be overlooked without proper index of suspicion, appearing as multiple or single nodules, plaques, and ulcers, in decreasing order of frequency. Commonly, a painless, mobile, erythematous papule is initially noted, which may enlarge to an inflammatory nodule over time.8 Such lesions may be misdiagnosed as cysts, lipomas, fibromas, or appendageal tumors. Clinical features of cutaneous metastases rarely provide information regarding the primary tumor, although the location of the tumor may be helpful because cutaneous metastases typically manifest in the same geographic region as the initial cancer. The most common primary tumors seen with cutaneous metastases are melanoma, breast, and squamous cell carcinoma of the head and neck.1 Cutaneous metastases are often firm, because of dermal or lymphatic involvement, or erythematous. These features may help rule out some nonvascular entities in the differential diagnosis (eg, cysts and fibromas). The presence of pigment most commonly correlates with cutaneous metastases from melanoma. Given the limited body of knowledge regarding distinct clinical findings, we sought to better elucidate the dermoscopic patterns of cutaneous metastases, with the goal of using this diagnostic tool to help identify these lesions. We describe 20 outpatients with biopsy-proven cutaneous metastases secondary to various underlying primary malignancies. Their clinical presentation is reviewed, emphasizing the dermoscopic findings, as well as the histopathologic correlation.",
"title": ""
},
{
"docid": "855a8cfdd9d01cd65fe32d18b9be4fdf",
"text": "Interest in business intelligence and analytics education has begun to attract IS scholars’ attention. In order to discover new research questions, there is a need for conducting a literature review of extant studies on BI&A education. This study identified 44 research papers through using Google Scholar related to BI&A education. This research contributes to the field of BI&A education by (a) categorizing the existing studies on BI&A education into the key five research foci, and (b) identifying the research gaps and providing the guide for future BI&A and IS research.",
"title": ""
},
{
"docid": "f0532446a19fb2fa28a7a01cddca7e37",
"text": "The use of rumble strips on roads can provide drivers lane departure warning (LDW). However, rumble strips require an infrastructure and do not exist on a majority of roadways. Therefore, it is very desirable to have an effective in-vehicle LDW system to detect when the driver is in danger of departing the road and then triggers an alarm to warn the driver early enough to take corrective action. This paper presents the development of an image-based LDW system using the Lucas-Kanade (L-K) optical flow and the Hough transform methods. Our approach integrates both techniques to establish an operation algorithm to determine whether a warning signal should be issued based on the status of the vehicle deviating from its heading lane. The L-K optical flow tracking is used when the lane boundaries cannot be detected, while the lane detection technique is used when they become available. Even though both techniques are used in the system, only one method is activated at any given time because each technique has its own advantages and also disadvantages. The developed LDW system was road tested on several rural highways and also one section of the interstate I35 freeway. Overall, the system operates correctly as expected with a false alarm occurred only roughly about 1.18% of the operation time. This paper presents the system implementation together with our findings. Key-Words: Lane departure warning, Lucas-Kanade optical flow, Hough transform.",
"title": ""
},
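The lane-departure abstract above combines Lucas-Kanade optical-flow tracking with Hough-transform lane detection. The fragment below illustrates only the lane-detection half with OpenCV on a synthetic frame (Canny edges followed by a probabilistic Hough transform); the thresholds and the drawn lane markings are invented, OpenCV (cv2) is assumed to be installed, and the L-K tracking and warning logic are not shown.

```python
import numpy as np
import cv2

# Synthetic road-like frame: dark background with two bright lane markings.
frame = np.zeros((240, 320), dtype=np.uint8)
cv2.line(frame, (60, 239), (140, 100), color=255, thickness=3)    # left lane boundary
cv2.line(frame, (260, 239), (180, 100), color=255, thickness=3)   # right lane boundary

# Edge map followed by a probabilistic Hough transform to recover the boundaries.
edges = cv2.Canny(frame, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=60, maxLineGap=10)

for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    # Image y grows downward, so a negative slope here corresponds to the left boundary.
    slope = (y2 - y1) / (x2 - x1 + 1e-6)
    side = "left" if slope < 0 else "right"
    print("detected %s-lane segment from (%d, %d) to (%d, %d)" % (side, x1, y1, x2, y2))
```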
{
"docid": "65b64f338b0126151a5e8dbcd4a9cf33",
"text": "This free executive summary is provided by the National Academies as part of our mission to educate the world on issues of science, engineering, and health. If you are interested in reading the full book, please visit us online at http://www.nap.edu/catalog/9728.html . You may browse and search the full, authoritative version for free; you may also purchase a print or electronic version of the book. If you have questions or just want more information about the books published by the National Academies Press, please contact our customer service department toll-free at 888-624-8373.",
"title": ""
},
{
"docid": "502cae1daa2459ed0f826ed3e20c44e4",
"text": "Recurrent neural networks (RNNs) have drawn interest from machine learning researchers because of their effectiveness at preserving past inputs for time-varying data processing tasks. To understand the success and limitations of RNNs, it is critical that we advance our analysis of their fundamental memory properties. We focus on echo state networks (ESNs), which are RNNs with simple memoryless nodes and random connectivity. In most existing analyses, the short-term memory (STM) capacity results conclude that the ESN network size must scale linearly with the input size for unstructured inputs. The main contribution of this paper is to provide general results characterizing the STM capacity for linear ESNs with multidimensional input streams when the inputs have common low-dimensional structure: sparsity in a basis or significant statistical dependence between inputs. In both cases, we show that the number of nodes in the network must scale linearly with the information rate and poly-logarithmically with the input dimension. The analysis relies on advanced applications of random matrix theory and results in explicit non-asymptotic bounds on the recovery error. Taken together, this analysis provides a significant step forward in our understanding of the STM properties in RNNs.",
"title": ""
},
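The ESN passage above analyzes how many reservoir nodes are needed to recall recent inputs. The code below is a compact empirical companion rather than the paper's analysis: a random echo state network (tanh nodes here, whereas the record analyzes linear ones) is driven by a scalar input stream, and a ridge-regression readout is trained to reconstruct the input from k steps ago. The reservoir size, spectral radius, delay, and regularization are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, T, delay, washout = 100, 2000, 5, 50

# Random reservoir scaled to a spectral radius below 1 (a common echo-state heuristic).
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.normal(size=n_res)

u = rng.uniform(-1, 1, size=T)              # scalar input stream
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])        # reservoir update
    states[t] = x

# Ridge-regression readout trained to output u[t - delay].
S = states[washout:]
y = u[washout - delay:T - delay]
readout = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
pred = S @ readout
print("correlation between prediction and delayed input: %.3f" % np.corrcoef(pred, y)[0, 1])
```

Sweeping the delay while recording this correlation gives an empirical short-term-memory curve for a given reservoir size.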
{
"docid": "53d07bc7229500295741491aea15f63a",
"text": "Unhealthy lifestyle behaviour is driving an increase in the burden of chronic non-communicable diseases worldwide. Recent evidence suggests that poor diet and a lack of exercise contribute to the genesis and course of depression. While studies examining dietary improvement as a treatment strategy in depression are lacking, epidemiological evidence clearly points to diet quality being of importance to the risk of depression. Exercise has been shown to be an effective treatment strategy for depression, but this is not reflected in treatment guidelines, and increased physical activity is not routinely encouraged when managing depression in clinical practice. Recommendations regarding dietary improvement, increases in physical activity and smoking cessation should be routinely given to patients with depression. Specialised and detailed advice may not be necessary. Recommendations should focus on following national guidelines for healthy eating and physical activity.",
"title": ""
},
{
"docid": "3d2e82a0353d0b2803a579c413403338",
"text": "In 1994, nutritional facts panels became mandatory for processed foods to improve consumer access to nutritional information and to promote healthy food choices. Recent applied work is reviewed here in terms of how consumers value and respond to nutritional labels. We first summarize the health and nutritional links found in the literature and frame this discussion in terms of the obesity policy debate. Second, we discuss several approaches that have been used to empirically investigate consumer responses to nutritional labels: (a) surveys, (b) nonexperimental approaches utilizing revealed preferences, and (c) experimentbased approaches. We conclude with a discussion and suggest avenues of future research. INTRODUCTION How the provision of nutritional information affects consumers’ food choices and whether consumers value nutritional information are particularly pertinent questions in a country where obesity is pervasive. Firms typically have more information about the quality of their products than do consumers, creating a situation of asymmetric information. It is prohibitively costly for most consumers to acquire nutritional information independently of firms. Firms can use this Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 1 of 30 information to signal their quality and to receive quality premiums. However, firms that sell less nutritious products prefer to omit nutritional information. In this market setting, firms may not have an incentive to fully reveal their product quality, may try to highlight certain attributes in their advertising claims while shrouding others (Gabaix & Laibson 2006), or may provide information in a less salient fashion (Chetty et al. 2007). Mandatory nutritional labeling can fill this void of information provision by correcting asymmetric information and transforming an experience-good or a credence-good characteristic into search-good characteristics (Caswell & Mojduszka 1996). Golan et al. (2000) argue that the effectiveness of food labeling depends on firms’ incentives for information provision, government information requirements, and the role of third-party entities in standardizing and certifying the accuracy of the information. Yet nutritional information is valuable only if consumers use it in some fashion. Early advances in consumer choice theory, such as market goods possessing desirable characteristics (Lancaster 1966) or market goods used in conjunction with time to produce desirable commodities (Becker 1965), set the theoretical foundation for studying how market prices, household characteristics, incomes, nutrient content, and taste considerations interact with and influence consumer choice. LaFrance (1983) develops a theoretical framework and estimates the marginal value of nutrient versus taste parameters in an analytical approach that imposes a sufficient degree of restrictions to generality to be empirically feasible. Real or perceived tradeoffs between nutritional and taste or pleasure considerations imply that consumers will not necessarily make healthier choices. Reduced search costs mean that consumers can more easily make choices that maximize their utility. Foster & Just (1989) provide a framework in which to analyze the effect of information on consumer choice and welfare in this context. 
They argue that Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 2 of 30 when consumers are uncertain about product quality, the provision of information can help to better align choices with consumer preferences. However, consumers may not use nutritional labels because consumers still require time and effort to process the information. Reading a nutritional facts panel (NFP), for instance, necessitates that the consumer remove the product from the shelf and turn the product to read the nutritional information on the back or side. In addition, consumers often have difficulty evaluating the information provided on the NFP or how to relate it to a healthy diet. Berning et al. (2008) present a simple model of demand for nutritional information. The consumer chooses to consume goods and information to maximize utility subject to budget and time constraints, which include time to acquire and to process nutritional information. Consumers who have strong preferences for nutritional content will acquire more nutritional information. Alternatively, other consumers may derive more utility from appearance or taste. Following Becker & Murphy (1993), Berning et al. show that nutritional information may act as a complement to the consumption of products with unknown nutritional quality, similar to the way advertisements complement advertised goods. From a policy perspective, the rise in the U.S. obesity rate coupled with the asymmetry of information have resulted in changes in the regulatory environment. The U.S. Food and Drug Administration (FDA) is currently considering a change to the format and content of nutritional labels, originally implemented in 1994 to promote increased label use. Consumers’ general understanding of the link between food consumption and health, and widespread interest in the provision of nutritional information on food labels, is documented in the existing literature (e.g., Williams 2005, Grunert & Wills 2007). Yet only approximately half Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 3 of 30 of consumers claim to use NFPs when making food purchasing decisions (Blitstein & Evans 2006). Moreover, self-reported consumer use of nutritional labels has declined from 1995 to 2006, with the largest decline for younger age groups (20–29 years) and less educated consumers (Todd & Variyam 2008). This decline supports research findings that consumers prefer for short front label claims over the NFP’s lengthy back label explanations (e.g., Levy & Fein 1998, Wansink et al. 2004, Williams 2005, Grunert & Wills 2007). Furthermore, regulatory rules and enforcement policies may have induced firms to move away from reinforcing nutritional claims through advertising (e.g., Ippolito & Pappalardo 2002). Finally, critical media coverage of regulatory challenges (e.g., Nestle 2000) may have contributed to decreased labeling usage over time. Excellent review papers on this topic preceded and inspired this present review (e.g., Baltas 2001, Williams 2005, Drichoutis et al. 2006). In particular, Drichoutis et al. 
(2006) reviews the nutritional labeling literature and addresses specific issues regarding the determinants of label use, the debate on mandatory labeling, label formats preferred by consumers, and the effect of nutritional label use on purchase and dietary behavior. The current review article updates and complements these earlier reviews by focusing on recent work and highlighting major contributions in applied analyses on how consumers value, utilize, and respond to nutritional labels. We first cover the health and nutritional aspects of consumer food choices found in the literature to frame the discussion on nutritional labels in the context of the recent debate on obesity prevention policies. Second, we discuss the different empirical approaches that are utilized to investigate consumers’ response to and valuation of nutritional labels, classifying existing work into three categories according to the empirical strategy and data sources. First, we present findings based on consumer surveys and stated consumer responses to Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 4 of 30 labels. The second set of articles reviewed utilizes nonexperimental data and focuses on estimating consumer valuation of labels on the basis of revealed preferences. Here, the empirical strategy is structural, using hedonic methods, structural demand analyses, or discrete choice models and allowing for estimation of consumers’ willingness to pay (WTP) for nutritional information. The last set of empirical contributions discussed is based on experimental data, differentiating market-level and natural experiments from laboratory evidence. These studies employ mainly reduced-form approaches. Finally, we conclude with a discussion of avenues for future research. CONSUMER FOOD DEMAND, NUTRITIONAL LABELS, AND OBESITY PREVENTION The U.S. Department of Health and Public Services declared the reduction of obesity rates to less than 15% to be one of the national health objectives for 2010, yet in 2009 no state met these targets, with only two states reporting obesity rates less than 20% (CDC 2010). Researchers have studied and identified many contributing factors, such as the decreasing relative price of caloriedense food (Chou et al. 2004) and marketing practices that took advantage of behavioral reactions to food (Smith 2004). Other researchers argue that an increased prevalence of fast food (Cutler et al. 2003) and increased portion sizes in restaurants and at home (Wansink & van Ittersum 2007) may be the driving factors of increased food consumption. In addition, food psychologists have focused on changes in the eating environment, pointing to distractions such as television, books, conversation with others, or preoccupation with work as leading to increased food intake (Wansink 2004). Although each of these factors potentially contributes to the obesity epidemic, they do not necessarily mean that consumers wi",
"title": ""
},
{
"docid": "57bac865d79700350e3b1f2fe9f7a2f7",
"text": "This paper presents a novel neural machine translation model which jointly learns translation and source-side latent graph representations of sentences. Unlike existing pipelined approaches using syntactic parsers, our end-to-end model learns a latent graph parser as part of the encoder of an attention-based neural machine translation model, and thus the parser is optimized according to the translation objective. In experiments, we first show that our model compares favorably with state-of-the-art sequential and pipelined syntax-based NMT models. We also show that the performance of our model can be further improved by pretraining it with a small amount of treebank annotations. Our final ensemble model significantly outperforms the previous best models on the standard Englishto-Japanese translation dataset.",
"title": ""
},
{
"docid": "c98bf9bf53f39ba1cf5ff97ed7c9d0a3",
"text": "The problem of detecting community structures of a social network has been extensively studied over recent years, but most existing methods solely rely on the network structure and neglect the context information of the social relations. The main reason is that a context-rich network offers too much flexibility and complexity for automatic or manual modulation of the multifaceted context in the analysis process. We address the challenging problem of incorporating context information into the community analysis with a novel visual analysis mechanism. Our approach consists of two stages: interactive discovery of salient context, and iterative context-guided community detection. Central to the analysis process is a context relevance model (CRM) that visually characterizes the influence of a given set of contexts on the variation of the detected communities, and discloses the community structure in specific context configurations. The extracted relevance is used to drive an iterative visual reasoning process, in which the community structures are progressively discovered. We introduce a suite of visual representations to encode the community structures, the context as well as the CRM. In particular, we propose an enhanced parallel coordinates representation to depict the context and community structures, which allows for interactive data exploration and community investigation. Case studies on several datasets demonstrate the efficiency and accuracy of our approach.",
"title": ""
},
{
"docid": "41de353ad7e48d5f354893c6045394e2",
"text": "This paper proposes a long short-term memory recurrent neural network (LSTM-RNN) for extracting melody and simultaneously detecting regions of melody from polyphonic audio using the proposed harmonic sum loss. The previous state-of-the-art algorithms have not been based on machine learning techniques and certainly not on deep architectures. The harmonics structure in melody is incorporated in the loss function to attain robustness against both octave mismatch and interference from background music. Experimental results show that the performance of the proposed method is better than or comparable to other state-of-the-art algorithms.",
"title": ""
}
] | scidocsrr |
d0e0ba0e3ed70b12b352235199356bde | Hierarchical target type identification for entity-oriented queries | [
{
"docid": "f3a531c1979e1a179cc97c15a329d100",
"text": "This paper addresses the problem of Named Entity Recognition in Query (NERQ), which involves detection of the named entity in a given query and classification of the named entity into predefined classes. NERQ is potentially useful in many applications in web search. The paper proposes taking a probabilistic approach to the task using query log data and Latent Dirichlet Allocation. We consider contexts of a named entity (i.e., the remainders of the named entity in queries) as words of a document, and classes of the named entity as topics. The topic model is constructed by a novel and general learning method referred to as WS-LDA (Weakly Supervised Latent Dirichlet Allocation), which employs weakly supervised learning (rather than unsupervised learning) using partially labeled seed entities. Experimental results show that the proposed method based on WS-LDA can accurately perform NERQ, and outperform the baseline methods.",
"title": ""
},
{
"docid": "c7741eed703b0b896b58d272cd1a19fe",
"text": "In this paper, we propose a novel unsupervised approach to query segmentation, an important task in Web search. We use a generative query model to recover a query's underlying concepts that compose its original segmented form. The model's parameters are estimated using an expectation-maximization (EM) algorithm, optimizing the minimum description length objective function on a partial corpus that is specific to the query. To augment this unsupervised learning, we incorporate evidence from Wikipedia.\n Experiments show that our approach dramatically improves performance over the traditional approach that is based on mutual information, and produces comparable results with a supervised method. In particular, the basic generative language model contributes a 7.4% improvement over the mutual information based method (measured by segment F1 on the Intersection test set). EM optimization further improves the performance by 14.3%. Additional knowledge from Wikipedia provides another improvement of 24.3%, adding up to a total of 46% improvement (from 0.530 to 0.774).",
"title": ""
},
{
"docid": "aaf110cdf2a8ce96756c2ef0090d6e54",
"text": "The heterogeneous Web exacerbates IR problems and short user queries make them worse. The contents of web documents are not enough to find good answer documents. Link information and URL information compensates for the insufficiencies of content information. However, static combination of multiple evidences may lower the retrieval performance. We need different strategies to find target documents according to a query type. We can classify user queries as three categories, the topic relevance task, the homepage finding task, and the service finding task. In this paper, a user query classification scheme is proposed. This scheme uses the difference of distribution, mutual information, the usage rate as anchor texts, and the POS information for the classification. After we classified a user query, we apply different algorithms and information for the better results. For the topic relevance task, we emphasize the content information, on the other hand, for the homepage finding task, we emphasize the Link information and the URL information. We could get the best performance when our proposed classification method with the OKAPI scoring algorithm was used.",
"title": ""
}
] | [
{
"docid": "d7aac1208aa2ef63ed9a4ef5b67d8017",
"text": "We contrast two theoretical approaches to social influence, one stressing interpersonal dependence, conceptualized as normative and informational influence (Deutsch & Gerard, 1955), and the other stressing group membership, conceptualized as self-categorization and referent informational influence (Turner, Hogg, Oakes, Reicher & Wetherell, 1987). We argue that both social comparisons to reduce uncertainty and the existence of normative pressure to comply depend on perceiving the source of influence as belonging to one's own category. This study tested these two approaches using three influence paradigms. First we demonstrate that, in Sherif's (1936) autokinetic effect paradigm, the impact of confederates on the formation of a norm decreases as their membership of a different category is made more salient to subjects. Second, in the Asch (1956) conformity paradigm, surveillance effectively exerts normative pressure if done by an in-group but not by an out-group. In-group influence decreases and out-group influence increases when subjects respond privately. Self-report data indicate that in-group confederates create more subjective uncertainty than out-group confederates and public responding seems to increase cohesiveness with in-group - but decrease it with out-group - sources of influence. In our third experiment we use the group polarization paradigm (e.g. Burnstein & Vinokur, 1973) to demonstrate that, when categorical differences between two subgroups within a discussion group are made salient, convergence of opinion between the subgroups is inhibited. Taken together the experiments show that self-categorization can be a crucial determining factor in social influence.",
"title": ""
},
{
"docid": "e5b73193158b98a536d2d296e816c325",
"text": "We use a low-dimensional linear model to describe the user rating matrix in a recommendation system. A non-negativity constraint is enforced in the linear model to ensure that each user’s rating profile can be represented as an additive linear combination of canonical coordinates. In order to learn such a constrained linear model from an incomplete rating matrix, we introduce two variations on Non-negative Matrix Factorization (NMF): one based on the Expectation-Maximization (EM) procedure and the other a Weighted Nonnegative Matrix Factorization (WNMF). Based on our experiments, the EM procedure converges well empirically and is less susceptible to the initial starting conditions than WNMF, but the latter is much more computationally efficient. Taking into account the advantages of both algorithms, a hybrid approach is presented and shown to be effective in real data sets. Overall, the NMF-based algorithms obtain the best prediction performance compared with other popular collaborative filtering algorithms in our experiments; the resulting linear models also contain useful patterns and features corresponding to user communities.",
"title": ""
},
{
"docid": "588a4eccb49bf0edf45456319b6d8ee4",
"text": "The VIENNA rectifiers have advantages of high efficiency as well as low output harmonics and are widely utilized in power conversion system when dc power sources are needed for supplying dc loads. VIENNA rectifiers based on three-phase/level can provide two voltage outputs with a neutral line at relatively low costs. However, total harmonic distortion (THD) of input current deteriorates seriously when unbalanced voltages occur. In addition, voltage outputs depend on system parameters, especially multiple loads. Therefore, unbalance output voltage controller and modified carrier-based pulse-width modulation (CBPWM) are proposed in this paper to solve the above problems. Unbalanced output voltage controller is designed based on average model considering independent output voltage and loads conditions. Meanwhile, reference voltages are modified according to different neutral point voltage conditions. The simulation and experimental results are presented to verify the proposed method.",
"title": ""
},
{
"docid": "b784ff4a0e4458d19482d6715454f63d",
"text": "We address two questions for training a convolutional neural network (CNN) for hyperspectral image classification: i) is it possible to build a pre-trained network? and ii) is the pretraining effective in furthering the performance? To answer the first question, we have devised an approach that pre-trains a network on multiple source datasets that differ in their hyperspectral characteristics and fine-tunes on a target dataset. This approach effectively resolves the architectural issue that arises when transferring meaningful information between the source and the target networks. To answer the second question, we carried out several ablation experiments. Based on the experimental results, a network trained from scratch performs as good as a network fine-tuned from a pre-trained network. However, we observed that pre-training the network has its own advantage in achieving better performances when deeper networks are required.",
"title": ""
},
{
"docid": "1e176f66a29b6bd3dfce649da1a4db9d",
"text": "In just a few years, crowdsourcing markets like Mechanical Turk have become the dominant mechanism for for building \"gold standard\" datasets in areas of computer science ranging from natural language processing to audio transcription. The assumption behind this sea change - an assumption that is central to the approaches taken in hundreds of research projects - is that crowdsourced markets can accurately replicate the judgments of the general population for knowledge-oriented tasks. Focusing on the important domain of semantic relatedness algorithms and leveraging Clark's theory of common ground as a framework, we demonstrate that this assumption can be highly problematic. Using 7,921 semantic relatedness judgements from 72 scholars and 39 crowdworkers, we show that crowdworkers on Mechanical Turk produce significantly different semantic relatedness gold standard judgements than people from other communities. We also show that algorithms that perform well against Mechanical Turk gold standard datasets do significantly worse when evaluated against other communities' gold standards. Our results call into question the broad use of Mechanical Turk for the development of gold standard datasets and demonstrate the importance of understanding these datasets from a human-centered point-of-view. More generally, our findings problematize the notion that a universal gold standard dataset exists for all knowledge tasks.",
"title": ""
},
{
"docid": "62d1574e23fcf07befc54838ae2887c1",
"text": "Digital images are widely used and numerous application in different scientific fields use digital image processing algorithms where image segmentation is a common task. Thresholding represents one technique for solving that task and Kapur's and Otsu's methods are well known criteria often used for selecting thresholds. Finding optimal threshold values represents a hard optimization problem and swarm intelligence algorithms have been successfully used for solving such problems. In this paper we adjusted recent elephant herding optimization algorithm for multilevel thresholding by Kapur's and Otsu's method. Performance was tested on standard benchmark images and compared with four other swarm intelligence algorithms. Elephant herding optimization algorithm outperformed other approaches from literature and it was more robust.",
"title": ""
},
{
"docid": "6a993cdfbb701b43bb1cf287380e5b2e",
"text": "There is a growing need for real-time human pose estimation from monocular RGB images in applications such as human computer interaction, assisted living, video surveillance, people tracking, activity recognition and motion capture. For the task, depth sensors and multi-camera systems are usually more expensive and difficult to set up than conventional RGB video cameras. Recent advances in convolutional neural network research have allowed to replace of traditional methods with more efficient convolutional neural network based methods in many computer vision tasks. This thesis presents a method for real-time multi-person human pose estimation from video by utilizing convolutional neural networks. The method is aimed for use case specific applications, where good accuracy is essential and variation of the background and poses is limited. This enables to use a generic network architecture, which is both accurate and fast. The problem is divided into two phases: (1) pretraining and (2) fine-tuning. In pretraining, the network is learned with highly diverse input data from publicly available datasets, while in fine-tuning it is trained with application specific data recorded with Kinect. The method considers the whole system, including person detector, pose estimator and an automatic way to record application specific training material for fine-tuning. The method can be also thought of as a replacement for Kinect, and it can be used for higher level tasks such as gesture control, games, person tracking and action recognition.",
"title": ""
},
{
"docid": "a54f2e7a7d00cf5c9879e86009b60221",
"text": "OBJECTIVES\nThis study was aimed to compare the effectiveness of aromatherapy and acupressure massage intervention strategies on the sleep quality and quality of life (QOL) in career women.\n\n\nDESIGN\nThe randomized controlled trial experimental design was used in the present study. One hundred and thirty-two career women (24-55 years) voluntarily participated in this study and they were randomly assigned to (1) placebo (distilled water), (2) lavender essential oil (Lavandula angustifolia), (3) blended essential oil (1:1:1 ratio of L. angustifolia, Salvia sclarea, and Origanum majorana), and (4) acupressure massage groups for a 4-week treatment. The Pittsburgh Sleep Quality Index and Short Form 36 Health Survey were used to evaluate the intervention effects at pre- and postintervention.\n\n\nRESULTS\nAfter a 4-week treatment, all experimental groups (blended essential oil, lavender essential oil, and acupressure massage) showed significant improvements in sleep quality and QOL (p < 0.05). Significantly greater improvement in QOL was observed in the participants with blended essential oil treatment compared with those with lavender essential oil (p < 0.05), and a significantly greater improvement in sleep quality was observed in the acupressure massage and blended essential oil groups compared with the lavender essential oil group (p < 0.05).\n\n\nCONCLUSIONS\nThe blended essential oil exhibited greater dual benefits on improving both QOL and sleep quality compared with the interventions of lavender essential oil and acupressure massage in career women. These results suggest that aromatherapy and acupressure massage improve the sleep and QOL and may serve as the optimal means for career women to improve their sleep and QOL.",
"title": ""
},
{
"docid": "ab430da4dbaae50c2700f3bb9b1dbde5",
"text": "Visual appearance score, appearance mixture type and deformation are three important information sources for human pose estimation. This paper proposes to build a multi-source deep model in order to extract non-linear representation from these different aspects of information sources. With the deep model, the global, high-order human body articulation patterns in these information sources are extracted for pose estimation. The task for estimating body locations and the task for human detection are jointly learned using a unified deep model. The proposed approach can be viewed as a post-processing of pose estimation results and can flexibly integrate with existing methods by taking their information sources as input. By extracting the non-linear representation from multiple information sources, the deep model outperforms state-of-the-art by up to 8.6 percent on three public benchmark datasets.",
"title": ""
},
{
"docid": "c4c482cc453884d0016c442b580e3424",
"text": "PURPOSE/OBJECTIVES\nTo better understand treatment-induced changes in sexuality from the patient perspective, to learn how women manage these changes in sexuality, and to identify what information they want from nurses about this symptom.\n\n\nRESEARCH APPROACH\nQualitative descriptive methods.\n\n\nSETTING\nAn outpatient gynecologic clinic in an urban area in the southeastern United States served as the recruitment site for patients.\n\n\nPARTICIPANTS\nEight women, ages 33-69, receiving first-line treatment for ovarian cancer participated in individual interviews. Five women, ages 40-75, participated in a focus group and their status ranged from newly diagnosed to terminally ill from ovarian cancer.\n\n\nMETHODOLOGIC APPROACH\nBoth individual interviews and a focus group were conducted. Content analysis was used to identify themes that described the experience of women as they became aware of changes in their sexuality. Triangulation of approach, the researchers, and theory allowed for a rich description of the symptom experience.\n\n\nFINDINGS\nRegardless of age, women reported that ovarian cancer treatment had a detrimental impact on their sexuality and that the changes made them feel \"no longer whole.\" Mechanical changes caused by surgery coupled with hormonal changes added to the intensity and dimension of the symptom experience. Physiologic, psychological, and social factors also impacted how this symptom was experienced.\n\n\nCONCLUSIONS\nRegardless of age or relationship status, sexuality is altered by the diagnosis and treatment of ovarian cancer.\n\n\nINTERPRETATION\nNurses have an obligation to educate women with ovarian cancer about anticipated changes in their sexuality that may come from treatment.",
"title": ""
},
{
"docid": "7089c02cfebb857b809dc04589246ae0",
"text": "Context. Mobile web apps represent a large share of the Internet today. However, they still lag behind native apps in terms of user experience. Progressive Web Apps (PWAs) are a new technology introduced by Google that aims at bridging this gap, with a set of APIs known as service workers at its core. Goal. In this paper, we present an empirical study that evaluates the impact of service workers on the energy efficiency of PWAs, when operating in different network conditions on two different generations of mobile devices. Method. We designed an empirical experiment with two main factors: the use of service workers and the type of network available (2G or WiFi). We performed the experiment by running a total of 7 PWAs on two devices (an LG G2 and a Nexus 6P) that we evaluated as blocking factor. Our response variable is the energy consumption of the devices. Results. Our results show that service workers do not have a significant impact over the energy consumption of the two devices, regardless of the network conditions. Also, no interaction was detected between the two factors. However, some patterns in the data show different behaviors among PWAs. Conclusions. This paper represents a first empirical investigation on PWAs. Our results show that the PWA and service workers technology is promising in terms of energy efficiency.",
"title": ""
},
{
"docid": "959f2723ba18e71b2f4acd6108350dd3",
"text": "The manufacturing, converting and ennobling processes of paper are truly large area and reel-to-reel processes. Here, we describe a project focusing on using the converting and ennobling processes of paper in order to introduce electronic functions onto the paper surface. As key active electronic materials we are using organic molecules and polymers. We develop sensor, communication and display devices on paper and the main application areas are packaging and paper display applications.",
"title": ""
},
{
"docid": "c6d1ad31d52ed40d2fdba3c5840cbb63",
"text": "Classification is one of the most active research and application areas of neural networks. The literature is vast and growing. This paper summarizes the some of the most important developments in neural network classification research. Specifically, the issues of posterior probability estimation, the link between neural and conventional classifiers, learning and generalization tradeoff in classification, the feature variable selection, as well as the effect of misclassification costs are examined. Our purpose is to provide a synthesis of the published research in this area and stimulate further research interests and efforts in the identified topics.",
"title": ""
},
{
"docid": "917ab22adee174259bef5171fe6f14fb",
"text": "The manner in which quadrupeds change their locomotive patterns—walking, trotting, and galloping—with changing speed is poorly understood. In this paper, we provide evidence for interlimb coordination during gait transitions using a quadruped robot for which coordination between the legs can be self-organized through a simple “central pattern generator” (CPG) model. We demonstrate spontaneous gait transitions between energy-efficient patterns by changing only the parameter related to speed. Interlimb coordination was achieved with the use of local load sensing only without any preprogrammed patterns. Our model exploits physical communication through the body, suggesting that knowledge of physical communication is required to understand the leg coordination mechanism in legged animals and to establish design principles for legged robots that can reproduce flexible and efficient locomotion.",
"title": ""
},
{
"docid": "a427c3c0bcbfa10ce9ec1e7477697abe",
"text": "We present a system for real-time general object recognition (gor) for indoor robot in complex scenes. A point cloud image containing the object to be recognized from a Kinect sensor, for general object at will, must be extracted a point cloud model of the object with the Cluster Extraction method, and then we can compute the global features of the object model, making up the model database after processing many frame images. Here the global feature we used is Clustered Viewpoint Feature Histogram (CVFH) feature from Point Cloud Library (PCL). For real-time gor we must preprocess all the point cloud images streamed from the Kinect into clusters based on a clustering threshold and the min-max cluster sizes related to the size of the model, for reducing the amount of the clusters and improving the processing speed, and also compute the CVFH features of the clusters. For every cluster of a frame image, we search the several nearer features from the model database with the KNN method in the feature space, and we just consider the nearest model. If the strings of the model name contain the strings of the object to be recognized, it can be considered that we have recognized the general object; otherwise, we compute another cluster again and perform the above steps. The experiments showed that we had achieved the real-time recognition, and ensured the speed and accuracy for the gor.",
"title": ""
},
{
"docid": "2d7d20d578573dab8af8aff960010fea",
"text": "Two flavors of the recommendation problem are the explicit and the implicit feedback settings. In the explicit feedback case, users rate items and the user-item preference relationship can be modelled on the basis of the ratings. In the harder but more common implicit feedback case, the system has to infer user preferences from indirect information: presence or absence of events, such as a user viewed an item. One approach for handling implicit feedback is to minimize a ranking objective function instead of the conventional prediction mean squared error. The naive minimization of a ranking objective function is typically expensive. This difficulty is usually overcome by a trade-off: sacrificing the accuracy to some extent for computational efficiency by sampling the objective function. In this paper, we present a computationally effective approach for the direct minimization of a ranking objective function, without sampling. We demonstrate by experiments on the Y!Music and Netflix data sets that the proposed method outperforms other implicit feedback recommenders in many cases in terms of the ErrorRate, ARP and Recall evaluation metrics.",
"title": ""
},
{
"docid": "badfe178923af250baa80c2871aae5bc",
"text": "We study the problem of learning a tensor from a set of linear measurements. A prominent methodology for this problem is based on a generalization of trace norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. In this paper, we highlight some limitations of this approach and propose an alternative convex relaxation on the Euclidean ball. We then describe a technique to solve the associated regularization problem, which builds upon the alternating direction method of multipliers. Experiments on one synthetic dataset and two real datasets indicate that the proposed method improves significantly over tensor trace norm regularization in terms of estimation error, while remaining computationally tractable.",
"title": ""
},
{
"docid": "e56abb473e262fec3c0260202564be0a",
"text": "This paper presents and analyzes an annotated corpus of definitions, created to train an algorithm for the automatic extraction of definitions and hypernyms from Web documents. As an additional resource, we also include a corpus of non-definitions with syntactic patterns similar to those of definition sentences, e.g.: “An android is a robot” vs. “Snowcap is unmistakable”. Domain and style independence is obtained thanks to the annotation of a sample of the Wikipedia corpus and to a novel pattern generalization algorithm based on wordclass lattices (WCL). A lattice is a directed acyclic graph (DAG), a subclass of nondeterministic finite state automata (NFA). The lattice structure has the purpose of preserving the salient differences among distinct sequences, while eliminating redundant information. The WCL algorithm will be integrated into an improved version of the GlossExtractor Web application (Velardi et al., 2008). This paper is mostly concerned with a description of the corpus, the annotation strategy, and a linguistic analysis of the data. A summary of the WCL algorithm is also provided for the sake of completeness.",
"title": ""
},
{
"docid": "23919d976b6a25dc032fa23350195713",
"text": "I interactive multimedia technologies enable online firms to employ a variety of formats to present and promote their products: They can use pictures, videos, and sounds to depict products, as well as give consumers the opportunity to try out products virtually. Despite the several previous endeavors that studied the effects of different product presentation formats, the functional mechanisms underlying these presentation methods have not been investigated in a comprehensive way. This paper investigates a model showing how these functional mechanisms (namely, vividness and interactivity) influence consumers’ intentions to return to a website and their intentions to purchase products. A study conducted to test this model has largely confirmed our expectations: (1) both vividness and interactivity of product presentations are the primary design features that influence the efficacy of the presentations; (2) consumers’ perceptions of the diagnosticity of websites, their perceptions of the compatibility between online shopping and physical shopping, and their shopping enjoyment derived from a particular online shopping experience jointly influence consumers’ attitudes toward shopping at a website; and (3) both consumers’ attitudes toward products and their attitudes toward shopping at a website contribute to their intentions to purchase the products displayed on the website.",
"title": ""
},
{
"docid": "c3b652b561e38a51f1fa40483532e22d",
"text": "Vertical integration refers to one of the options that firms make decisions in the supply of oligopoly market. It was impacted by competition game between upstream firms and downstream firms. Based on the game theory and other previous studies,this paper built a dynamic game model of two-stage competition between the oligopoly suppliers of upstream and the vertical integration firms of downstream manufacturers. In the first stage, it analyzed the influences on integration degree by prices of intermediate goods when an oligopoly firm engages in a Bertrand-game if outputs are not limited. Moreover, it analyzed the influences on integration degree by price-diverge of intermediate goods if outputs were not restricted within a Bertrand Duopoly game equilibrium. In the second stage, there is a Cournot duopoly game between downstream specialization firms and downstream integration firms. Their marginal costs are affected by the integration degree and their yields are affected either under indifferent manufacture conditions. Finally, prices of intermediate goods are determined by the competition of upstream firms, the prices of intermediate goods affect the changes of integration degree between upstream firms and downstream firms. The conclusions can be referenced to decision-making of integration in market competition.",
"title": ""
}
] | scidocsrr |
32bf91d28b824afac3874285773666d9 | From archaeon to eukaryote: the evolutionary dark ages of the eukaryotic cell. | [
{
"docid": "023fa0ac94b2ea1740f1bbeb8de64734",
"text": "The establishment of an endosymbiotic relationship typically seems to be driven through complementation of the host's limited metabolic capabilities by the biochemical versatility of the endosymbiont. The most significant examples of endosymbiosis are represented by the endosymbiotic acquisition of plastids and mitochondria, introducing photosynthesis and respiration to eukaryotes. However, there are numerous other endosymbioses that evolved more recently and repeatedly across the tree of life. Recent advances in genome sequencing technology have led to a better understanding of the physiological basis of many endosymbiotic associations. This review focuses on endosymbionts in protists (unicellular eukaryotes). Selected examples illustrate the incorporation of various new biochemical functions, such as photosynthesis, nitrogen fixation and recycling, and methanogenesis, into protist hosts by prokaryotic endosymbionts. Furthermore, photosynthetic eukaryotic endosymbionts display a great diversity of modes of integration into different protist hosts. In conclusion, endosymbiosis seems to represent a general evolutionary strategy of protists to acquire novel biochemical functions and is thus an important source of genetic innovation.",
"title": ""
}
] | [
{
"docid": "179675ecf9ef119fcb0bc512995e2920",
"text": "There is little evidence available on the use of robot-assisted therapy in subacute stroke patients. A randomized controlled trial was carried out to evaluate the short-time efficacy of intensive robot-assisted therapy compared to usual physical therapy performed in the early phase after stroke onset. Fifty-three subacute stroke patients at their first-ever stroke were enrolled 30 ± 7 days after the acute event and randomized into two groups, both exposed to standard therapy. Additional 30 sessions of robot-assisted therapy were provided to the Experimental Group. Additional 30 sessions of usual therapy were provided to the Control Group. The following impairment evaluations were performed at the beginning (T0), after 15 sessions (T1), and at the end of the treatment (T2): Fugl-Meyer Assessment Scale (FM), Modified Ashworth Scale-Shoulder (MAS-S), Modified Ashworth Scale-Elbow (MAS-E), Total Passive Range of Motion-Shoulder/Elbow (pROM), and Motricity Index (MI). Evidence of significant improvements in MAS-S (p = 0.004), MAS-E (p = 0.018) and pROM (p < 0.0001) was found in the Experimental Group. Significant improvement was demonstrated in both Experimental and Control Group in FM (EG: p < 0.0001, CG: p < 0.0001) and MI (EG: p < 0.0001, CG: p < 0.0001), with an higher improvement in the Experimental Group. Robot-assisted upper limb rehabilitation treatment can contribute to increasing motor recovery in subacute stroke patients. Focusing on the early phase of stroke recovery has a high potential impact in clinical practice.",
"title": ""
},
{
"docid": "f7d535f9a5eeae77defe41318d642403",
"text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.",
"title": ""
},
{
"docid": "97582a93ef3977fab8b242a1ce102459",
"text": "We propose a distributed, multi-camera video analysis paradigm for aiport security surveillance. We propose to use a new class of biometry signatures, which are called soft biometry including a person's height, built, skin tone, color of shirts and trousers, motion pattern, trajectory history, etc., to ID and track errant passengers and suspicious events without having to shut down a whole terminal building and cancel multiple flights. The proposed research is to enable the reliable acquisition, maintenance, and correspondence of soft biometry signatures in a coordinated manner from a large number of video streams for security surveillance. The intellectual merit of the proposed research is to address three important video analysis problems in a distributed, multi-camera surveillance network: sensor network calibration, peer-to-peer sensor data fusion, and stationary-dynamic cooperative camera sensing.",
"title": ""
},
{
"docid": "a10b7c4b088c8df706381cfc3f1faec1",
"text": "OBJECTIVE\nTo develop a clinical practice guideline for red blood cell transfusion in adult trauma and critical care.\n\n\nDESIGN\nMeetings, teleconferences and electronic-based communication to achieve grading of the published evidence, discussion and consensus among the entire committee members.\n\n\nMETHODS\nThis practice management guideline was developed by a joint taskforce of EAST (Eastern Association for Surgery of Trauma) and the American College of Critical Care Medicine (ACCM) of the Society of Critical Care Medicine (SCCM). We performed a comprehensive literature review of the topic and graded the evidence using scientific assessment methods employed by the Canadian and U.S. Preventive Task Force (Grading of Evidence, Class I, II, III; Grading of Recommendations, Level I, II, III). A list of guideline recommendations was compiled by the members of the guidelines committees for the two societies. Following an extensive review process by external reviewers, the final guideline manuscript was reviewed and approved by the EAST Board of Directors, the Board of Regents of the ACCM and the Council of SCCM.\n\n\nRESULTS\nKey recommendations are listed by category, including (A) Indications for RBC transfusion in the general critically ill patient; (B) RBC transfusion in sepsis; (C) RBC transfusion in patients at risk for or with acute lung injury and acute respiratory distress syndrome; (D) RBC transfusion in patients with neurologic injury and diseases; (E) RBC transfusion risks; (F) Alternatives to RBC transfusion; and (G) Strategies to reduce RBC transfusion.\n\n\nCONCLUSIONS\nEvidence-based recommendations regarding the use of RBC transfusion in adult trauma and critical care will provide important information to critical care practitioners.",
"title": ""
},
{
"docid": "950fc4239ced87fef76ac687af3b09ac",
"text": "Software developers’ activities are in general recorded in software repositories such as version control systems, bug trackers and mail archives. While abundant information is usually present in such repositories, successful information extraction is often challenged by the necessity to simultaneously analyze different repositories and to combine the information obtained. We propose to apply process mining techniques, originally developed for business process analysis, to address this challenge. However, in order for process mining to become applicable, different software repositories should be combined, and “related” software development events should be matched: e.g., mails sent about a file, modifications of the file and bug reports that can be traced back to it. The combination and matching of events has been implemented in FRASR (Framework for Analyzing Software Repositories), augmenting the process mining framework ProM. FRASR has been successfully applied in a series of case studies addressing such aspects of the development process as roles of different developers and the way bug reports are handled.",
"title": ""
},
{
"docid": "ea31a93d54e45eede5ba3e6263e8a13e",
"text": "Clustering methods for data-mining problems must be extremely scalable. In addition, several data mining applications demand that the clusters obtained be balanced, i.e., of approximately the same size or importance. In this paper, we propose a general framework for scalable, balanced clustering. The data clustering process is broken down into three steps: sampling of a small representative subset of the points, clustering of the sampled data, and populating the initial clusters with the remaining data followed by refinements. First, we show that a simple uniform sampling from the original data is sufficient to get a representative subset with high probability. While the proposed framework allows a large class of algorithms to be used for clustering the sampled set, we focus on some popular parametric algorithms for ease of exposition. We then present algorithms to populate and refine the clusters. The algorithm for populating the clusters is based on a generalization of the stable marriage problem, whereas the refinement algorithm is a constrained iterative relocation scheme. The complexity of the overall method is O(kN log N) for obtaining k balanced clusters from N data points, which compares favorably with other existing techniques for balanced clustering. In addition to providing balancing guarantees, the clustering performance obtained using the proposed framework is comparable to and often better than the corresponding unconstrained solution. Experimental results on several datasets, including high-dimensional (>20,000) ones, are provided to demonstrate the efficacy of the proposed framework.",
"title": ""
},
{
"docid": "e37b3a68c850d1fb54c9030c22b5792f",
"text": "We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or nonmembrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. For pixel error, our approach is the only one outperforming a second human observer.",
"title": ""
},
{
"docid": "9ca63cbf9fb0294aff706562d629e9d1",
"text": "This demo showcases Scythe, a novel query-by-example system that can synthesize expressive SQL queries from inputoutput examples. Scythe is designed to help end-users program SQL and explore data simply using input-output examples. From a web-browser, users can obtain SQL queries with Scythe in an automated, interactive fashion: from a provided example, Scythe synthesizes SQL queries and resolves ambiguities via conversations with the users. In this demo, we first show Scythe how end users can formulate queries using Scythe; we then switch to the perspective of an algorithm designer to show how Scythe can scale up to handle complex SQL features, like outer joins and subqueries.",
"title": ""
},
{
"docid": "e34d244a395a753b0cb97f8535b56add",
"text": "We propose Quadruplet Convolutional Neural Networks (Quad-CNN) for multi-object tracking, which learn to associate object detections across frames using quadruplet losses. The proposed networks consider target appearances together with their temporal adjacencies for data association. Unlike conventional ranking losses, the quadruplet loss enforces an additional constraint that makes temporally adjacent detections more closely located than the ones with large temporal gaps. We also employ a multi-task loss to jointly learn object association and bounding box regression for better localization. The whole network is trained end-to-end. For tracking, the target association is performed by minimax label propagation using the metric learned from the proposed network. We evaluate performance of our multi-object tracking algorithm on public MOT Challenge datasets, and achieve outstanding results.",
"title": ""
},
{
"docid": "c16428f049cebdc383c4ee24f75da6b0",
"text": "Classification and regression trees are machine-learning methods for constructing prediction models from data. The models are obtained by recursively partitioning the data space and fitting a simple prediction model within each partition. As a result, the partitioning can be represented graphically as a decision tree. Classification trees are designed for dependent variables that take a finite number of unordered values, with prediction error measured in terms of misclassification cost. Regression trees are for dependent variables that take continuous or ordered discrete values, with prediction error typically measured by the squared difference between the observed and predicted values. This article gives an introduction to the subject by reviewing some widely available algorithms and comparing their capabilities, strengths, and weakness in two examples. C © 2011 John Wiley & Sons, Inc. WIREs Data Mining Knowl Discov 2011 1 14–23 DOI: 10.1002/widm.8",
"title": ""
},
{
"docid": "3364f6fab787e3dbcc4cb611960748b8",
"text": "Filamentous fungi can each produce dozens of secondary metabolites which are attractive as therapeutics, drugs, antimicrobials, flavour compounds and other high-value chemicals. Furthermore, they can be used as an expression system for eukaryotic proteins. Application of most fungal secondary metabolites is, however, so far hampered by the lack of suitable fermentation protocols for the producing strain and/or by low product titers. To overcome these limitations, we report here the engineering of the industrial fungus Aspergillus niger to produce high titers (up to 4,500 mg • l−1) of secondary metabolites belonging to the class of nonribosomal peptides. For a proof-of-concept study, we heterologously expressed the 351 kDa nonribosomal peptide synthetase ESYN from Fusarium oxysporum in A. niger. ESYN catalyzes the formation of cyclic depsipeptides of the enniatin family, which exhibit antimicrobial, antiviral and anticancer activities. The encoding gene esyn1 was put under control of a tunable bacterial-fungal hybrid promoter (Tet-on) which was switched on during early-exponential growth phase of A. niger cultures. The enniatins were isolated and purified by means of reverse phase chromatography and their identity and purity proven by tandem MS, NMR spectroscopy and X-ray crystallography. The initial yields of 1 mg • l−1 of enniatin were increased about 950 fold by optimizing feeding conditions and the morphology of A. niger in liquid shake flask cultures. Further yield optimization (about 4.5 fold) was accomplished by cultivating A. niger in 5 l fed batch fermentations. Finally, an autonomous A. niger expression host was established, which was independent from feeding with the enniatin precursor d-2-hydroxyvaleric acid d-Hiv. This was achieved by constitutively expressing a fungal d-Hiv dehydrogenase in the esyn1-expressing A. niger strain, which used the intracellular α-ketovaleric acid pool to generate d-Hiv. This is the first report demonstrating that A. niger is a potent and promising expression host for nonribosomal peptides with titers high enough to become industrially attractive. Application of the Tet-on system in A. niger allows precise control on the timing of product formation, thereby ensuring high yields and purity of the peptides produced.",
"title": ""
},
{
"docid": "f562bd72463945bd35d42894e4815543",
"text": "Sound levels in animal shelters regularly exceed 100 dB. Noise is a physical stressor on animals that can lead to behavioral, physiological, and anatomical responses. There are currently no policies regulating noise levels in dog kennels. The objective of this study was to evaluate the noise levels dogs are exposed to in an animal shelter on a continuous basis and to determine the need, if any, for noise regulations. Noise levels at a newly constructed animal shelter were measured using a noise dosimeter in all indoor dog-holding areas. These holding areas included large dog adoptable, large dog stray, small dog adoptable, small dog stray, and front intake. The noise level was highest in the large adoptable area. Sound from the large adoptable area affected some of the noise measurements for the other rooms. Peak noise levels regularly exceeded the measuring capability of the dosimeter (118.9 dBA). Often, in new facility design, there is little attention paid to noise abatement, despite the evidence that noise causes physical and psychological stress on dogs. To meet their behavioral and physical needs, kennel design should also address optimal sound range.",
"title": ""
},
{
"docid": "27caf5f3a638e5084ca361424e69e9d0",
"text": "Digital watermarking of multimedia content has become a very active research area over the last several years. A general framework for watermark embedding and detection/decoding is presented here along with a review of some of the algorithms for different media types described in the literature. We highlight some of the differences based on application such as copyright protection, authentication, tamper detection, and data hiding as well as differences in technology and system requirements for different media types such as digital images, video, audio and text.",
"title": ""
},
{
"docid": "869a2cfbb021104e7f3bc7cb214b82f9",
"text": "The commoditization of high-performance networking has sparked research interest in the RDMA capability of this hardware. One-sided RDMA primitives, in particular, have generated substantial excitement due to the ability to directly access remote memory from within an application without involving the TCP/IP stack or the remote CPU. This paper considers how to leverage RDMA to improve the analytical performance of parallel database systems. To shuffle data efficiently using RDMA, one needs to consider a complex design space that includes (1) the number of open connections, (2) the contention for the shared network interface, (3) the RDMA transport function, and (4) how much memory should be reserved to exchange data between nodes during query processing. We contribute six designs that capture salient trade-offs in this design space. We comprehensively evaluate how transport-layer decisions impact the query performance of a database system for different generations of InfiniBand. We find that a shuffling operator that uses the RDMA Send/Receive transport function over the Unreliable Datagram transport service can transmit data up to 4× faster than an RDMA-capable MPI implementation in a 16-node cluster. The response time of TPC-H queries improves by as much as 2×.",
"title": ""
},
{
"docid": "644ebe324c23a23bc081119f13190810",
"text": "Most computer systems currently consist of DRAM as main memory and hard disk drives (HDDs) as storage devices. Due to the volatile nature of DRAM, the main memory may suffer from data loss in the event of power failures or system crashes. With rapid development of new types of non-volatile memory (NVRAM), such as PCM, Memristor, and STT-RAM, it becomes likely that one of these technologies will replace DRAM as main memory in the not-too-distant future. In an NVRAM based buffer cache, any updated pages can be kept longer without the urgency to be flushed to HDDs. This opens opportunities for designing new buffer cache policies that can achieve better storage performance. However, it is challenging to design a policy that can also increase the cache hit ratio. In this paper, we propose a buffer cache policy, named I/O-Cache, that regroups and synchronizes long sets of consecutive dirty pages to take advantage of HDDs' fast sequential access speed and the non-volatile property of NVRAM. In addition, our new policy can dynamically separate the whole cache into a dirty cache and a clean cache, according to the characteristics of the workload, to decrease storage writes. We evaluate our scheme with various traces. The experimental results show that I/O-Cache shortens I/O completion time, decreases the number of I/O requests, and improves the cache hit ratio compared with existing cache policies.",
"title": ""
},
{
"docid": "9da15e2851124d6ca1524ba28572f922",
"text": "With the growth of mobile data application and the ultimate expectations of 5G technology, the need to expand the capacity of the wireless networks is inevitable. Massive MIMO technique is currently taking a major part of the ongoing research, and expected to be the key player in the new cellular technologies. This papers presents an overview of the major aspects related to massive MIMO design including, antenna array general design, configuration, and challenges, in addition to advanced beamforming techniques and channel modeling and estimation issues affecting the implementation of such systems.",
"title": ""
},
{
"docid": "e1a4e8b8c892f1e26b698cd9fd37c3db",
"text": "Social networks such as Facebook, MySpace, and Twitter have become increasingly important for reaching millions of users. Consequently, spammers are increasing using such networks for propagating spam. Existing filtering techniques such as collaborative filters and behavioral analysis filters are able to significantly reduce spam, each social network needs to build its own independent spam filter and support a spam team to keep spam prevention techniques current. We propose a framework for spam detection which can be used across all social network sites. There are numerous benefits of the framework including: 1) new spam detected on one social network, can quickly be identified across social networks; 2) accuracy of spam detection will improve with a large amount of data from across social networks; 3) other techniques (such as blacklists and message shingling) can be integrated and centralized; 4) new social networks can plug into the system easily, preventing spam at an early stage. We provide an experimental study of real datasets from social networks to demonstrate the flexibility and feasibility of our framework.",
"title": ""
},
{
"docid": "354cbda757045bcee7044159bd353ca5",
"text": "In this paper we present the preliminary work of a Basque poetry generation system. Basically, we have extracted the POS-tag sequences from some verse corpora and calculated the probability of each sequence. For the generation process we have defined 3 different experiments: Based on a strophe from the corpora, we (a) replace each word with other according to its POS-tag and suffixes, (b) replace each noun and adjective with another equally inflected word and (c) replace only nouns with semantically related ones (inflected). Finally we evaluate those strategies using a Turing Test-like evaluation.",
"title": ""
},
{
"docid": "c479983e954695014417976275030746",
"text": "Semi-Non-negative Matrix Factorization is a technique that learns a low-dimensional representation of a dataset that lends itself to a clustering interpretation. It is possible that the mapping between this new representation and our original data matrix contains rather complex hierarchical information with implicit lower-level hidden attributes, that classical one level clustering methodologies cannot interpret. In this work we propose a novel model, Deep Semi-NMF, that is able to learn such hidden representations that allow themselves to an interpretation of clustering according to different, unknown attributes of a given dataset. We also present a semi-supervised version of the algorithm, named Deep WSF, that allows the use of (partial) prior information for each of the known attributes of a dataset, that allows the model to be used on datasets with mixed attribute knowledge. Finally, we show that our models are able to learn low-dimensional representations that are better suited for clustering, but also classification, outperforming Semi-Non-negative Matrix Factorization, but also other state-of-the-art methodologies variants.",
"title": ""
},
{
"docid": "81b5379abf3849e1ae4e233fd4955062",
"text": "Three-phase dc/dc converters have the superior characteristics including lower current rating of switches, the reduced output filter requirement, and effective utilization of transformers. To further reduce the voltage stress on switches, three-phase three-level (TPTL) dc/dc converters have been investigated recently; however, numerous active power switches result in a complicated configuration in the available topologies. Therefore, a novel TPTL dc/dc converter adopting a symmetrical duty cycle control is proposed in this paper. Compared with the available TPTL converters, the proposed converter has fewer switches and simpler configuration. The voltage stress on all switches can be reduced to the half of the input voltage. Meanwhile, the ripple frequency of output current can be increased significantly, resulting in a reduced filter requirement. Experimental results from a 540-660-V input and 48-V/20-A output are presented to verify the theoretical analysis and the performance of the proposed converter.",
"title": ""
}
] | scidocsrr |
aefe8698bdcbbc5e3be3fda46c3d563b | Compact Offset Microstrip-Fed MIMO Antenna for Band-Notched UWB Applications | [
{
"docid": "ba13195d39b28d5205b33452bfebd6e7",
"text": "A compact multiple-input-multiple-output (MIMO) antenna is presented for ultrawideband (UWB) applications. The antenna consists of two open L-shaped slot (LS) antenna elements and a narrow slot on the ground plane. The antenna elements are placed perpendicularly to each other to obtain high isolation, and the narrow slot is added to reduce the mutual coupling of antenna elements in the low frequency band (3-4.5 GHz). The proposed MIMO antenna has a compact size of 32 ×32 mm2, and the antenna prototype is fabricated and measured. The measured results show that the proposed antenna design achieves an impedance bandwidth of larger than 3.1-10.6 GHz, low mutual coupling of less than 15 dB, and a low envelope correlation coefficient of better than 0.02 across the frequency band, which are suitable for portable UWB applications.",
"title": ""
},
{
"docid": "b3c9bc55f5a9d64a369ec67e1364c4fc",
"text": "This paper introduces a coupling element to enhance the isolation between two closely packed antennas operating at the same frequency band. The proposed structure consists of two antenna elements and a coupling element which is located in between the two antenna elements. The idea is to use field cancellation to enhance isolation by putting a coupling element which artificially creates an additional coupling path between the antenna elements. To validate the idea, a design for a USB dongle MIMO antenna for the 2.4 GHz WLAN band is presented. In this design, the antenna elements are etched on a compact low-cost FR4 PCB board with dimensions of 20times40times1.6 mm3. According to our measurement results, we can achieve more than 30 dB isolation between the antenna elements even though the two parallel individual planar inverted F antenna (PIFA) in the design share a solid ground plane with inter-antenna spacing (Center to Center) of less than 0.095 lambdao or edge to edge separations of just 3.6 mm (0.0294 lambdao). Both simulation and measurement results are used to confirm the antenna isolation and performance. The method can also be applied to different types of antennas such as non-planar antennas. Parametric studies and current distribution for the design are also included to show how to tune the structure and control the isolation.",
"title": ""
}
] | [
{
"docid": "4163070f45dd4d252a21506b1abcfff4",
"text": "Nowadays, security solutions are mainly focused on providing security defences, instead of solving one of the main reasons for security problems that refers to an appropriate Information Systems (IS) design. In fact, requirements engineering often neglects enough attention to security concerns. In this paper it will be presented a case study of our proposal, called SREP (Security Requirements Engineering Process), which is a standard-centred process and a reuse-based approach which deals with the security requirements at the earlier stages of software development in a systematic and intuitive way by providing a security resources repository and by integrating the Common Criteria into the software development lifecycle. In brief, a case study is shown in this paper demonstrating how the security requirements for a security critical IS can be obtained in a guided and systematic way by applying SREP.",
"title": ""
},
{
"docid": "bf2c7b1d93b6dee024336506fb5a2b32",
"text": "In this paper we present the first public, online demonstration of MaxTract; a tool that converts PDF files containing mathematics into multiple formats including LTEX, HTML with embedded MathML, and plain text. Using a bespoke PDF parser and image analyser, we directly extract character and font information to use as input for a linear grammar which, in conjunction with specialised drivers, can accurately recognise and reproduce both the two dimensional relationships between symbols in mathematical formulae and the one dimensional relationships present in standard text. The main goals of MaxTract are to provide translation services into standard mathematical markup languages and to add accessibility to mathematical documents on multiple levels. This includes both accessibility in the narrow sense of providing access to content for print impaired users, such as those with visual impairments, dyslexia or dyspraxia, as well as more generally to enable any user access to the mathematical content at more re-usable levels than merely visual. MaxTract produces output compatible with web browsers, screen readers, and tools such as copy and paste, which is achieved by enriching the regular text with mathematical markup. The output can also be used directly, within the limits of the presentation MathML produced, as machine readable mathematical input to software systems such as Mathematica or Maple.",
"title": ""
},
{
"docid": "5b1fabc6a25409b25b37ea34a1e57cf8",
"text": "Global contrast considers the color difference between a target region or pixel and the rest of the image. It is frequently used to measure the saliency of the region or pixel. In previous global contrast-based methods, saliency is usually measured by the sum of contrast from the entire image. We find that the spatial distribution of contrast is one important cue of saliency that is neglected by previous works. Foreground pixel usually has high contrast from all directions, since it is surrounded by the background. Background pixel often shows low contrast in at least one direction, as it has to connect to the background. Motivated by this intuition, we first compute directional contrast from different directions for each pixel, and propose minimum directional contrast (MDC) as raw saliency metric. Then an O(1) computation of MDC using integral image is proposed. It takes only 1.5 ms for an input image of the QVGA resolution. In saliency post-processing, we use marker-based watershed algorithm to estimate each pixel as foreground or background, followed by one linear function to highlight or suppress its saliency. Performance evaluation is carried on four public data sets. The proposed method significantly outperforms other global contrast-based methods, and achieves comparable or better performance than the state-of-the-art methods. The proposed method runs at 300 FPS and shows six times improvement in runtime over the state-of-the-art methods.",
"title": ""
},
{
"docid": "f6446f5853ea6cb1ad3705c23b96edae",
"text": "Cloud-based radio access networks (C-RAN) have been proposed as a cost-efficient way of deploying small cells. Unlike conventional RANs, a C-RAN decouples the baseband processing unit (BBU) from the remote radio head (RRH), allowing for centralized operation of BBUs and scalable deployment of light-weight RRHs as small cells. In this work, we argue that the intelligent configuration of the front-haul network between the BBUs and RRHs, is essential in delivering the performance and energy benefits to the RAN and the BBU pool, respectively. We then propose FluidNet - a scalable, light-weight framework for realizing the full potential of C-RAN. FluidNet deploys a logically re-configurable front-haul to apply appropriate transmission strategies in different parts of the network and hence cater effectively to both heterogeneous user profiles and dynamic traffic load patterns. FluidNet's algorithms determine configurations that maximize the traffic demand satisfied on the RAN, while simultaneously optimizing the compute resource usage in the BBU pool. We prototype FluidNet on a 6 BBU, 6 RRH WiMAX C-RAN testbed. Prototype evaluations and large-scale simulations reveal that FluidNet's ability to re-configure its front-haul and tailor transmission strategies provides a 50% improvement in satisfying traffic demands, while reducing the compute resource usage in the BBU pool by 50% compared to baseline transmission schemes.",
"title": ""
},
{
"docid": "3725922023dbb52c1bde309dbe4d76ca",
"text": "BACKGROUND\nRecent studies demonstrate that low-level laser therapy (LLLT) modulates many biochemical processes, especially the decrease of muscle injures, the increase in mitochondrial respiration and ATP synthesis for accelerating the healing process.\n\n\nOBJECTIVE\nIn this work, we evaluated mitochondrial respiratory chain complexes I, II, III and IV and succinate dehydrogenase activities after traumatic muscular injury.\n\n\nMETHODS\nMale Wistar rats were randomly divided into three groups (n=6): sham (uninjured muscle), muscle injury without treatment, muscle injury with LLLT (AsGa) 5J/cm(2). Gastrocnemius injury was induced by a single blunt-impact trauma. LLLT was used 2, 12, 24, 48, 72, 96, and 120 hours after muscle-trauma.\n\n\nRESULTS\nOur results showed that the activities of complex II and succinate dehydrogenase after 5days of muscular lesion were significantly increased when compared to the control group. Moreover, our results showed that LLLT significantly increased the activities of complexes I, II, III, IV and succinate dehydrogenase, when compared to the group of injured muscle without treatment.\n\n\nCONCLUSION\nThese results suggest that the treatment with low-level laser may induce an increase in ATP synthesis, and that this may accelerate the muscle healing process.",
"title": ""
},
{
"docid": "42c7c881935df8b22068dabdd48a05e8",
"text": "Dropout training, originally designed for deep neural networks, has been successful on high-dimensional single-layer natural language tasks. This paper proposes a theoretical explanation for this phenomenon: we show that, under a generative Poisson topic model with long documents, dropout training improves the exponent in the generalization bound for empirical risk minimization. Dropout achieves this gain much like a marathon runner who practices at altitude: once a classifier learns to perform reasonably well on training examples that have been artificially corrupted by dropout, it will do very well on the uncorrupted test set. We also show that, under similar conditions, dropout preserves the Bayes decision boundary and should therefore induce minimal bias in high dimensions.",
"title": ""
},
{
"docid": "7a3b5a4c4968085d219fac481a4d316b",
"text": "Potassium based ceramic materials composed from leucite in which 5 % of Al is exchanged with Fe and 4 % of hematite was synthesized by mechanochemical homogenization and annealing of K2O-SiO2-Al2O3-Fe2O3 mixtures. Synthesized material was characterized by X-ray Powder Diffraction (XRPD) and Scanning Electron Microscopy coupled with Energy Dispersive X-ray spectroscopy (SEM/EDX). The two methods are in good agreement in regard to the specimen chemical composition suggesting that a leucite chemical formula is K0.8Al0.7Fe0.15Si2.25O6. Rietveld structure refinement results reveal that about 20 % of vacancies exist in the position of K atoms.",
"title": ""
},
{
"docid": "de1ec3df1fa76e5a419ac8506cd63286",
"text": "It is hard to estimate optical flow given a realworld video sequence with camera shake and other motion blur. In this paper, we first investigate the blur parameterization for video footage using near linear motion elements. We then combine a commercial 3D pose sensor with an RGB camera, in order to film video footage of interest together with the camera motion. We illustrates that this additional camera motion/trajectory channel can be embedded into a hybrid framework by interleaving an iterative blind deconvolution and warping based optical flow scheme. Our method yields improved accuracy within three other state-of-the-art baselines given our proposed ground truth blurry sequences; and several other realworld sequences filmed by our imaging system.",
"title": ""
},
{
"docid": "a83b417c2be604427eacf33b1db91468",
"text": "We report a male infant with iris coloboma, choanal atresia, postnatal retardation of growth and psychomotor development, genital anomaly, ear anomaly, and anal atresia. In addition, there was cutaneous syndactyly and nail hypoplasia of the second and third fingers on the right and hypoplasia of the left second finger nail. Comparable observations have rarely been reported and possibly represent genetic heterogeneity.",
"title": ""
},
{
"docid": "5512bb4600d4cefa79508d75bc5c6898",
"text": "Spark, a subset of Ada for engineering safety and security-critical systems, is one of the best commercially available frameworks for formal-methodssupported development of critical software. Spark is designed for verification and includes a software contract language for specifying functional properties of procedures. Even though Spark and its static analysis components are beneficial and easy to use, its contract language is almost never used due to the burdens the associated tool support imposes on developers. Symbolic execution (SymExe) techniques have made significant strides in automating reasoning about deep semantic properties of source code. However, most work on SymExe has focused on bugfinding and test case generation as opposed to tasks that are more verificationoriented such as contract checking. In this paper, we present: (a) SymExe techniques for checking software contracts in embedded critical systems, and (b) Bakar Kiasan, a tool that implements these techniques in an integrated development environment for Spark. We describe a methodology for using Bakar Kiasan that provides significant increases in automation, usability, and functionality over existing Spark tools, and we present results from experiments on its application to industrial examples.",
"title": ""
},
{
"docid": "a71efe137054cd9102ed05e7d5c139f4",
"text": "In this paper we argue for the use of Unstructured Supplementary Service Data (USSD) as a platform for universal cell phone applications. We examine over a decade of ICT4D research, analyzing how USSD can extend and complement current uses of IVR and SMS for data collection, messaging, information access, social networking and complex user initiated transactions. Based on these findings we identify situations when a mobile based project should consider using USSD with increasingly common third party gateways over other mediums. This analysis also motivates the design and implementation of an open source library for rapid development of USSD applications. Finally, we explore three USSD use cases, demonstrating how USSD opens up a design space not available with IVR or SMS.",
"title": ""
},
{
"docid": "5c8e509d42148fef01e1c5ac00286aac",
"text": "Graphs can represent biological networks at the molecular, protein, or species level. An important query is to find all matches of a pattern graph to a target graph. Accomplishing this is inherently difficult (NP-complete) and the efficiency of heuristic algorithms for the problem may depend upon the input graphs. The common aim of existing algorithms is to eliminate unsuccessful mappings as early as and as inexpensively as possible. We propose a new subgraph isomorphism algorithm which applies a search strategy to significantly reduce the search space without using any complex pruning rules or domain reduction procedures. We compare our method with the most recent and efficient subgraph isomorphism algorithms (VFlib, LAD, and our C++ implementation of FocusSearch which was originally distributed in Modula2) on synthetic, molecules, and interaction networks data. We show a significant reduction in the running time of our approach compared with these other excellent methods and show that our algorithm scales well as memory demands increase. Subgraph isomorphism algorithms are intensively used by biochemical tools. Our analysis gives a comprehensive comparison of different software approaches to subgraph isomorphism highlighting their weaknesses and strengths. This will help researchers make a rational choice among methods depending on their application. We also distribute an open-source package including our system and our own C++ implementation of FocusSearch together with all the used datasets ( http://ferrolab.dmi.unict.it/ri.html ). In future work, our findings may be extended to approximate subgraph isomorphism algorithms.",
"title": ""
},
{
"docid": "d274ad45c79237b9e63e9dc18881064b",
"text": "Can altmetric data be validly used for the measurement of societal impact? The current study seeks to answer this question with a comprehensive dataset (about 100,000 records) from very disparate sources (F1000, Altmetric, and an in-house database based on Web of Science). In the F1000 peer review system, experts attach particular tags to scientific papers which indicate whether a paper could be of interest for science or rather for other segments of society. The results show that papers with the tag\"good for teaching\"do achieve higher altmetric counts than papers without this tag - if the quality of the papers is controlled. At the same time, a higher citation count is shown especially by papers with a tag that is specifically scientifically oriented (\"new finding\"). The findings indicate that papers tailored for a readership outside the area of research should lead to societal impact. If altmetric data is to be used for the measurement of societal impact, the question arises of its normalization. In bibliometrics, citations are normalized for the papers' subject area and publication year. This study has taken a second analytic step involving a possible normalization of altmetric data. As the results show there are particular scientific topics which are of especial interest for a wide audience. Since these more or less interesting topics are not completely reflected in Thomson Reuters' journal sets, a normalization of altmetric data should not be based on the level of subject categories, but on the level of topics.",
"title": ""
},
{
"docid": "e083b5fdf76bab5cdc8fcafc77db23f7",
"text": "Working under a model of privacy in which data remains private even from the statistician, we study the tradeoff between privacy guarantees and the risk of the resulting statistical estimators. We develop private versions of classical information-theoretic bounds, in particular those due to Le Cam, Fano, and Assouad. These inequalities allow for a precise characterization of statistical rates under local privacy constraints and the development of provably (minimax) optimal estimation procedures. We provide a treatment of several canonical families of problems: mean estimation and median estimation, multinomial probability estimation, and nonparametric density estimation. For all of these families, we provide lower and upper bounds that match up to constant factors, and exhibit new (optimal) privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds. Additionally, we present a variety of experimental results for estimation problems involving sensitive data, including salaries, censored blog posts and articles, and drug abuse; these experiments demonstrate the importance of deriving optimal procedures.",
"title": ""
},
{
"docid": "e5dc07c94c7519f730d03aa6c53ca98e",
"text": "Brown adipose tissue (BAT) is specialized to dissipate chemical energy in the form of heat as a defense against cold and excessive feeding. Interest in the field of BAT biology has exploded in the past few years because of the therapeutic potential of BAT to counteract obesity and obesity-related diseases, including insulin resistance. Much progress has been made, particularly in the areas of BAT physiology in adult humans, developmental lineages of brown adipose cell fate, and hormonal control of BAT thermogenesis. As we enter into a new era of brown fat biology, the next challenge will be to develop strategies for activating BAT thermogenesis in adult humans to increase whole-body energy expenditure. This article reviews the recent major advances in this field and discusses emerging questions.",
"title": ""
},
{
"docid": "2c5eb3fb74c6379dfd38c1594ebe85f4",
"text": "Accurately recognizing speaker emotion and age/gender from speech can provide better user experience for many spoken dialogue systems. In this study, we propose to use deep neural networks (DNNs) to encode each utterance into a fixed-length vector by pooling the activations of the last hidden layer over time. The feature encoding process is designed to be jointly trained with the utterance-level classifier for better classification. A kernel extreme learning machine (ELM) is further trained on the encoded vectors for better utterance-level classification. Experiments on a Mandarin dataset demonstrate the effectiveness of our proposed methods on speech emotion and age/gender recognition tasks.",
"title": ""
},
{
"docid": "764c38722f53229344184248ac94a096",
"text": "Verbal fluency tasks have long been used to assess and estimate group and individual differences in executive functioning in both cognitive and neuropsychological research domains. Despite their ubiquity, however, the specific component processes important for success in these tasks have remained elusive. The current work sought to reveal these various components and their respective roles in determining performance in fluency tasks using latent variable analysis. Two types of verbal fluency (semantic and letter) were compared along with several cognitive constructs of interest (working memory capacity, inhibition, vocabulary size, and processing speed) in order to determine which constructs are necessary for performance in these tasks. The results are discussed within the context of a two-stage cyclical search process in which participants first search for higher order categories and then search for specific items within these categories.",
"title": ""
},
{
"docid": "2a0315f4e95ee3475ec9a359eae98632",
"text": "The measurement of safe driving distance based on stereo vision is proposed. The model of camera imaging is established using traditional camera calibration method firstly. Secondly, the projection matrix is deduced according to camera's internal and external parameter and used to calibrate the camera. The method of camera calibration based on two-dimensional target plane is adopted. Then the distortion parameters are calculated when the nonlinear geometric model of camera imaging is built. Moreover, the camera's internal and external parameters are optimized on the basis of the projection error' least squares criterion so that the un-distortion image can be obtained. The matching is done between the left image and the right image corresponding to angular point. The parallax error and the distance between the target vehicle and the camera can be calculated. The experimental results show that the measurement scheme is an effective one in a security vehicles spacing survey. The proposed system is convenient for driver to control in time and precisely. It is able to increase the security in intelligent transportation vehicles.",
"title": ""
},
{
"docid": "0594068f88a89de0dbc9d4b82e15d31f",
"text": "We describe mechanical metamaterials created by folding flat sheets in the tradition of origami, the art of paper folding, and study them in terms of their basic geometric and stiffness properties, as well as load bearing capability. A periodic Miura-ori pattern and a non-periodic Ron Resch pattern were studied. Unexceptional coexistence of positive and negative Poisson's ratio was reported for Miura-ori pattern, which are consistent with the interesting shear behavior and infinity bulk modulus of the same pattern. Unusually strong load bearing capability of the Ron Resch pattern was found and attributed to the unique way of folding. This work paves the way to the study of intriguing properties of origami structures as mechanical metamaterials.",
"title": ""
}
] | scidocsrr |