title | abstract
---|---
THE DEMOGRAPHY OF THE LOW COUNTRIES 1500-1990: FACTS AND FIGURES | THE PRESENT SPECIAL ISSUE OF THE CJNS contains essays on economy, history, philosophy, music, painting and sculpture covering almost four centuries. In examining, as editor, these varied contents, the following questions began to suggest themselves: What is actually known of the size and population taking part in the economic and cultural activities described? How was this population divided over the different provinces and regions of the country? And what was the proportion between rural and urban areas? The sources for the following data are derived not only from historical surveys and atlases, but also from travel guides, calendars, journals and newspapers. They are of varied quality, but together they do give a fairly coherent picture. For the fifteenth and sixteenth centuries not many population sources were available. The Southern Netherlands, moreover, could unfortunately not be considered at all. In the past, Flanders and Brabant always were more densely populated than Holland. With regard to the East Indies (present-day Indonesia), most of the available sources cover the nineteenth and twentieth centuries. Because of the importance of the East, their inclusion seemed essential. At present, the Netherlands covers an area of 41,864 square kilometers, with a population of 15.1 million. Before ca. 800 A.D., the Dutch delta was actually larger than it is now. As the result of a series of floods during the Middle Ages, the landmass diminished considerably. The most noteworthy of these inundations was the St. Elisabeth Flood of November 18-19, 1421. In Holland and Zeeland many villages disappeared and the Biesbosch came into existence. As to the Dutch population, until the latter half of the last century it was only a fraction of its present size. Much emphasis has been placed on the study of the Golden Age, ca. 1600-1675. This era covers less than one fifth of the entire period considered in this issue. The economic and artistic flowering associated with the Golden Age took place in a limited geographical area and benefitted a relatively small population (cf. the article by Peter Ford). Around 1500, twenty-nine percent of the Dutch lived in the county (Graafschap) of Holland. By 1650 this percentage had risen to forty-eight. Here less than half of the inhabitants lived in the countryside - even by European standards an unusual degree of urbanization. Holland was very much a region of small and medium-sized towns and villages, with the exception of Amsterdam. The urban centers were surrounded by intensively cultivated agricultural lands, providing food and other crops used in a variety of industries. The indispensable supplies of wheat and wood depended largely on the Baltic and Russian trade. Wood was essential for construction and for the huge shipbuilding industries. It has been generally accepted that during the Golden Age most of the activities in manufacturing, shipbuilding, trade and commerce to the Baltic, the Mediterranean, South |
A Probabilistic Discriminative Model for Android Malware Detection with Decompiled Source Code | Mobile devices are an important part of our everyday lives, and the Android platform has become a market leader. In recent years a number of approaches for Android malware detection have been proposed, using permissions, source code analysis, or dynamic analysis. In this paper, we propose to use a probabilistic discriminative model based on regularized logistic regression for Android malware detection. Through extensive experimental evaluation, we demonstrate that it can generate probabilistic outputs with highly accurate classification results. In particular, we propose to use Android API calls as features extracted from decompiled source code, and analyze and explore issues in feature granularity, feature representation, feature selection, and regularization. We show that the probabilistic discriminative model also works well with permissions, and substantially outperforms the state-of-the-art methods for Android malware detection with application permissions. Furthermore, the discriminative learning model achieves the best detection results by combining both decompiled source code and application permissions. To the best of our knowledge, this is the first research that proposes a probabilistic discriminative model for Android malware detection with a thorough study of the desired representation of decompiled source code, and the first work on the Android malware detection task that combines analysis of both decompiled source code and application permissions. |
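Since the core of this abstract is a regularized logistic-regression classifier over binary API-call features, a minimal sketch of that kind of pipeline is shown below. This is not the authors' implementation: the random data, the L1 penalty, and all sizes are placeholder assumptions standing in for features extracted from decompiled source code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_apps, n_api_calls = 1000, 500                       # hypothetical dataset size
X = rng.integers(0, 2, size=(n_apps, n_api_calls))    # 1 = app invokes this API call
y = rng.integers(0, 2, size=n_apps)                   # 1 = malware, 0 = benign (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# L1 regularization doubles as embedded feature selection on sparse API-call features.
clf = LogisticRegression(penalty="l1", C=1.0, solver="liblinear")
clf.fit(X_tr, y_tr)

probs = clf.predict_proba(X_te)[:, 1]                 # probabilistic output per app
print("mean predicted malware probability:", probs.mean())
```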
The Effects of Disability Labels on Special Education and General Education Teachers' Referrals for Gifted Programs. | This study investigated the effect of the disability labels learning disabilities (LD) and emotional and behavioral disorders (EBD) on public school general education and special education teachers’ willingness to refer students to gifted programs. Results indicated that teachers were significantly influenced by the LD and EBD labels when making referrals to gifted programs. Both groups of teachers were much less willing to refer students with disability labels to gifted programs than identically described students with no disability label. Additionally, when compared to general education teachers, special education teachers were less likely to refer a gifted student, with or without disabilities, to a gifted program. Gifted students with disabilities are most likely to be found among students with the most frequently occurring disabilities, such as learning disabilities (Miller & Terry-Godt, 1996). For example, Friedrichs estimated that there are approximately 95,000 students in this subpopulation. Although it is generally accepted that gifted students with learning disabilities (LD) are underrepresented in gifted programs, limited empirical data are available regarding the actual prevalence of this population (Karnes, Shaunessy, & Bisland, 2004). One reason for this may be the problematic nature of defining giftedness and identifying who does and who does not meet the criteria. Defining giftedness, with or without disabilities, is a complicated and often controversial task (Davis & Rimm, 2004). Although the literature abounds with definitions of giftedness (e.g., Clark, 1997; Piirto, 1999; Renzulli, 1978; Tannenbaum, 1997) and theories of intelligence (e.g., Gardner, 1983; Sternberg, 1997), there is no one universally accepted definition of giftedness (Davis & Rimm, 1998, 2004). As a result, giftedness means different things to different people (Tannenbaum & Baldwin, 1983) and can be influenced by one’s cultural perspective (Busse, Dahme, Wagner, & Wieczerkowski, 1986). To help resolve this dilemma, many states look to the federal definition to guide their policy development (Stephens & Karnes, 2000). The federal definition of gifted and talented has undergone numerous changes since the first definition appeared in The Education Amendments of 1969 (U.S. Congress, 1970). State departments of education use their interpretation of these definitions to develop school district policies for identification and eligibility criteria (Davis & Rimm, 2004; Stephens & Karnes, 2000). In a recent analysis of states’ definitions of gifted and talented, Stephens and Karnes found no single generally accepted definition used for identification and eligibility purposes. However, according to these authors, most states use some modified form of the following 1978 federal definition: The term “gifted and talented children” means children and, whenever applicable, youth, who are identified at the preschool, elementary, or secondary level as possessing demonstrated or potential abilities that give evidence of high performance capability in areas such as intellectual, creative, specific academic or leadership ability or in the performing and visual arts and who by reason thereof require services or activities not ordinarily provided by the school. (Purcell, 1978; P.L. 95-561, title IX, sec.
902) A critical issue related to defining giftedness is the purpose for which the definition is used (Renzulli, 1998). Defining giftedness becomes particularly important when the definition influences the selection of students for gifted programs and inhibits the selection of others (Davis & Rimm, 1998, 2004). Renzulli discussed this relationship, stating: A definition of giftedness is a formal and explicit statement that might eventually become part of official policies or guidelines. Whether or not it is the writer’s intent, such statements will undoubtedly be used to direct identification and programming practices, and therefore we must recognize the consequential nature of this purpose and pivotal role. (p. 2) Most school districts still base their identification of gifted students on high general intelligence as measured by group or individual intelligence tests and high achievement test scores (Patton, 1997; Richert, 1997). As a result, access to gifted programs continues to be limited for many students who, despite their gifted abilities, do not perform well on these measures (Patton; Richert). Consequently, many unidentified gifted students, including those with LD, are not receiving the differentiated services they need in order to nurture and further develop their unique abilities (Davis & Rimm, 1998, 2004). Increasing attention has been given to identifying characteristics of gifted students with LD (Beckley, 1998; Nielsen, 2002). This population has been defined as “those who possess an outstanding gift or talent and are capable of high performance, but also have a learning disability that makes some aspect of academic achievement difficult” (Brody & Mills, 1997, p. 282). The students’ disabilities frequently mask their abilities, causing both exceptionalities to appear less extreme, which may result in average (or below average) performance (Baum, Owen, & Dixon, 1991; Silverman, 1989, 2003). According to Brody and Mills, these students usually fit into one of three categories, leaving the dual nature of their exceptionalities unrecognized. The first group includes students who have been identified as gifted but continue to exhibit difficulties with academic tasks. They are frequently considered underachievers and often their poor academic performance is attributed to laziness (Silverman, 2003). The second group contains those who have been identified as having an LD. For this group, the disability is what becomes recognized and addressed. Finally, the third group consists of students who have not been identified for either their disability or their exceptional abilities. This may be the largest group of all (Baum, 1990; Beckley, 1998; Brody & Mills, 1997). Contrary to the recent interest and research in the identification and needs of gifted students with LD (Karnes, Shaunessy, & Bisland, 2004; Reis & Colbert, 2004; Winebrenner, 2003), a paucity of empirical research has addressed the characteristics, identification, and needs of gifted students with emotional and behavioral disorders (Morrison & Omdal, 2000; Reid & McGuire, 1995). Reid and McGuire suggested that students with attention or emotional and behavior disorders (EBD) are routinely overlooked and not considered for referral to gifted programs because their negative behaviors contradict commonly held perceptions of gifted students.
Among the many barriers hindering the identification and referral of students with disabilities for gifted programs are teachers’ stereotypic beliefs (Cline & Hedgeman, 2001; Johnson et al., 1997; Minner, Prater, Bloodsworth, & Walker, 1987; St. Jean, 1996) and inadequate teacher training (Davis & Rimm, 2004; Johnson et al., 1997). According to Cline and Hedgeman, stereotypic expectations work against gifted students with disabilities in two ways: (a) misconceptions about the characteristics of gifted students and (b) low expectations for students identified with disabilities. Researchers have investigated the effects of disability labels on teachers’ perceptions and expectations for students with disabilities for several decades (e.g., Algozzine & Sutherland, 1977; Dunn, 1968; Foster & Ysseldyke, 1976; Taylor, Smiley, & Ziegler, 1983). These studies, among others, document both preservice and inservice teachers’ lowered expectations for students with disabilities in public school classrooms and even college classrooms (Beilke & Yssel, 1999; Minner & Prater, 1984). Given overall lower teacher expectations for students who are labeled as having a disability than for those who are not, the special education teacher’s role becomes particularly important for gifted students with disabilities since many of these students are often first recognized for their disability, not their gifts and talents (Davis & Rimm, 2004). While special education teachers may provide services for students with disabilities in a variety of settings or using a variety of approaches, their role does not preclude noting potential giftedness among their students, and subsequently making referrals for evaluation and placement in gifted programs. With the exception of a few well-cited studies published more than a decade ago (Minner, 1989, 1990; Minner et al., 1987), research on the specific effect of disability labels on teachers’ referrals to gifted programs is nonexistent. Additionally, little is known about the differential effects of disability labels on referrals to gifted programs between special education teachers and general education teachers. However, Minner’s research (Minner, 1989, 1990; Minner et al., 1987) clearly demonstrated that general education teachers and teachers of the gifted are negatively influenced by certain disability labels when making referral decisions for gifted programs. The purpose of this study was twofold. First, given what is known regarding the underrepresentation of students with LD and EBD in gifted programs, the study was designed to investigate the influence of the presence of LD and EBD labels on public school teachers’ (special education and general education) referral recommendations for gifted programs. Second, the differences in referral recommendations between special and general education teachers were examined. Three questions were investigated: (a) Do referral ratings for gifted programs differ among teachers who believe the student has a learning disability, an emotional or behavioral disorder, or no exceptional condition? (b) Do referral ratings for gifted programs differ between general and special education teachers? and (c) Is there an interaction betwe |
A survey of Agent-Oriented Software Engineering | Agent-Oriented Software Engineering is one of the most recent contributions to the field of Software Engineering. It has several benefits compared to existing development approaches, in particular the ability to let agents represent high-level abstractions of active entities in a software system. This paper gives an overview of recent research and industrial applications of both general high-level methodologies and more specific design methodologies for industry-strength software engineering. |
The ketogenic diet for the treatment of childhood epilepsy: a randomised controlled trial | BACKGROUND
The ketogenic diet has been widely and successfully used to treat children with drug-resistant epilepsy since the 1920s. The aim of this study was to test the efficacy of the ketogenic diet in a randomised controlled trial.
METHODS
145 children aged between 2 and 16 years who had at least daily seizures (or more than seven seizures per week), had failed to respond to at least two antiepileptic drugs, and had not been treated previously with the ketogenic diet participated in a randomised controlled trial of its efficacy to control seizures. Enrolment for the trial ran between December, 2001, and July, 2006. Children were seen at one of two hospital centres or a residential centre for young people with epilepsy. Children were randomly assigned to receive a ketogenic diet, either immediately or after a 3-month delay, with no other changes to treatment (control group). Neither the family nor investigators were blinded to the group assignment. Early withdrawals were recorded, and seizure frequency on the diet was assessed after 3 months and compared with that of the controls. The primary endpoint was a reduction in seizures; analysis was intention to treat. Tolerability of the diet was assessed by questionnaire at 3 months. The trial is registered with ClinicalTrials.gov, number NCT00564915.
FINDINGS
73 children were assigned to the ketogenic diet and 72 children to the control group. Data from 103 children were available for analysis: 54 on the ketogenic diet and 49 controls. Of those who did not complete the trial, 16 children did not receive their intervention, 16 did not provide adequate data, and ten withdrew from the treatment before the 3-month review, six because of intolerance. After 3 months, the mean percentage of baseline seizures was significantly lower in the diet group than in the controls (62.0% vs 136.9%, 75% decrease, 95% CI 42.4-107.4%; p<0.0001). 28 children (38%) in the diet group had greater than 50% seizure reduction compared with four (6%) controls (p<0.0001), and five children (7%) in the diet group had greater than 90% seizure reduction compared with no controls (p=0.0582). There was no significant difference in the efficacy of the treatment between symptomatic generalised or symptomatic focal syndromes. The most frequent side-effects reported at 3-month review were constipation, vomiting, lack of energy, and hunger.
INTERPRETATION
The results from this trial of the ketogenic diet support its use in children with treatment-intractable epilepsy.
FUNDING
HSA Charitable Trust; Smiths Charity; Scientific Hospital Supplies; Milk Development Council. |
LNG decision making approaches compared. | Hazard zones associated with LNG handling activities have been a major point of contention in recent terminal development applications. Debate has primarily reflected worst-case scenarios and discussion of these. This paper presents results from a maximum credible event approach. A comparison of results from several models either run by the authors or reported in the literature is presented. While larger scale experimental trials will be necessary to reduce the uncertainty, in the interim a set of base cases covering both existing trials and credible and worst-case events is proposed. This can assist users in assessing the degree of conservatism present in quoted modeling approaches and model selections. |
Towards Finding Optimal Differential Characteristics for ARX: Application to Salsa20 | An increasing number of cryptographic primitives are built using the ARX operations: addition modulo 2^n, bit rotation and XOR. Because of their very fast performance in software, ARX ciphers are becoming increasingly common. However, there is currently no rigorous understanding of the security of ARX ciphers against one of the most common attacks in symmetric-key cryptography: differential cryptanalysis. In this paper, we introduce a tool to search for optimal differential characteristics for ARX ciphers. Our technique is very easy to use, as it only involves writing out simple equations for every addition, rotation and XOR operation in the cipher, and applying an off-the-shelf SAT solver. As is commonly done for ARX ciphers, our analysis assumes that the probability of a characteristic can be computed by multiplying the probabilities of each operation, and that the probability of the best characteristic is a good estimate for the probability of the corresponding differential. Using extensive experiments for Salsa20, we find that these assumptions are not always valid. To overcome these issues, we propose a method to accurately estimate the probability of ARX differentials. |
Tamper resistance mechanisms for secure embedded systems | Security is a concern in the design of a wide range of embedded systems. Extensive research has been devoted to the development of cryptographic algorithms that provide the theoretical underpinnings of information security. Functional security mechanisms, such as security protocols, suitably employ these mathematical primitives in order to achieve the desired security objectives. However, functional security mechanisms alone cannot ensure security, since most embedded systems present attackers with an abundance of opportunities to observe or interfere with their implementation, and hence to compromise their theoretical strength. This paper surveys various tamper or attack techniques, and explains how they can be used to undermine or weaken security functions in embedded systems. Tamper-resistant design refers to the process of designing a system architecture and implementation that is resistant to such attacks. We outline approaches that have been proposed to design tamper-resistant embedded systems, with examples drawn from recent commercial products. |
POMCPOW: An online algorithm for POMDPs with continuous state, action, and observation spaces | Online solvers for partially observable Markov decision processes have been applied to problems with large discrete state spaces, but continuous state, action, and observation spaces remain a challenge. This paper begins by investigating double progressive widening (DPW) as a solution to this challenge. However, we prove that this modification alone is not sufficient because the belief representations in the search tree collapse to a single particle causing the algorithm to converge to a policy that is suboptimal regardless of the computation time. The main contribution of the paper is to propose a new algorithm, POMCPOW, that incorporates DPW and weighted particle filtering to overcome this deficiency and attack continuous problems. Simulation results show that these modifications allow the algorithm to be successful where previous approaches fail. |
[Population trends and population theory in the past and present: German Society for Demography twenty-first conference] | This publication contains 16 papers presented at the twenty-first meeting of the German Society for Demography, held in 1987 in Berlin, Federal Republic of Germany. Four main topics are covered: the practical significance of the historical development of population theory; the beginnings of demography and population statistics in the eighteenth and nineteenth centuries; demography and population statistics at the beginning of the twentieth century; and the tasks and concepts of demography today. The geographic focus is on Germany and other European countries. One of the papers is in English. |
Ultrawide-band properties of long slot arrays | In this paper the ultrawide-band properties of a long slot array are described. The study provides the rigorously derived Green's function (GF) for an infinite long slot array. From this GF, active impedances and radiation patterns for various cases were obtained. The ultrawide bandwidths and the array performances of such apertures are highlighted, providing physical insights to understand the mathematical formulation throughout the paper. |
Estimating the prevalence of limb loss in the United States: 2005 to 2050. | OBJECTIVE
To estimate the current prevalence of limb loss in the United States and project the future prevalence to the year 2050.
DESIGN
Estimates were constructed using age-, sex-, and race-specific incidence rates for amputation combined with age-, sex-, and race-specific assumptions about mortality. Incidence rates were derived from the 1988 to 1999 Nationwide Inpatient Sample of the Healthcare Cost and Utilization Project, corrected for the likelihood of reamputation among those undergoing amputation for vascular disease. Incidence rates were assumed to remain constant over time and applied to historic mortality and population data along with the best available estimates of relative risk, future mortality, and future population projections. To investigate the sensitivity of our projections to increasing or decreasing incidence, we developed alternative sets of estimates of limb loss related to dysvascular conditions based on assumptions of a 10% or 25% increase or decrease in incidence of amputations for these conditions.
SETTING
Community, nonfederal, short-term hospitals in the United States.
PARTICIPANTS
Persons who were discharged from a hospital with a procedure code for upper-limb or lower-limb amputation or diagnosis code of traumatic amputation.
INTERVENTIONS
Not applicable.
MAIN OUTCOME MEASURES
Prevalence of limb loss by age, sex, race, etiology, and level in 2005 and projections to the year 2050.
RESULTS
In the year 2005, 1.6 million persons were living with the loss of a limb. Of these subjects, 42% were nonwhite and 38% had an amputation secondary to dysvascular disease with a comorbid diagnosis of diabetes mellitus. It is projected that the number of people living with the loss of a limb will more than double by the year 2050 to 3.6 million. If incidence rates secondary to dysvascular disease can be reduced by 10%, this number would be lowered by 225,000.
CONCLUSIONS
One in 190 Americans is currently living with the loss of a limb. Unchecked, this number may double by the year 2050. |
Statistical Texture Measures Computed from Gray Level Coocurrence Matrices | The purpose of the present text is to present the theory and techniques behind the Gray Level Coocurrence Matrix (GLCM) method, and the state-of-the-art of the field, as applied to two dimensional images. It does not present a survey of practical results. In statistical texture analysis, texture features are computed from the statistical distribution of observed combinations of intensities at specified positions relative to each other in the image. According to the number of intensity points (pixels) in each combination, statistics are classified into first-order, second-order and higher-order statistics. The Gray Level Coocurrence Matrix (GLCM) method is a way of extracting second order statistical texture features. The approach has been used in a number of applications, e.g. [5],[6],[14],[5],[7],[12],[2],[8],[10],[1]. A GLCM is a matrix where the number of rows and columns is equal to the number of gray levels, G, in the image. The matrix element P(i, j | ∆x, ∆y) is the relative frequency with which two pixels, separated by a pixel distance (∆x, ∆y), occur within a given neighborhood, one with intensity i and the other with intensity j. One may also say that the matrix element P(i, j | d, θ) contains the second order statistical probability values for changes between gray levels i and j at a particular displacement distance d and at a particular angle θ. Given an M × N neighborhood of an input image containing G gray levels from 0 to G − 1, let f(m, n) be the intensity at sample m, line n of the neighborhood. Then P(i, j | ∆x, ∆y) = W · Q(i, j | ∆x, ∆y), where W = 1 / ((M − ∆x)(N − ∆y)) and Q(i, j | ∆x, ∆y) = Σ_{n=1}^{N−∆y} Σ_{m=1}^{M−∆x} A, with A = 1 if f(m, n) = i and f(m + ∆x, n + ∆y) = j, and A = 0 otherwise. |
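The definition above translates almost directly into code. The following sketch (not from the paper) builds P(i, j | Δx, Δy) for a small random image and derives two classic second-order features from it; the image, the number of gray levels, and the chosen offset are all illustrative.

```python
import numpy as np

def glcm(image, dx, dy, levels):
    """Gray level co-occurrence matrix P(i, j | dx, dy) as defined in the text."""
    M, N = image.shape
    Q = np.zeros((levels, levels), dtype=float)
    for m in range(M - dx):
        for n in range(N - dy):
            i, j = image[m, n], image[m + dx, n + dy]
            Q[i, j] += 1.0
    return Q / ((M - dx) * (N - dy))        # W = 1 / ((M - dx)(N - dy))

img = np.random.default_rng(1).integers(0, 8, size=(64, 64))   # G = 8 gray levels
P = glcm(img, dx=1, dy=0, levels=8)

# Two common Haralick-style features computed from the matrix:
i_idx, j_idx = np.indices(P.shape)
contrast = np.sum((i_idx - j_idx) ** 2 * P)
energy = np.sum(P ** 2)
print(contrast, energy)
```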
Evaluating Bayesian Networks via Data Streams | Consider a stream of n-tuples that empirically define the joint distribution of n discrete random variables X1, . . . , Xn. Previous work of Indyk and McGregor [6] and Braverman et al. [1, 2] addresses the problem of determining whether these variables are n-wise independent by measuring the ℓp distance between the joint distribution and the product distribution of the marginals. An open problem in this line of work is to answer more general questions about the dependencies between the variables. One powerful way to express such dependencies is via Bayesian networks where nodes correspond to variables and directed edges encode dependencies. We consider the problem of testing such dependencies in the streaming setting. Our main results are: 1. A tight upper and lower bound of Θ̃(nk^d) on the space required to test whether the data is consistent with a given Bayesian network, where k is the size of the range of each Xi and d is the max in-degree of the network. 2. A tight upper and lower bound of Θ̃(k) on the space required to compute any 2-approximation of the log-likelihood of the network. 3. Finally, we show space/accuracy trade-offs for the problem of independence testing using ℓ1 and ℓ2 distances. |
Experimental study on evaluation of multidimensional information visualization techniques | Information visualization systems often present usability problems mainly because many of these tools are not submitted to complete evaluation studies. This paper presents an experimental study based on tests with users to evaluate two multidimensional information visualization techniques, Parallel Coordinates and Radviz. The tasks used in the experiments were defined based on a taxonomy of users' tasks for interaction with multidimensional visualizations. The study intended to identify usability problems following the ergonomic criteria from Bastien and Scapin on implementations of both techniques, especially built for these experiments with the InfoVis toolkit. |
Floe: A Continuous Dataflow Framework for Dynamic Cloud Applications | Applications in cyber-physical systems are increasingly coupled with online instruments to perform long-running, continuous data processing. Such “always on” dataflow applications are dynamic, where they need to change the application logic and performance at runtime, in response to external operational needs. Floe is a continuous dataflow framework that is designed to be adaptive for dynamic applications on Cloud infrastructure. It offers advanced dataflow patterns like BSP and MapReduce for flexible and holistic composition of streams and files, and supports dynamic recomposition at runtime with minimal impact on the execution. Adaptive resource allocation strategies allow our framework to effectively use elastic Cloud resources to meet varying data rates. We illustrate the design patterns of Floe by running an integration pipeline and a tweet clustering application from the Smart Power Grids domain on a private Eucalyptus Cloud. The responsiveness of our resource adaptation is validated through simulations for periodic, bursty and random workloads. |
Dissecting GPU Memory Hierarchy Through Microbenchmarking | Memory access efficiency is a key factor in fully utilizing the computational power of graphics processing units (GPUs). However, many details of the GPU memory hierarchy are not released by GPU vendors. In this paper, we propose a novel fine-grained microbenchmarking approach and apply it to three generations of NVIDIA GPUs, namely Fermi, Kepler, and Maxwell, to expose the previously unknown characteristics of their memory hierarchies. Specifically, we investigate the structures of different GPU cache systems, such as the data cache, the texture cache and the translation look-aside buffer (TLB). We also investigate the throughput and access latency of GPU global memory and shared memory. Our microbenchmark results offer a better understanding of the mysterious GPU memory hierarchy, which will facilitate the software optimization and modelling of GPU architectures. To the best of our knowledge, this is the first study to reveal the cache properties of Kepler and Maxwell GPUs, and the superiority of Maxwell in shared memory performance under bank conflict. |
Consensus algorithms are input-to-state stable | In many cooperative control problems, a shared knowledge of information provides the basis for cooperation. When this information is different for each agent, a state of noncooperation can result. Consensus algorithms ensure that after some time the agents would agree on the information critical for coordination, called the coordination variable. In this paper we show that if the coordination algorithm is input-to-state stable where the input is considered to be the discrepancy between the coordination variable known to each vehicle, then cooperation is guaranteed when a consensus scheme is used to synchronize information. A coordinated timing example is shown in simulation to illustrate the notions of stability when a coordination algorithm is augmented with a consensus strategy. |
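For readers unfamiliar with consensus schemes, the toy sketch below (not taken from the paper) shows the standard averaging update that drives each agent's copy of a coordination variable toward agreement; the input-to-state stability result in the paper concerns how a coordination algorithm behaves when the remaining disagreement is treated as its input. The graph, initial values, and step size are arbitrary placeholders.

```python
import numpy as np

# Hypothetical undirected communication graph among four agents (adjacency matrix).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

x = np.array([0.0, 2.0, 5.0, 9.0])   # each agent's initial value of the coordination variable
eps = 0.2                            # step size, chosen below 1 / (max degree)

for _ in range(50):
    # x_i <- x_i + eps * sum_j a_ij * (x_j - x_i)
    x = x + eps * (A @ x - A.sum(axis=1) * x)

print(x)  # all entries approach the average of the initial values (agreement)
```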
A Hall-effect sensor based instrument for measuring the door-seal gap of heavy trucks on the production line | A proof-of-concept design for an instrument to measure the door-seal gap width on heavy trucks is presented. The door-seal gap is an important specification because if it is too small, the seal becomes over-compressed when the door is closed and exerts too much pressure on the door latch. The proposed design uses a Hall-effect sensor and a permanent magnet in a unipolar, head-on configuration to measure the gap width. Because the relationship between the gap width and sensor output voltage is non-linear, the instrument fits a curve through two calibration data points, and uses this curve to calculate the measured distance from the sensor output voltage. The measurement uncertainty of the instrument was within ±0.5mm over the majority of the desired measurement range, which is within the accuracy range required for this application. |
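A hedged sketch of the two-point calibration idea described in this abstract: the instrument's actual curve form is not given here, so this example assumes a simple inverse model V = a / (x + b), whose two parameters can be solved in closed form from two calibration points and then inverted to map a sensor voltage back to a gap width. All numeric values are hypothetical.

```python
def calibrate(x1, v1, x2, v2):
    """Solve V = a / (x + b) from two calibration points (gap in mm, voltage in V)."""
    b = (v2 * x2 - v1 * x1) / (v1 - v2)
    a = v1 * (x1 + b)
    return a, b

def gap_from_voltage(v, a, b):
    return a / v - b                        # invert the assumed calibration curve

a, b = calibrate(x1=2.0, v1=3.1, x2=8.0, v2=1.2)    # hypothetical calibration readings
print(round(gap_from_voltage(2.0, a, b), 2))         # estimated gap width in mm at 2.0 V
```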
Discovery of everyday human activities from long-term visual behaviour using topic models | Human visual behaviour has significant potential for activity recognition and computational behaviour analysis, but previous works focused on supervised methods and recognition of predefined activity classes based on short-term eye movement recordings. We propose a fully unsupervised method to discover users' everyday activities from their long-term visual behaviour. Our method combines a bag-of-words representation of visual behaviour that encodes saccades, fixations, and blinks with a latent Dirichlet allocation (LDA) topic model. We further propose different methods to encode saccades for their use in the topic model. We evaluate our method on a novel long-term gaze dataset that contains full-day recordings of natural visual behaviour of 10 participants (more than 80 hours in total). We also provide annotations for eight sample activity classes (outdoor, social interaction, focused work, travel, reading, computer work, watching media, eating) and periods with no specific activity. We show the ability of our method to discover these activities with performance competitive with that of previously published supervised methods. |
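A rough sketch of the modeling step (not the authors' code): once visual behaviour has been encoded into a bag-of-words count matrix, a standard LDA implementation recovers per-window topic mixtures that can be read as candidate activities. The gaze-to-word encoding, matrix sizes, and topic count below are placeholders.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(200, 300))   # 200 time windows x 300 encoded gaze "words"

lda = LatentDirichletAllocation(n_components=9, random_state=0)  # roughly one topic per activity class
theta = lda.fit_transform(counts)            # per-window topic (activity) mixtures
print(theta[0].round(2))                     # dominant topic hints at the ongoing activity
```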
Comorbidities and race/ethnicity among adults with stimulant use disorders in residential treatment. | Comorbid physical and mental health problems are associated with poorer substance abuse treatment outcomes; however, little is known about these conditions among stimulant abusers at treatment entry. This study compared racial and ethnic groups on baseline measures of drug use patterns, comorbid physical and mental health disorders, quality of life, and daily functioning among cocaine and stimulant abusing/dependent patients. Baseline data from a multi-site randomized clinical trial of vigorous exercise as a treatment strategy for a diverse population of stimulant abusers (N=290) were analyzed. Significant differences between groups were found on drug use characteristics, stimulant use disorders, and comorbid mental and physical health conditions. Findings highlight the importance of integrating health and mental health services into substance abuse treatment and could help identify potential areas for intervention to improve treatment outcomes for racial and ethnic minority groups. |
ASR-based corrective feedback on pronunciation: does it really work? | We studied a group of immigrants who were following regular, teacher-fronted Dutch classes, and who were assigned to three groups using either (a) Dutch CAPT, an ASR-based Computer Assisted Pronunciation Training (CAPT) system that provides feedback on a number of Dutch speech sounds that are problematic for L2 learners; (b) a CAPT system without feedback; or (c) no CAPT system. Participants were tested before and after the training. The results show that the ASR-based feedback was effective in correcting the errors addressed in the training. |
Effect of Core Stabilizing Program on Balance in Spastic Diplegic Cerebral Palsy Children | Background: Balance is a basic requirement for daily activities and plays an important role in both static and dynamic activities. Core stabilization training is thought to improve balance and postural control and to reduce the risk of lower extremity injuries. The purpose of this study was to examine the effect of a core stabilizing program on balance in children with spastic diplegic cerebral palsy. Subjects and Methods: Thirty children with diplegic cerebral palsy of both sexes, aged six to eight years, participated in this study. They were randomly assigned to two groups of equal size: children in control group (A) received selective therapeutic exercises, and children in study group (B) received selective therapeutic exercises plus a core stabilizing program for eight weeks. Each patient in both groups was evaluated before and after treatment with the Biodex Balance System in the balance laboratory of the faculty of physical therapy (anteroposterior, mediolateral and overall stability). Patients in both groups received a traditional physical therapy program for one hour per day, three sessions per week, and group (B) additionally received the core stabilizing program three times per week for eight weeks. Results: There was no significant difference between the two groups in any measured variable at baseline (p>0.05), while there was a significant difference between pre- and post-treatment mean values of all measured variables in each group (p<0.01). When comparing post-treatment mean values between the groups, the results revealed significant improvement in favor of group (B) (p<0.01). Conclusion: A core stabilizing program is an effective therapeutic intervention to improve balance in children with diplegic cerebral palsy. |
Optimum Settings for Automatic Controllers | In this paper, the three principal control effects found in present controllers are examined and practical names and units of measurement are proposed for each effect. Corresponding units are proposed for a classification of industrial processes in terms of the two principal characteristics affecting their controllability. Formulas are given which enable the controller settings to be determined from the experimental or calculated values of the lag and unit reaction rate of the process to be controlled. These units form the basis of a quick method for adjusting a controller on the job. The effect of varying each controller setting is shown in a series of chart records. It is believed that the conceptions of control presented in this paper will be of assistance in the adjustment of existing controller applications and in the design of new installations. |
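For orientation, the reaction-curve tuning rules commonly attributed to this paper (Ziegler-Nichols) are sketched below in their modern textbook form: given the process lag L and unit reaction rate R from an open-loop step test, they yield a proportional gain, reset time, and derivative time. The numeric inputs are placeholders, and the paper itself should be consulted for its original formulation and units.

```python
def reaction_curve_settings(R, L, controller="PID"):
    """Ziegler-Nichols open-loop (reaction curve) tuning rules, textbook form."""
    if controller == "P":
        return {"Kp": 1.0 / (R * L)}
    if controller == "PI":
        return {"Kp": 0.9 / (R * L), "Ti": L / 0.3}
    if controller == "PID":
        return {"Kp": 1.2 / (R * L), "Ti": 2.0 * L, "Td": 0.5 * L}
    raise ValueError(f"unknown controller type: {controller}")

print(reaction_curve_settings(R=0.4, L=1.5))   # hypothetical step-test values
```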
A reconfigurable architecture for hybrid CMOS/Nanodevice circuits | This report describes a preliminary evaluation of performance of a cell-FPGA-like architecture for future hybrid "CMOL" circuits. Such circuits will combine a semiconductor-transistor (CMOS) stack and a two-level nanowire crossbar with molecular-scale two-terminal nanodevices (programmable diodes) formed at each crosspoint. Our cell-based architecture is based on a uniform CMOL fabric of "tiles". Each tile consists of 12 four-transistor basic cells and one (four times larger) latch cell. Due to high density of nanodevices, which may be used for both logic and routing functions, CMOL FPGA may be reconfigured around defective nanodevices to provide high defect tolerance. Using a semi-custom set of design automation tools we have evaluated CMOL FPGA performance for the Toronto 20 benchmark set, so far without optimization of several parameters including the power supply voltage and nanowire pitch. The results show that even without such optimization, CMOL FPGA circuits may provide a density advantage of more than two orders of magnitude over the traditional CMOS FPGA with the same CMOS design rules, at comparable time delay, acceptable power consumption and potentially high defect tolerance. |
Recursive Partitioning Analysis of Lymph Node Ratio in Breast Cancer Patients | Lymph node ratio (LNR) is a powerful prognostic factor for breast cancer. We conducted a recursive partitioning analysis (RPA) of the LNR to identify the prognostic risk groups in breast cancer patients. Records of newly diagnosed breast cancer patients between 2002 and 2006 were searched in the Taiwan Cancer Database. The end of follow-up was December 31, 2009. We excluded patients with distant metastases, inflammatory breast cancer, survival <1 month, no mastectomy, or missing lymph node status. Primary outcome was 5-year overall survival (OS). For univariate significant predictors, RPA was used to determine the risk groups. Among the 11,349 eligible patients, we identified 4 prognostic factors (including LNR) for survival, resulting in 8 terminal nodes. The LNR cutoffs were 0.038, 0.259, and 0.738, which divided LNR into 4 categories: very low (LNR ≤ 0.038), low (0.038 < LNR ≤ 0.259), moderate (0.259 < LNR ≤ 0.738), and high (0.738 < LNR). Then, 4 risk groups were determined as follows: Class 1 (very low risk, 8,265 patients), Class 2 (low risk, 1,901 patients), Class 3 (moderate risk, 274 patients), and Class 4 (high risk, 900 patients). The 5-year OS for Class 1, 2, 3, and 4 were 93.2%, 83.1%, 72.3%, and 56.9%, respectively (P < 0.001). The hazard ratio of death was 2.70, 4.52, and 8.59 (95% confidence interval 2.32-3.13, 3.49-5.86, and 7.48-9.88, respectively) times for Class 2, 3, and 4 compared with Class 1 (P < 0.001). In conclusion, we identified the optimal cutoff LNR values based on RPA and determined the related risk groups, which successfully predict 5-year OS in breast cancer patients. |
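The reported LNR cutoffs translate directly into a small helper. Note that this sketch only reproduces the four LNR categories quoted in the abstract; the full risk classes come from the RPA tree over all four prognostic factors, which is not modeled here.

```python
def lnr_category(lnr):
    """Map a lymph node ratio to the four categories reported in the abstract."""
    if lnr <= 0.038:
        return "very low"
    if lnr <= 0.259:
        return "low"
    if lnr <= 0.738:
        return "moderate"
    return "high"

for value in (0.02, 0.10, 0.50, 0.90):
    print(value, lnr_category(value))
```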
On the Capabilities and Limitations of Reasoning for Natural Language Understanding | Recent systems for natural language understanding are strong at overcoming linguistic variability for lookup style reasoning. Yet, their accuracy drops dramatically as the number of reasoning steps increases. We present the first formal framework to study such empirical observations, addressing the ambiguity, redundancy, incompleteness, and inaccuracy that the use of language introduces when representing a hidden conceptual space. Our formal model uses two interrelated spaces: a conceptual meaning space that is unambiguous and complete but hidden, and a linguistic symbol space that captures a noisy grounding of the meaning space in the symbols or words of a language. We apply this framework to study the connectivity problem in undirected graphs - a core reasoning problem that forms the basis for more complex multi-hop reasoning. We show that it is indeed possible to construct a high-quality algorithm for detecting connectivity in the (latent) meaning graph, based on an observed noisy symbol graph, as long as the noise is below our quantified noise level and only a few hops are needed. On the other hand, we also prove an impossibility result: if a query requires a large number (specifically, logarithmic in the size of the meaning graph) of hops, no reasoning system operating over the symbol graph is likely to recover any useful property of the meaning graph. This highlights a fundamental barrier for a class of reasoning problems and systems, and suggests the need to limit the distance between the two spaces, rather than investing in multi-hop reasoning with "many" hops. |
Tizayuca: A Case Study of Embedded Stones | "Oh God, O father in heaven, you glorify this multitude of flowers. Only in Your shadow, yonder only, can there be a shelter." - Cantares Mexicanos 189 |
Color Compensation of Multicolor FISH Images | Multicolor fluorescence in situ hybridization (M-FISH) techniques provide color karyotyping that allows simultaneous analysis of numerical and structural abnormalities of whole human chromosomes. Chromosomes are stained combinatorially in M-FISH. By analyzing the intensity combinations of each pixel, all chromosome pixels in an image are classified. Due to the overlap of excitation and emission spectra and the broad sensitivity of image sensors, the obtained images contain crosstalk between the color channels. The crosstalk complicates both visual and automatic image analysis and may eventually affect the classification accuracy in M-FISH. The removal of crosstalk is possible by finding the color compensation matrix, which quantifies the color spillover between channels. However, there exists no simple method of finding the color compensation matrix from multichannel fluorescence images whose specimens are combinatorially hybridized. In this paper, we present a method of calculating the color compensation matrix for multichannel fluorescence images whose specimens are combinatorially stained. |
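To make the crosstalk-removal step concrete, here is a minimal linear-unmixing sketch: if the measured channel intensities are modeled as measured = C @ true for a color compensation matrix C, the corrected image is obtained with C's inverse. The matrix below is hypothetical; estimating C from combinatorially stained specimens is the paper's contribution and is not reproduced here.

```python
import numpy as np

# Hypothetical 3-channel crosstalk (color compensation) matrix: row = measured channel,
# column = contribution from each true fluorophore channel.
C = np.array([[1.00, 0.15, 0.05],
              [0.10, 1.00, 0.20],
              [0.02, 0.12, 1.00]])

measured = np.random.default_rng(0).random((64, 64, 3))   # stand-in H x W x channel image
corrected = measured @ np.linalg.inv(C).T                 # unmix each pixel: true = C^-1 @ measured
corrected = np.clip(corrected, 0.0, None)                 # keep intensities non-negative
print(corrected.shape)
```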
Syntactically Guided Neural Machine Translation | We investigate the use of hierarchical phrase-based SMT lattices in end-to-end neural machine translation (NMT). Weight pushing transforms the Hiero scores for complete translation hypotheses, with the full translation grammar score and full ngram language model score, into posteriors compatible with NMT predictive probabilities. With a slightly modified NMT beam-search decoder we find gains over both Hiero and NMT decoding alone, with practical advantages in extending NMT to very large input and output vocabularies. |
Thermal management and cooling of windings in electrical machines for electric vehicle and traction application | Conventional electrical machine cooling includes housing fins, shaft-mounted fan, and water jacket. In these cases, heat from the copper loss of windings needs to pass through either the stator core or the air between windings and the housing. Because of the large thermal resistance in the path, these methods are sometimes not efficient enough for high torque density machines. Overheating of the windings causes failure in the insulation and damages the machine. Many technologies that facilitate winding cooling have been investigated in the literature, such as winding topologies with more efficient heat dissipation capability, impregnation material with high thermal conductivity, and advanced direct winding. This paper reviews and classifies thermal management and cooling methods to provide a guideline for high torque density electrical machine design for better winding thermal management. |
MT Quality Estimation for E-Commerce Data | In this paper we present a system that automatically estimates the quality of machine translated segments of e-commerce data without relying on reference translations. Such approach can be used to estimate the quality of machine translated text in scenarios in which references are not available. Quality estimation (QE) can be applied to select translations to be postedited, choose the best translation from a pool of machine translation (MT) outputs, or help in the process of revision of translations, among other applications. Our approach is based on supervised machine learning algorithms that are used to train models that predict post-editing effort. The post-editing effort is measured according to the translation error rate (TER) between machine translated segments against their human post-edits. The predictions are computed at the segment level and can be easily extended to any kind of text ranging from item titles to item descriptions. In addition, our approach can be applied to different kinds of e-commerce data (e.g. different categories of products). Our models explore linguistic information regarding the complexity of the source sentence, the fluency of the translation in the target language and the adequacy of the translation with respect to its source sentence. In particular, we show that the use of named entity recognition systems as one source of linguistic information substantially improves the models’ performance. In order to evaluate the efficiency of our approach, we evaluate the quality scores assigned by the QE system (predicted TER) against the human posteditions (real TER) using the Pearson correlation coefficient. |
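A hedged sketch of the supervised QE loop described above: train a regressor on segment-level features to predict TER, then compare predicted against real TER with the Pearson correlation coefficient. The features, labels, and choice of gradient boosting are placeholders; the paper's own feature extraction (complexity, fluency, adequacy, named entities) and learner are not reproduced.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((2000, 20))                                     # stand-in segment-level features
y = np.clip(0.6 * X[:, 0] + rng.normal(0.0, 0.1, 2000), 0, 1)  # synthetic TER labels in [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
print("Pearson r between predicted and real TER:", round(pearsonr(pred, y_te)[0], 3))
```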
Going At It Alone: Single-Mother Undergraduate's Experiences | Single-parent undergraduates are an underserved student population who face organizational barriers and negative climates. These barriers negatively impact their collegiate experience and could influence retention. This research uses feminist-informed focus groups to explore the single-parent undergraduate experience and posit suggestions for institutional change. |
Fully Private Noninteractive Face Verification | Face recognition is one of the foremost applications in computer vision, which often involves sensitive signals; privacy concerns have been raised lately and tackled by several recent privacy-preserving face recognition approaches. Those systems either take advantage of information derived from the database templates or require several interaction rounds between client and server, so they cannot address outsourced scenarios. We present a private face verification system that can be executed in the server without interaction, working with encrypted feature vectors for both the templates and the probe face. We achieve this by combining two significant contributions: 1) a novel feature model for Gabor coefficients' magnitude driving a Lloyd-Max quantizer, used for reducing plaintext cardinality with no impact on performance; 2) an extension of a quasi-fully homomorphic encryption able to compute, without interaction, the soft scores of an SVM operating on quantized and encrypted parameters, features and templates. We evaluate the private verification system in terms of time and communication complexity, and in verification accuracy in widely known face databases (XM2VTS, FERET, and LFW). These contributions open the door to completely private and noninteractive outsourcing of face verification. |
Enhancing visual perception of shape through tactile glances | Object shape information is an important parameter in robot grasping tasks. However, it may be difficult to obtain accurate models of novel objects due to incomplete and noisy sensory measurements. In addition, object shape may change due to frequent interaction with the object (cereal boxes, etc). In this paper, we present a probabilistic approach for learning object models based on visual and tactile perception through physical interaction with an object. Our robot explores unknown objects by touching them strategically at parts that are uncertain in terms of shape. The robot starts by using only visual features to form an initial hypothesis about the object shape, then gradually adds tactile measurements to refine the object model. Our experiments involve ten objects of varying shapes and sizes in a real setup. The results show that our method is capable of choosing a small number of touches to construct object models similar to real object shapes and to determine similarities among acquired models. |
Coordination of Directional Overcurrent Relays Using Seeker Algorithm | Coordination of directional overcurrent relays in a multiloop subtransmission or distribution network is formulated as an optimization problem. In this paper, the coordination of directional overcurrent relays is formulated as a mixed-integer nonlinear programming problem and is then solved by a new seeker optimization technique. Based on the act of human searching, in the proposed seeker technique, the search direction and step length are determined in an adaptive way. The proposed method is implemented in three different test cases. The results are compared with previously proposed analytic and evolutionary approaches. |
A Survey on multi-robot systems | This paper reviews the state-of-the-art research on multi-robot systems, with a focus on multi-robot cooperation and coordination. By primarily classifying multi-robot systems into active and passive cooperative systems, three main research topics of multi-robot systems are focused on: task allocation, multi-sensor fusion and localization. In addition, formation control and coordination methods for multi-robots are reviewed. |
SD-Map - A Fast Algorithm for Exhaustive Subgroup Discovery | In this paper we present the novel SD-Map algorithm for exhaustive but efficient subgroup discovery. SD-Map guarantees to identify all interesting subgroup patterns contained in a data set, in contrast to heuristic or samplingbased methods. The SD-Map algorithm utilizes the well-known FP-growth method for mining association rules with adaptations for the subgroup discovery task. We show how SD-Map can handle missing values, and provide an experimental evaluation of the performance of the algorithm using synthetic data. |
Performance of a building integrated photovoltaic/thermal (BIPVT) solar collector | The idea of combining photovoltaic and solar thermal collectors (PVT collectors) to provide electrical and heat energy is an area that has, until recently, received only limited attention. Although PVTs are not as prevalent as solar thermal systems, the integration of photovoltaic and solar thermal collectors into the walls or roofing structure of a building could provide greater opportunity for the use of renewable solar energy technologies. In this study, the design of a novel building integrated photovoltaic/thermal (BIPVT) solar collector was theoretically analysed through the use of a modified Hottel-Whillier model and was validated with experimental data from testing on a prototype BIPVT collector. The results showed that key design parameters such as the fin efficiency, the thermal conductivity between the PV cells and their supporting structure, and the lamination method had a significant influence on both the electrical and thermal efficiency of the BIPVT. Furthermore, it was shown that the BIPVT could be made of lower cost materials, such as pre-coated colour steel, without significant decreases in efficiency. Finally, it was shown that integrating the BIPVT into the building rather than onto the building could result in a lower cost system. This was illustrated by the finding that insulating the rear of the BIPVT may be unnecessary when it is integrated into a roof above an enclosed air filled attic, as this air space acts as a passive insulating barrier. |
A New Super Wideband Fractal Microstrip Antenna | The commercial and military telecommunication systems require ultrawideband antennas. The small physical size and multi-band capability are very important in the design of ultrawideband antennas. Fractals have unique properties such as self-similarity and space-filling. The use of fractal geometry in antenna design provides a good method for achieving the desired miniaturization and multi-band properties. In this communication, a multi-band and broad-band microstrip antenna based on a new fractal geometry is presented. The proposed design is an octagonal fractal microstrip patch antenna. The simulation and optimization are performed using CST Microwave Studio simulator. The results show that the proposed microstrip antenna can be used for 10 GHz -50 GHz frequency range, i.e., it is a super wideband microstrip antenna with 40 GHz bandwidth. Radiation patterns and gains are also studied. |
A Two-Step Method for Clustering Mixed Categorical and Numeric Data | Various clustering algorithms have been developed to group data into clusters in diverse domains. However, these clustering algorithms work effectively either on pure numeric data or on pure categorical data; most of them perform poorly on mixed categorical and numeric data types. In this paper, a new two-step clustering method is presented to find clusters on this kind of data. In this approach the items in categorical attributes are processed to construct the similarity or relationships among them based on the ideas of co-occurrence; then all categorical attributes can be converted into numeric attributes based on these constructed relationships. Finally, since all categorical data are converted into numeric, the existing clustering algorithms can be applied to the dataset without difficulty. Nevertheless, since existing clustering algorithms suffer from various disadvantages or weaknesses, the proposed two-step method integrates hierarchical and partitioning clustering algorithms and adds attributes to cluster objects. This method defines the relationships among items, and improves the weaknesses of applying a single clustering algorithm. Experimental evidence shows that robust results can be achieved by applying this method to cluster mixed numeric and categorical data. |
Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: Automatic construction of onychomycosis datasets by region-based convolutional deep neural network | Although there have been reports of the successful diagnosis of skin disorders using deep learning, unrealistically large clinical image datasets are required for artificial intelligence (AI) training. We created datasets of standardized nail images using a region-based convolutional neural network (R-CNN) trained to distinguish the nail from the background. We used R-CNN to generate training datasets of 49,567 images, which we then used to fine-tune the ResNet-152 and VGG-19 models. The validation datasets comprised 100 and 194 images from Inje University (B1 and B2 datasets, respectively), 125 images from Hallym University (C dataset), and 939 images from Seoul National University (D dataset). The AI (ensemble model; ResNet-152 + VGG-19 + feedforward neural networks) results showed test sensitivity/specificity/area under the curve values of (96.0/94.7/0.98), (82.7/96.7/0.95), (92.3/79.3/0.93), and (87.7/69.3/0.82) for the B1, B2, C, and D datasets, respectively. With a combination of the B1 and C datasets, the AI Youden index was significantly (p = 0.01) higher than that of 42 dermatologists doing the same assessment manually. For the B1+C and B2+D dataset combinations, almost none of the dermatologists performed as well as the AI. By training with a dataset comprising 49,567 images, we achieved a diagnostic accuracy for onychomycosis using deep learning that was superior to that of most of the dermatologists who participated in this study. |
Active Learning of Inverse Models with Intrinsically Motivated Goal Exploration in Robots | We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture makes the robot actively sample novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow it to be solved. For both learning and generalization, the system leverages regression techniques which allow it to infer the motor policy parameters corresponding to a given novel parameterized task, based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: 1) learning the inverse kinematics in a highly redundant robotic arm, 2) learning omnidirectional locomotion with motor primitives in a quadruped robot, 3) an arm learning to control a fishing rod with a flexible wire. We show that 1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; 2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; 3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which parts it cannot. |
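As a rough illustration of competence-progress-driven goal selection (a simplified sketch, not the full SAGG-RIAC architecture; the fixed region split, the window size, and the epsilon-greedy mixing below are assumptions for illustration), goals can be sampled preferentially in task-space regions whose measured competence has recently improved the most:

```python
# Sketch of competence-progress-based goal sampling: the task space is split
# into regions, each keeps a history of competences (e.g. negative distances
# to reached goals), and new goals are drawn preferentially from regions with
# the largest recent change in average competence.
import numpy as np

class GoalSampler:
    def __init__(self, region_bounds, window=20, eps=0.2, rng=None):
        self.regions = region_bounds          # list of (low, high) arrays
        self.history = [[] for _ in region_bounds]
        self.window = window
        self.eps = eps                        # fraction of purely random goals
        self.rng = rng or np.random.default_rng(0)

    def competence_progress(self, i):
        h = self.history[i]
        if len(h) < 2 * self.window:
            return 0.0
        recent, older = h[-self.window:], h[-2 * self.window:-self.window]
        return abs(np.mean(recent) - np.mean(older))

    def sample_goal(self):
        cold_start = all(len(h) < 2 * self.window for h in self.history)
        if cold_start or self.rng.random() < self.eps:
            i = self.rng.integers(len(self.regions))
        else:
            progress = np.array([self.competence_progress(i) for i in range(len(self.regions))])
            probs = progress / progress.sum() if progress.sum() > 0 else None
            i = self.rng.choice(len(self.regions), p=probs)
        low, high = self.regions[i]
        return i, self.rng.uniform(low, high)   # region index and sampled goal

    def update(self, region_index, competence):
        self.history[region_index].append(competence)
```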
Movie genre classification via scene categorization | This paper presents a method for movie genre categorization of movie trailers, based on scene categorization. We view our approach as a step forward from using only low-level visual feature cues, towards the eventual goal of high-level semantic understanding of feature films. Our approach decomposes each trailer into a collection of keyframes through shot boundary analysis. From these keyframes, we use state-of-the-art scene detectors and descriptors to extract features, which are then used for shot categorization via unsupervised learning. This allows us to represent trailers using a bag-of-visual-words (bovw) model with shot classes as vocabularies. We approach the genre classification task by mapping bovw temporally structured trailer features to four high-level movie genres: action, comedy, drama or horror films. We have conducted experiments on 1239 annotated trailers. Our experimental results demonstrate that exploiting scene structures improves film genre classification compared to using only low-level visual features. |
NUMERICAL ANALYSIS OF A SMALL ULTRA WIDEBAND MICROSTRIP-FED TAP MONOPOLE ANTENNA | This paper presents a planar microstrip-fed tab monopole antenna for ultra wideband wireless communications applications. The impedance bandwidth of the antenna is improved by adding a slit in one side of the monopole, introducing a tapered transition between the monopole and the feed line, and adding a two-step staircase notch in the ground plane. A numerical analysis of the antenna dimensional parameters using Ansoft HFSS is performed and presented. The proposed antenna has a small size of 16 × 19 mm and provides an ultra wide bandwidth from 2.8 to 28 GHz with a low VSWR level and good radiation characteristics to satisfy the requirements of current and future wireless communications systems. |
QASCA: A Quality-Aware Task Assignment System for Crowdsourcing Applications | A crowdsourcing system, such as the Amazon Mechanical Turk (AMT), provides a platform for a large number of questions to be answered by Internet workers. Such systems have been shown to be useful for solving problems that are difficult for computers, including entity resolution, sentiment analysis, and image recognition. In this paper, we investigate the online task assignment problem: given a pool of n questions, which k questions should be assigned to a worker? A poor assignment may not only waste time and money, but may also hurt the quality of a crowdsourcing application that depends on the workers' answers. We propose to consider quality measures (also known as evaluation metrics) that are relevant to an application during the task assignment process. Particularly, we explore how Accuracy and F-score, two widely-used evaluation metrics for crowdsourcing applications, can facilitate task assignment. Since these two metrics assume that the ground truth of a question is known, we study their variants that make use of the probability distributions derived from workers' answers. We further investigate online assignment strategies that enable optimal task assignments. Since these algorithms are expensive, we propose solutions that attain high quality in linear time. We develop a system called the Quality-Aware Task Assignment System for Crowdsourcing Applications (QASCA) on top of AMT. We evaluate our approaches on five real crowdsourcing applications. We find that QASCA is efficient and attains better result quality (more than 8% improvement) compared with existing methods. |
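The following toy heuristic is not QASCA's optimization; it only illustrates the flavour of metric-aware assignment by giving an incoming worker the k questions whose current label distribution (derived from previous answers) is least certain, on the grounds that an extra answer there is most likely to change, and hopefully improve, the application's Accuracy:

```python
# Illustrative heuristic only (not QASCA's actual algorithm): assign the
# incoming worker the k questions with the least confident current label
# estimate, given per-question probability distributions over candidate labels.
import numpy as np

def assign_questions(label_distributions, k):
    """label_distributions: (n_questions, n_labels) array, rows sum to 1."""
    dist = np.asarray(label_distributions, dtype=float)
    confidence = dist.max(axis=1)            # probability of the current best label
    order = np.argsort(confidence)           # least confident first
    return order[:k].tolist()

# Example: 5 binary questions, pick 2 for the next worker.
dists = [[0.9, 0.1], [0.55, 0.45], [0.5, 0.5], [0.8, 0.2], [0.6, 0.4]]
print(assign_questions(dists, k=2))          # -> [2, 1]
```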
Analysis of PWM nonlinearity in non-inverting buck-boost power converters | This paper focuses on the non-inverting buck-boost converter operated as either a buck or a boost converter. It is shown that a pulse-width modulation (PWM) discontinuity around the buck/boost mode transition can result in substantial increases in output voltage ripple. The effect of the PWM nonlinearity is studied using periodic steady state analysis to quantify the worst case ripple voltage in terms of design parameters. Furthermore, a bifurcation analysis shows that the PWM discontinuity leads to a quasi-periodic route to chaos, which results in erratic operation around the buck/boost mode transition. The increased ripple is a very significant problem when the converter is used as a power supply for an RF power amplifier, as is the case in WCDMA handsets. An approach is proposed to remove the discontinuity, which results in reduced output voltage ripple at the expense of reduced efficiency, as demonstrated on an experimental prototype. |
Knowledge Management Strategies in Public Sector — Case Study | Knowledge management (KM) as an emerging discipline is becoming increasingly important to organizations seeking to improve their efficiency and competitive abilities. This research aims to investigate knowledge management strategies (KMS) from different fields of knowledge and to identify the critical factors for effective KMS in the public sector and the challenges it faces for the future. This research is possibly the first attempt to investigate empirically the compatibility of KMS in one of the most important Saudi public organizations. To investigate KMS, the research focuses on KM as practiced in the Institute of Public Administration (IPA). The research focuses on factors that may critically influence the development of KMS in the public sector in Saudi Arabia. The main research question is: what are the success factors for effective KMS at the IPA? The research design employed quantitative data collection methods. Questionnaires were distributed to 238 employees in all IPA organizations. The resulting data were analyzed at descriptive and explanatory levels. The research identified 13 critical factors that must be carefully considered to ensure KMS success. The study divided these critical factors into four groups representing different perspectives on KMS, namely: KM resources, KM technology, KM learning and innovation, and KM beneficiaries. By integrating insights from the organizational knowledge, information systems, customer-based knowledge, and organizational learning literatures, this study demonstrates the need to implement complementary strategies. |
Bag of Visual Words and Fusion Methods for Action Recognition: Comprehensive Study and Good Practice | Video based action recognition is one of the important and challenging problems in computer vision research. The bag of visual words model (BoVW) with local features has been very popular for a long time and has obtained state-of-the-art performance on several realistic datasets, such as HMDB51, UCF50, and UCF101. BoVW is a general pipeline to construct a global representation from local features, which is mainly composed of five steps: (i) feature extraction, (ii) feature pre-processing, (iii) codebook generation, (iv) feature encoding, and (v) pooling and normalization. Although many efforts have been made in each step independently in different scenarios, their effects on action recognition are still unknown. Meanwhile, video data exhibits different views of visual patterns, such as static appearance and motion dynamics. Multiple descriptors are usually extracted to represent these different views. Fusing these descriptors is crucial for boosting the final performance of an action recognition system. This paper aims to provide a comprehensive study of all steps in BoVW and different fusion methods, and to uncover good practices for producing a state-of-the-art action recognition system. Specifically, we explore two kinds of local features, ten kinds of encoding methods, eight kinds of pooling and normalization strategies, and three kinds of fusion methods. We conclude that every step is crucial for contributing to the final recognition rate, and an improper choice in one of the steps may counteract the performance improvement of other steps. Furthermore, based on our comprehensive study, we propose a simple yet effective representation, called the hybrid supervector, by exploring the complementarity of different BoVW frameworks with improved dense trajectories. Using this representation, we obtain impressive results on the three challenging datasets: HMDB51 (61.9%), UCF50 (92.3%), and UCF101 (87.9%). |
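A minimal BoVW baseline for steps (iii) to (v) above might look as follows (hard-assignment quantization, sum pooling, L2 normalization and a linear SVM are choices made for brevity; the paper compares many richer alternatives such as Fisher vectors, different normalizations and fusion schemes):

```python
# Minimal BoVW pipeline sketch: k-means codebook, hard-assignment histogram
# encoding, L2 normalization, linear SVM classifier.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

def build_codebook(local_features, k=256):
    # local_features: (N, d) array of descriptors pooled over training videos
    return MiniBatchKMeans(n_clusters=k, random_state=0).fit(local_features)

def encode(video_features, codebook):
    words = codebook.predict(video_features)              # quantize descriptors
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)          # L2 normalization

def train_classifier(videos, labels, codebook):
    X = np.vstack([encode(v, codebook) for v in videos])  # one row per video
    return LinearSVC(C=1.0).fit(X, labels)
```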
SOAP2: an improved ultrafast tool for short read alignment | SUMMARY
SOAP2 is a significantly improved version of the short oligonucleotide alignment program that both reduces computer memory usage and increases alignment speed at an unprecedented rate. We used a Burrows Wheeler Transformation (BWT) compression index to substitute the seed strategy for indexing the reference sequence in the main memory. We tested it on the whole human genome and found that this new algorithm reduced memory usage from 14.7 to 5.4 GB and improved alignment speed by 20-30 times. SOAP2 is compatible with both single- and paired-end reads. Additionally, this tool now supports multiple text and compressed file formats. A consensus builder has also been developed for consensus assembly and SNP detection from alignment of short reads on a reference genome.
AVAILABILITY
http://soap.genomics.org.cn. |
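To illustrate why a BWT-based index keeps the memory footprint small, the toy exact-match backward search below operates on nothing but the BWT string plus two small count tables (illustrative only; SOAP2's compressed index, inexact matching and paired-end handling are far more involved):

```python
# Toy FM-index backward search for exact matching on a small reference.
def bwt(text):
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def fm_index(bwt_str):
    # C[c]: number of characters in the text strictly smaller than c
    counts = {}
    for c in bwt_str:
        counts[c] = counts.get(c, 0) + 1
    C, total = {}, 0
    for c in sorted(counts):
        C[c] = total
        total += counts[c]
    # occ[i][c]: occurrences of c in bwt_str[:i]
    occ = [dict.fromkeys(counts, 0)]
    for ch in bwt_str:
        row = dict(occ[-1])
        row[ch] += 1
        occ.append(row)
    return C, occ

def backward_search(pattern, C, occ, n):
    lo, hi = 0, n                      # current suffix-array interval [lo, hi)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ[lo][c]
        hi = C[c] + occ[hi][c]
        if lo >= hi:
            return 0
    return hi - lo                     # number of exact occurrences

ref = "ACGTACGTTACG"
b = bwt(ref)
C, occ = fm_index(b)
print(backward_search("ACG", C, occ, len(b)))   # -> 3
```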
SceneNN: A Scene Meshes Dataset with aNNotations | Several RGB-D datasets have been publicized over the past few years for facilitating research in computer vision and robotics. However, the lack of comprehensive and fine-grained annotation in these RGB-D datasets has posed challenges to their widespread usage. In this paper, we introduce SceneNN, an RGB-D scene dataset consisting of 100 scenes. All scenes are reconstructed into triangle meshes and have per-vertex and per-pixel annotation. We further enriched the dataset with fine-grained information such as axis-aligned bounding boxes, oriented bounding boxes, and object poses. We used the dataset as a benchmark to evaluate the state-of-the-art methods on relevant research problems such as intrinsic decomposition and shape completion. Our dataset and annotation tools are available at http://www.scenenn.net. |
Toward Precision Healthcare: Context and Mathematical Challenges | Precision medicine refers to the idea of delivering the right treatment to the right patient at the right time, usually with a focus on a data-centered approach to this task. In this perspective piece, we use the term "precision healthcare" to describe the development of precision approaches that bridge from the individual to the population, taking advantage of individual-level data, but also taking the social context into account. These problems give rise to a broad spectrum of technical, scientific, policy, ethical and social challenges, and new mathematical techniques will be required to meet them. To ensure that the science underpinning "precision" is robust, interpretable and well-suited to meet the policy, ethical and social questions that such approaches raise, the mathematical methods for data analysis should be transparent, robust, and able to adapt to errors and uncertainties. In particular, precision methodologies should capture the complexity of data, yet produce tractable descriptions at the relevant resolution while preserving intelligibility and traceability, so that they can be used by practitioners to aid decision-making. Through several case studies in this domain of precision healthcare, we argue that this vision requires the development of new mathematical frameworks, both in modeling and in data analysis and interpretation. |
Treatment of central precocious puberty by GnRH analogs: long-term outcome in men. | In boys, central precocious puberty (CPP) is the appearance of secondary sex characteristics driven by pituitary gonadotropin secretion before the age of 9 years. In recent years, relevant improvements in the treatment of CPP have been achieved. Because CPP is rare in boys, the majority of papers on this issue focus on girls and do not address specific features of male patients regarding end results and safety. In the present paper, recent advances in CPP management with GnRH analogs in men are summarized. End results in untreated and treated patients are also reviewed through an analysis of the recently published literature on the treatment of CPP in men. The available data indicate that therapy with GnRH analogs can improve final height into the range of target height without significant adverse short-term and long-term effects, but longer follow-up of larger series of patients is still required to draw definitive conclusions. |
Fournier's gangrene. Case report. | Fournier's gangrene is a condition marked by fulminant polymicrobial necrotizing fasciitis of the urogenital and perineal areas. We present a patient with Fournier's gangrene and describe the physical examination and bedside sonographic findings. These findings can assist in the evaluation of patients with concerning symptoms so there can be timely administration of antibiotics and specialist consultation when necessary. |
Binary-code obfuscations in prevalent packer tools | The first steps in analyzing defensive malware are understanding what obfuscations are present in real-world malware binaries, how these obfuscations hinder analysis, and how they can be overcome. While some obfuscations have been reported independently, this survey consolidates the discussion while adding substantial depth and breadth to it. This survey also quantifies the relative prevalence of these obfuscations by using the Dyninst binary analysis and instrumentation tool that was recently extended for defensive malware analysis. The goal of this survey is to encourage analysts to focus on resolving the obfuscations that are most prevalent in real-world malware. |
Promise and perils of Dynamic Sensitivity control in IEEE 802.11ax WLANs | Dynamic sensitivity control (DSC) is being discussed within the new IEEE 802.11ax task group as one of the potential techniques to improve system performance for next generation Wi-Fi in high capacity and dense deployment environments, e.g. stadiums, conference venues, shopping malls, etc. However, there appears to be a lack of consensus regarding the adoption of DSC within the group. This paper reports on investigations into the performance of the baseline DSC technique proposed in the IEEE 802.11ax task group under realistic scenarios defined by the task group. Simulations were carried out and the results suggest that, compared with the default case (no DSC), the use of DSC may lead to mixed results in terms of throughput and fairness, with the gain varying depending on factors like inter-AP distance, node distribution, node density and the DSC margin value. Further, we also highlight avenues for mitigating the shortcomings of DSC found in this study. |
Convex Optimization of Wireless Power Transfer Systems With Multiple Transmitters | Wireless power transfer systems with multiple transmitters promise advantages of higher transfer efficiencies and focusing effects over single-transmitter systems. From the standard formulation, straightforward maximization of the power transfer efficiency is not trivial. By reformulating the problem, a convex optimization problem emerges, which can be solved efficiently. Further, using Lagrangian duality theory, analytical results are found for the achievable maximum power transfer efficiency and all parameters involved. With these closed-form results, planar and coaxial wireless power transfer setups are investigated. |
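As a hedged illustration of why such problems admit closed-form treatment (this is a generic formulation, not necessarily the paper's exact notation), the transfer efficiency can often be written as a ratio of quadratic forms in the transmitter current vector:

```latex
% Generic illustration: efficiency as a generalized Rayleigh quotient in the
% transmitter current vector i (A captures delivered load power, B total input power)
\eta(\mathbf{i}) \;=\; \frac{P_{\mathrm{load}}(\mathbf{i})}{P_{\mathrm{in}}(\mathbf{i})}
  \;=\; \frac{\mathbf{i}^{H}\mathbf{A}\,\mathbf{i}}{\mathbf{i}^{H}\mathbf{B}\,\mathbf{i}},
\qquad \mathbf{A}\succeq 0,\;\; \mathbf{B}\succ 0
```

The maximum of such a quotient over the current vector is the largest generalized eigenvalue of the pair (A, B); convex reformulation and Lagrangian duality, as used in the paper, are one route to closed-form optima of this kind.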
Short- and Long-Term Effects of Real-Time Continuous Glucose Monitoring in Patients With Type 2 Diabetes | OBJECTIVE
To determine whether short-time, real-time continuous glucose monitoring (RT-CGM) has long-term salutary glycemic effects in patients with type 2 diabetes who are not on prandial insulin.
RESEARCH DESIGN AND METHODS
This was a randomized controlled trial of 100 adults with type 2 diabetes who were not on prandial insulin. This study compared the effects of 12 weeks of intermittent RT-CGM with self-monitoring of blood glucose (SMBG) on glycemic control over a 40-week follow-up period. Subjects received diabetes care from their regular provider without therapeutic intervention from the study team.
RESULTS
There was a significant difference in A1C at the end of the 3-month active intervention that was sustained during the follow-up period. The mean, unadjusted A1C decreased by 1.0, 1.2, 0.8, and 0.8% in the RT-CGM group vs. 0.5, 0.5, 0.5, and 0.2% in the SMBG group at 12, 24, 38, and 52 weeks, respectively (P = 0.04). There was a significantly greater decline in A1C over the course of the study for the RT-CGM group than for the SMBG group, after adjusting for covariates (P < 0.0001). The subjects who used RT-CGM per protocol (≥48 days) improved the most (P < 0.0001). The improvement in the RT-CGM group occurred without a greater intensification of medication compared with those in the SMBG group.
CONCLUSIONS
Subjects with type 2 diabetes not on prandial insulin who used RT-CGM intermittently for 12 weeks significantly improved glycemic control at 12 weeks and sustained the improvement without RT-CGM during the 40-week follow-up period, compared with those who used only SMBG. |
Similarity search in multimedia databases | The research on multimedia databases involves different areas in Computer Science, such as computer graphics, databases, and information retrieval. There are many practical applications that benefit from this research, e.g., molecular biology, medicine, CAD/CAM, and geography. An important characteristic of these applications is the variety of data that should be supported, e.g., text, images (both still and moving), and audio. This implies that the development of a multimedia information system is considerably more complex than a traditional information system. An important research issue in the field of multimedia databases is the content-based retrieval of similar objects. Given a multimedia query object, the search for an exact match in a database is not meaningful in most applications, because the probability that two multimedia objects are identical is negligible (unless they are digital copies from the same source). For this reason, the development of efficient and effective similarity search techniques has become an important topic in the multimedia database research community. The goal of this advanced technology seminar is to provide an overview of the similarity search problem and to present the state-of-the-art techniques for performing efficient and effective similarity queries in multimedia databases. The seminar begins with an introduction and a motivation of multimedia databases. The two main approaches for describing multimedia objects (as elements in a metric space or in a vector space) are introduced, as well as a description of the "Multimedia Content Description Interface" (MPEG-7) standard. The efficiency issue is addressed for both metric and vector space approaches, describing the data structures and algorithms used to answer similarity queries. For the effectiveness issue, the seminar introduces some widely used retrieval performance measures. Several examples of techniques for particular multimedia applications (text, image, CAD, 3D objects, audio and video) are presented. The seminar outline is as follows: |
Structural Properties and Williamson-Hall Analysis of Mn Doped SmFeO3 | We have synthesized SmFe1-xMnxO3 (x = 0.0, 0.1, 0.2 and 0.3) by the solid state reaction route in order to understand their structural, morphological and dielectric properties. X-ray diffraction (XRD) patterns confirm the single phase nature and the orthorhombic crystal symmetry of our samples. The lattice parameters are determined using the PowderX software and are found to decrease with increasing Mn concentration. Williamson-Hall plots are used to investigate physical parameters such as strain, stress, and energy density using different models, namely the uniform deformation model (UDM), the uniform deformation stress model (UDSM) and the uniform deformation energy density model (UDEDM). The strain, stress, energy density and crystallite size increase as the concentration of Mn increases. |
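For reference, the uniform deformation model (UDM) form of the Williamson-Hall relation used in such analyses is

```latex
% Williamson-Hall relation, uniform deformation model (UDM)
\beta_{hkl}\cos\theta \;=\; \frac{K\lambda}{D} \;+\; 4\,\varepsilon\,\sin\theta
```

where beta_hkl is the peak width (FWHM, in radians), theta the Bragg angle, lambda the X-ray wavelength, K (approximately 0.9) the shape factor, D the crystallite size and epsilon the microstrain; plotting beta_hkl cos(theta) against 4 sin(theta) yields the strain from the slope and the crystallite size from the intercept. The UDSM and UDEDM variants replace the uniform-strain term with expressions involving the stress and the anisotropic elastic modulus, and the deformation energy density, respectively.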
High-power diode laser assisted hard turning of AISI D2 tool steel | Laser technology is being employed at an increasing rate in many industrial applications. The increasing demand for new engineered materials including ceramics, composites and hardened steel requires manufacturing technologies alternative to traditional ones. The use of lasers in hot machining processes is one of them. The current research presents a study on laser assisted turning of hardened AISI D2 tool steel (≈ 60 HRC), widely utilized in the tool making industry, which poses problems with respect to the current state of machining technology due to hard chromium carbide particles present in its microstructure. This research work relates to the application of an analytical model to predict the rate of heating and cooling of the surface of the workpiece material subject to laser heating. Experimental temperature measurements were performed using an infrared thermometer and a thermocouple in order to calibrate and validate the temperature model. The predicted temperature evolution was then used in designing the laser assisted turning process with respect to cutting parameters and kinematics. Cutting tests were performed on a Nakamura Tome-450 CNC lathe on which a 2 kW diode laser (Laserline LDL 80-2000) was integrated. Two machining configurations, grooving and longitudinal turning, were evaluated using carbide tooling with an emphasis on tool life, cutting forces, mechanism of chip formation, workpiece surface temperature, and surface integrity. Cutting tests performed on AISI D2 tool steel when using laser assist showed that an average temperature of about 300 °C in the uncut chip thickness is sufficient for proper LAM. The main mechanisms of tool wear identified during both conventional and laser assisted grooving were cutting edge chipping, flank face abrasion, and adhesion. Built-up edge (BUE) was invariantly present during LAM, which was very stable for low cutting speed (20 m/min) and became unstable with an increase in cutting speed to 30 m/min. The use of laser assist enabled cutting up to a speed of 30 m/min in the grooving cutting tests with good tool performance, which was not possible without the use of the laser. LAM also significantly reduced chatter, which was consistently noticed during conventional machining. The use of laser assist in grooving changed the cutting to thrust force ratio Fc/Fp from ≈ 0.5 in conventional cutting to ≈ 1 during LAM, indicating material softening. Chip thickening was noticed when using LAM, which suggests a decrease of about 10° in the shear angle. No thermal damage was found in the generated subsurface for the grooving experiments. Longitudinal turning tests were performed in two LAM configurations corresponding to two different laser beam orientations: spot slow axis parallel to the workpiece axis (LAM ‖) and spot fast axis parallel to the workpiece axis (LAM ⊥). Chipping and abrasion were identified as the main mechanisms of flank wear for both conventional and LAM tests. During both LAM tests (LAM ‖ and LAM ⊥), chipping was reduced and the tool life improved by about 100% compared with conventional turning. Chip analysis revealed that segmented chips characterized both conventional machining and LAM ‖, while for LAM ⊥ the chips transformed into continuous chips. This observation, together with the measured surface temperature in front of the cutting edge (≈ 400 °C for LAM ‖ and ≈ 600 °C for LAM ⊥), showed that LAM ⊥ is the proper LAM configuration for longitudinal turning. No thermal damage was identified in the generated subsurface, and surface roughness increased with the increase of temperature in the uncut chip thickness. |
The CD226 gene in susceptibility of type 1 diabetes. | The rs763361 single nucleotide polymorphism (SNP) within the CD226 gene has recently been reported as a novel susceptibility locus for type 1 diabetes. The CD226 gene is implicated in the regulation of a number of cells involved in the immune mechanisms leading to beta-cell destruction in type 1 diabetes. The aim of the present study was to confirm the association of the CD226 gene with type 1 diabetes in the Estonian population. The TT genotype [odds ratio (OR) = 2.29, 95% confidence interval (CI) = 1.25-4.18, P = 0.0071] and the T allele (OR = 1.48, 95% CI = 1.11-1.98, P = 0.0084) of the rs763361 SNP were associated with the risk of type 1 diabetes. The current study replicates the novel association of the rs763361 SNP with susceptibility to type 1 diabetes and supports the CD226 gene as a candidate susceptibility locus for type 1 diabetes outside the major histocompatibility complex region. |
Clinical response to canakinumab in Crohn's disease related arthritis | Results: A 4-year-old girl was admitted because of right hip pain. She had been diagnosed with Crohn's disease at the age of 1 year and had been taking sulfasalazine and corticosteroids. She had had septic arthritis in her right hip one year earlier. On admission, we found pain and limitation of movement in the right hip, as well as growth retardation. Her laboratory findings showed elevated acute phase reactants (white blood cells: 20,500/mm3, thrombocytes: 596,000/mm3, ESR: 120 mm/h, CRP: 50.2 mg/L) and anemia (hemoglobin: 8 g/dl). ANA and HLA-B27 were negative. MRI showed arthritis of the right hip joint and of both sacroiliac joints. Glucocorticoids and methotrexate (MTX) were started; however, the patient did not reach complete remission, and etanercept was therefore added to her therapy. A homozygous MEFV mutation (M694V/M694V) was found and colchicine was added to her therapy. After one year, a severe arthritis flare occurred, with an aggressive polyarticular course. In view of the lack of control obtained with etanercept, we decided to switch from etanercept to infliximab, which was administered for 7 doses. Despite this therapy, symptoms and laboratory findings did not regress. We started canakinumab (2 mg/kg/month) therapy, and her arthritis resolved on canakinumab within 3 months. |
A preliminary study of fMRI-guided rTMS in the treatment of generalized anxiety disorder. | BACKGROUND
Repetitive transcranial magnetic stimulation (rTMS) is a noninvasive method that holds promise for treating several psychiatric disorders. Yet the most effective location and parameters for treatment need more exploration. Also, whether rTMS is an effective treatment for individuals with a DSM-IV diagnosis of generalized anxiety disorder (GAD) has not been empirically tested. The goal of this pilot study was to evaluate whether functional magnetic resonance imaging (fMRI)-guided rTMS is effective in reducing symptoms of GAD.
METHOD
Ten participants with a DSM-IV diagnosis of GAD, recruited from the UCLA Anxiety Disorders Program, and between the ages of 18 and 56 years were enrolled in the study from August 2006 to March 2007. A pretreatment symptom provocation fMRI experiment was used to determine the most active location in the prefrontal cortex of the participants. Ten participants completed 6 sessions of rTMS over the course of 3 weeks, stereotactically directed to the previously determined prefrontal location. The primary efficacy measures were the Hamilton Rating Scale for Anxiety (HAM-A) and the Clinical Global Impressions-Improvement of Illness (CGI-I) scale. Response to treatment was defined as a reduction of 50% or more on the HAM-A and a CGI-I score of 1 or 2 ("very much improved" or "much improved," respectively).
RESULTS
Overall, rTMS was associated with significant decreases in HAM-A scores (t = 6.044, p = .001) indicative of clinical improvement in GAD symptoms. At endpoint, 6 (60%) of the 10 participants who completed the study showed reductions of 50% or more on the HAM-A and a CGI-I score of 1 or 2; those 6 subjects also had an endpoint HAM-A score < 8, therefore meeting criteria for remission.
CONCLUSION
Results of the current study suggest that fMRI-guided rTMS treatment may be a beneficial technique for the treatment of anxiety disorders. Limitations include a small sample size and open-label design with a technology that may be associated with a large placebo response. These limitations necessitate further research to determine whether rTMS is indeed effective in treating anxiety disorders. |
Late reperfusion for acute myocardial infarction limits the dilatation of left ventricle without the reduction of infarct size. | BACKGROUND
While previous clinical studies have shown a possible beneficial effect of the reperfusion performed at a relatively late phase of acute myocardial infarction ("late reperfusion") in preventing left ventricular enlargement, the mechanism has not been clarified.
METHODS AND RESULTS
Of 89 patients with an initial anterior myocardial infarction, reperfusion was successful in 69. These 69 were divided into three groups according to the time required to achieve reperfusion after the onset of symptoms: early-reperfused (< 3 hours from the onset to reperfusion; n = 22), intermediate-reperfused (3 to 6 hours from the onset to reperfusion; n = 28), and late-reperfused (> 6 hours from the onset to reperfusion; n = 19). The 20 patients whose infarct-related arteries were occluded in the acute phase as well as 1 month later were classified as nonreperfused. Infarct size, evaluated as defect volume by 201Tl single-photon emission computed tomography 1 month after the onset, was 1593 +/- 652 units (mean +/- SD) in the late-reperfused group, significantly larger (P < .05) than that of the intermediate-reperfused (1066 +/- 546 U) or the early-reperfused groups (372 +/- 453 U) but not different from that of the nonreperfused group (1736 +/- 562 U). Wall motion abnormality index as well as global ejection fraction evaluated by left ventriculography 1 month after the onset showed that late reperfusion did not preserve left ventricular wall motion and function. These results indicate that earlier reperfusion decreased the size of the infarction and preserved left ventricular function, whereas late reperfusion (> 6 hours after onset) did not limit infarct size or preserve left ventricular function. In contrast, the end-diastolic volume index did not differ significantly among the early-reperfused (50 +/- 15 mL/m2), intermediate-reperfused (54 +/- 14 mL/m2), and late-reperfused (53 +/- 19 mL/m2) groups; those were significantly smaller than that of the nonreperfused group (68 +/- 12 mL/m2; P < .05). Left ventriculographic data obtained in both the acute and chronic phase in 39 patients showed that left ventricular volumes increased significantly during the course of myocardial infarction only in the nonreperfused group.
CONCLUSIONS
Late reperfusion appeared to prevent ventricular dilatation after acute myocardial infarction, independent of the limitation of infarct size. |
Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge | Adversarial examples are inputs to machine learning models designed to cause the model to make a mistake. They are useful for understanding the shortcomings of machine learning models, interpreting their results, and for regularisation. In NLP, however, most example generation strategies produce input text by using known, pre-specified semantic transformations, requiring significant manual effort and in-depth understanding of the problem and domain. In this paper, we investigate the problem of automatically generating adversarial examples that violate a set of given First-Order Logic constraints in Natural Language Inference (NLI). We reduce the problem of identifying such adversarial examples to a combinatorial optimisation problem, by maximising a quantity measuring the degree of violation of such constraints and by using a language model for generating linguistically plausible examples. Furthermore, we propose a method for adversarially regularising neural NLI models for incorporating background knowledge. Our results show that, while the proposed method does not always improve results on the SNLI and MultiNLI datasets, it significantly and consistently increases the predictive accuracy on adversarially-crafted datasets, up to a 79.6% relative improvement, while drastically reducing the number of background knowledge violations. Furthermore, we show that adversarial examples transfer among model architectures, and that the proposed adversarial training procedure improves the robustness of NLI models to adversarial examples. |
Effect of Landfill Leachate on the Stream water Quality | The influence of leachate from open solid waste dumping near the Salhad stream (Abbottabad, Pakistan) was investigated to quantify the variations of water quality from August 2007 to April 2008. Samples were collected from five different sites located along the Salhad stream. Two sites were located before the mixing of solid waste leachate with the surface water, one sampling site was of leachate itself, and the other two sampling sites were affected by solid waste leachate. Samples were analyzed for various physical and chemical parameters like pH, water temperature, electrical conductivity (EC), total dissolved solids (TDS), biological oxygen demand (BOD), chemical oxygen demand (COD) and dissolved oxygen (DO). Microbiological analysis was done using the membrane filter technique. The results of the various parameters determined strongly suggested that landfill leachate had a severe deleterious impact on the water quality of the Salhad stream. The parameters exceeding the allowable limits of the WHO, EC and National Environmental Quality Standards included pH, TDS, BOD, COD, total bacterial counts and total coliform counts. Heavy metals like Pb, Cd and Cu were released from the leachate into the Salhad stream, which might affect the sustainability of aquatic life. Integrated, multi-sector approaches are required to deal with the contamination problem and the sustainable management of the Salhad stream water. |
Image-to-image translation for cross-domain disentanglement | Deep image translation methods have recently shown excellent results, outputting high-quality images covering multiple modes of the data distribution. There has also been increased interest in disentangling the internal representations learned by deep methods to further improve their performance and achieve a finer control. In this paper, we bridge these two objectives and introduce the concept of cross-domain disentanglement. We aim to separate the internal representation into three parts. The shared part contains information for both domains. The exclusive parts, on the other hand, contain only factors of variation that are particular to each domain. We achieve this through bidirectional image translation based on Generative Adversarial Networks and cross-domain autoencoders, a novel network component. Our model offers multiple advantages. We can output diverse samples covering multiple modes of the distributions of both domains, perform domain-specific image transfer and interpolation, and cross-domain retrieval without the need of labeled data, only paired images. We compare our model to the state-of-the-art in multi-modal image translation and achieve better results for translation on challenging datasets as well as for cross-domain retrieval on realistic datasets. |
"I am borrowing ya mixing ?" An Analysis of English-Hindi Code Mixing in Facebook | Code-Mixing is a frequently observed phenomenon in social media content generated by multi-lingual users. The processing of such data for linguistic analysis as well as computational modelling is challenging due to the linguistic complexity resulting from the nature of the mixing as well as the presence of non-standard variations in spellings and grammar, and transliteration. Our analysis shows the extent of Code-Mixing in English-Hindi data. The classification of Code-Mixed words based on frequency and linguistic typology underline the fact that while there are easily identifiable cases of borrowing and mixing at the two ends, a large majority of the words form a continuum in the middle, emphasizing the need to handle these at different levels for automatic processing of the data. |
Early sport specialization: roots, effectiveness, risks. | Year-round training in a single sport beginning at a relatively young age is increasingly common among youth. Contributing factors include perceptions of Eastern European sport programs, a parent's desire to give his or her child an edge, labeling youth as talented at an early age, pursuit of scholarships and professional contracts, the sporting goods and services industry, and expertise research. The factors interact with the demands of sport systems. Limiting experiences to a single sport is not the best path to elite status. Risks of early specialization include social isolation, overdependence, burnout, and perhaps risk of overuse injury. Commitment to a single sport at an early age immerses a youngster in a complex world regulated by adults, which is a setting that facilitates manipulation - social, dietary, chemical, and commercial. Youth sport must be kept in perspective. Participants, including talented young athletes, are children and adolescents with the needs of children and adolescents. |
Salience from feature contrast: additivity across dimensions | Test targets ('singletons') that displayed orientation, motion, luminance, or color contrast, or pairwise combinations of these, were presented in line texture arrays, and their saliences were quantified in comparison to reference targets at defined luminance levels. In general, saliency effects in different stimulus dimensions did add, but did not add linearly. That is, targets with feature contrast in two dimensions were generally more salient than targets with only one of these properties, but often less salient than predicted from the sum of the individual saliency components. Salience variations within a dimension were compared with and without a second saliency effect added. The resulting gain reduction in the combined stimulus conditions was interpreted to reflect the amount of overlap between the respective saliency mechanisms. Combinations of orientation and color contrast produced the strongest gain reduction (about 90% for color in orientation) thus indicating the strongest overlap of underlying saliency mechanisms. Combinations of orientation and motion contrast revealed about 50% overlap, slightly smaller rates were found for combinations of color and motion. All combinations with luminance contrast (orientation and luminance, motion and luminance) produced only little gain reduction (<30%) thus indicating a higher degree of independence between the underlying saliency mechanisms than for other stimulus dimensions. |
Soft Material Characterization for Robotic Applications | In this article we present mechanical measurements of three representative elastomers used in soft robotic systems: Sylgard 184, Smooth-Sil 950, and EcoFlex 00-30. Our aim is to demonstrate the effects of the nonlinear, time-dependent properties of these materials to facilitate improved dynamic modeling of soft robotic components. We employ uniaxial pull-to-failure tests, cyclic loading tests, and stress relaxation tests to provide a qualitative assessment of nonlinear behavior, batch-to-batch repeatability, and effects of prestraining, cyclic loading, and viscoelastic stress relaxation. Strain gauges composed of the elastomers embedded with a microchannel of conductive liquid (eutectic gallium–indium) are also tested to quantify the interaction between material behaviors and measured strain output. It is found that all of the materials tested exhibit the Mullins effect, where the material properties in the first loading cycle differ from the properties in all subsequent cycles, as well as response sensitivity to loading rate and production variations. Although the materials tested show stress relaxation effects, the measured output from embedded resistive strain gauges is found to be uncoupled from the changes to the material properties and is only a function of strain. |
Object-Based Multiple Foreground Video Co-segmentation | We present a video co-segmentation method that uses category-independent object proposals as its basic element and can extract multiple foreground objects in a video set. The use of object elements overcomes limitations of low-level feature representations in separating complex foregrounds and backgrounds. We formulate object-based co-segmentation as a co-selection graph in which regions with foreground-like characteristics are favored while also accounting for intra-video and inter-video foreground coherence. To handle multiple foreground objects, we expand the co-selection graph model into a proposed multi-state selection graph model (MSG) that optimizes the segmentations of different objects jointly. This extension into the MSG can be applied not only to our co-selection graph, but also can be used to turn any standard graph model into a multi-state selection solution that can be optimized directly by the existing energy minimization techniques. Our experiments show that our object-based multiple foreground video co-segmentation method (ObMiC) compares well to related techniques on both single and multiple foreground cases. |
The Corticostriatal and Corticosubthalamic Pathways: Two Entries, One Target. So What? | The basal ganglia receive cortical inputs through two main stations - the striatum and the subthalamic nucleus (STN). The information flowing along the corticostriatal system is transmitted to the basal ganglia circuitry via the "direct and indirect" striatofugal pathways, while information that flows through the STN is transmitted along the so-called "hyperdirect" pathway. The functional significance of this dual entry system is not clear. Although the corticostriatal system has been thoroughly characterized anatomically and electrophysiologically, such is not the case for the corticosubthalamic system. In order to provide further insights into the intricacy of this complex anatomical organization, this review examines and compares the anatomical and functional organization of the corticostriatal and corticosubthalamic systems, and highlights some key issues that must be addressed to better understand the mechanisms by which these two neural systems may interact to regulate basal ganglia functions and dysfunctions. |
Response-Time Analysis of Conditional DAG Tasks in Multiprocessor Systems | Different task models have been proposed to represent the parallel structure of real-time tasks executing on manycore platforms: fork/join, synchronous parallel, DAG-based, etc. Although different schedulability tests and resource augmentation bounds are available for these task systems, we experience difficulties in applying such results to real application scenarios, where the execution flow of parallel tasks is characterized by multiple (and nested) conditional structures. When a conditional branch drives the number and size of sub-jobs to spawn, it is hard to decide which execution path to select for modeling the worst-case scenario. To circumvent this problem, we integrate control flow information in the task model, considering conditional parallel tasks (cp-tasks) represented by DAGs composed of both precedence and conditional edges. For this task model, we identify meaningful parameters that characterize the schedulability of the system, and derive efficient algorithms to compute them. A response time analysis based on these parameters is then presented for different scheduling policies. A set of simulations shows that the proposed approach allows efficiently checking the schedulability of the addressed systems, and that it significantly tightens the schedulability analysis of non-conditional (e.g., classic DAG) tasks over existing approaches. |
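For comparison with the unconditional case, the classical volume/length bound for a single DAG task on m identical cores under any work-conserving scheduler can be computed as below (a standard Graham-style bound, not the cp-task response-time analysis proposed in the paper; the node names and WCETs in the example are made up):

```python
# Classical single-task bound for an unconditional DAG on m identical cores:
# R <= critical-path length + (total work - critical-path length) / m.
from functools import lru_cache

def dag_bound(wcet, succ, m):
    """wcet: {node: execution time}, succ: {node: [successor nodes]}, m: cores."""
    @lru_cache(maxsize=None)
    def longest_from(v):
        return wcet[v] + max((longest_from(s) for s in succ.get(v, [])), default=0)

    volume = sum(wcet.values())                     # total work of the DAG
    length = max(longest_from(v) for v in wcet)     # critical path length
    return length + (volume - length) / m           # response-time upper bound

# Example: fork-join shaped DAG, 2 cores.
wcet = {"src": 1, "a": 3, "b": 2, "c": 4, "sink": 1}
succ = {"src": ["a", "b", "c"], "a": ["sink"], "b": ["sink"], "c": ["sink"]}
print(dag_bound(wcet, succ, m=2))                   # 6 + (11 - 6) / 2 = 8.5
```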
What leads to post-implementation success of ERP? An empirical study of the Chinese retail industry | Enterprise Resource Planning (ERP) systems have been implemented globally and their implementation has been extensively studied during the past decade. However, many organizations are still struggling to derive benefits from the implemented ERP systems. Therefore, ensuring post-implementation success has become the focus of the current ERP research. This study develops an integrative model to explain the post-implementation success of ERP, based on the Technology–Organization–Environment (TOE) theory. We posit that ERP implementation quality (the technological aspect) consisting of project management and system configuration, organizational readiness (the organizational aspect) consisting of leadership involvement and organizational fit, and external support (the environmental aspect) will positively affect the post-implementation success of ERP. An empirical test was conducted in the Chinese retail industry. The results show that both ERP implementation quality and organizational readiness significantly affect post-implementation success, whereas external support does not. The theoretical and practical implications of the findings are discussed. |
Perceptual expertise in forensic facial image comparison. | Forensic facial identification examiners are required to match the identity of faces in images that vary substantially, owing to changes in viewing conditions and in a person's appearance. These identifications affect the course and outcome of criminal investigations and convictions. Despite calls for research on sources of human error in forensic examination, existing scientific knowledge of face matching accuracy is based, almost exclusively, on people without formal training. Here, we administered three challenging face matching tests to a group of forensic examiners with many years' experience of comparing face images for law enforcement and government agencies. Examiners outperformed untrained participants and computer algorithms, thereby providing the first evidence that these examiners are experts at this task. Notably, computationally fusing responses of multiple experts produced near-perfect performance. Results also revealed qualitative differences between expert and non-expert performance. First, examiners' superiority was greatest at longer exposure durations, suggestive of more entailed comparison in forensic examiners. Second, experts were less impaired by image inversion than non-expert students, contrasting with face memory studies that show larger face inversion effects in high performers. We conclude that expertise in matching identity across unfamiliar face images is supported by processes that differ qualitatively from those supporting memory for individual faces. |
Relationships between body roundness with body fat and visceral adipose tissue emerging from a new geometrical model | OBJECTIVE
To develop a new geometrical index that combines height, waist circumference (WC), and hip circumference (HC) and relate this index to total and visceral body fat.
DESIGN AND METHODS
Subject data were pooled from three databases that contained demographic, anthropometric, dual energy X-ray absorptiometry (DXA) measured fat mass, and magnetic resonance imaging measured visceral adipose tissue (VAT) volume. Two elliptical models of the human body were developed. Body roundness was calculated from the model using a well-established constant arising from the theory. Regression models based on eccentricity and other variables were used to predict %body fat and %VAT.
RESULTS
A body roundness index (BRI) was derived to quantify the individual body shape in a height-independent manner. Body roundness slightly improved predictions of %body fat and %VAT compared to the traditional metrics of body mass index (BMI), WC, or HC. On this basis, healthy body roundness ranges were established. An automated graphical program simulating study results was placed at http://www.pbrc.edu/bodyroundness.
CONCLUSION
BRI, a new shape measure, is a predictor of %body fat and %VAT and can be applied as a visual tool for health status evaluations. |
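A minimal sketch of the index computation, using the constants usually quoted for this work (treat them as assumptions here rather than independently verified values): the body is modeled as an ellipse with semi-major axis height/2 and semi-minor axis WC/(2*pi), and its eccentricity is mapped linearly to BRI.

```python
# Commonly quoted form of the Body Roundness Index; the constants 364.2 and
# 365.5 are the values usually cited for this work, not verified here.
import math

def body_roundness_index(height_m, waist_circumference_m):
    a = height_m / 2.0                           # semi-major axis of the ellipse
    b = waist_circumference_m / (2.0 * math.pi)  # semi-minor axis from WC
    eccentricity = math.sqrt(1.0 - (b / a) ** 2)
    return 364.2 - 365.5 * eccentricity

# Example: 1.75 m tall with a 0.90 m waist circumference.
print(round(body_roundness_index(1.75, 0.90), 2))
```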
FPGA-based accelerator for long short-term memory recurrent neural networks | Long Short-Term Memory Recurrent neural networks (LSTM-RNNs) have been widely used for speech recognition, machine translation, scene analysis, etc. Unfortunately, general-purpose processors like CPUs and GPGPUs cannot implement LSTM-RNNs efficiently due to the recurrent nature of LSTM-RNNs. FPGA-based accelerators have attracted the attention of researchers because of their good performance, high energy efficiency and great flexibility. In this work, we present an FPGA-based accelerator for LSTM-RNNs that optimizes both computation performance and communication requirements. The peak performance of our accelerator achieves 7.26 GFLOP/S, which significantly outperforms previous approaches. |
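The recurrent dependence that makes LSTM-RNNs awkward for general-purpose processors is visible in the standard cell equations; a minimal NumPy forward pass (the textbook formulation, not the accelerator's actual datapath) is:

```python
# Standard LSTM cell forward pass: h_t and c_t depend on h_{t-1} and c_{t-1},
# which is the sequential dependence that limits parallelism on CPUs/GPGPUs
# and motivates custom accelerators.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias."""
    z = W @ x_t + U @ h_prev + b
    H = h_prev.shape[0]
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2*H])        # forget gate
    o = sigmoid(z[2*H:3*H])      # output gate
    g = np.tanh(z[3*H:4*H])      # candidate cell state
    c_t = f * c_prev + i * g
    h_t = o * np.tanh(c_t)
    return h_t, c_t

# Tiny example: input size 8, hidden size 4, a sequence of 5 time steps.
rng = np.random.default_rng(0)
D, H = 8, 4
W, U, b = rng.standard_normal((4*H, D)), rng.standard_normal((4*H, H)), np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.standard_normal((5, D)):
    h, c = lstm_step(x, h, c, W, U, b)
```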
Femtosecond pulsed laser ablation of metal alloy and semiconductor targets | The properties of metal alloy (CoPt and Inconel) and semiconductor (GaAs and InP) nanoclusters formed via femtosecond laser pulses were investigated. Ablation of the target materials was carried out both in vacuum (10^-4 Pa) and at set pressures in a number of background gases. The results of this work indicate that short laser pulses (low picoseconds/femtoseconds) alone are not enough to guarantee the production of films with stoichiometries matching those of the target materials. The production of stoichiometric alloy films depends on the similarity of the vapor pressures of the target constituents, while the production of stoichiometric compound films requires ablation in the presence of a background gas and compound constituents of comparable mass. |
Large scale tissue histopathology image classification, segmentation, and visualization via deep convolutional activation features | Histopathology image analysis is a gold standard for cancer recognition and diagnosis. Automatic analysis of histopathology images can help pathologists diagnose tumor and cancer subtypes, alleviating the workload of pathologists. There are two basic types of tasks in digital histopathology image analysis: image classification and image segmentation. Typical problems with histopathology images that hamper automatic analysis include complex clinical representations, limited quantities of training images in a dataset, and the extremely large size of singular images (usually up to gigapixels). The property of extremely large size for a single image also makes a histopathology image dataset be considered large-scale, even if the number of images in the dataset is limited. In this paper, we propose leveraging deep convolutional neural network (CNN) activation features to perform classification, segmentation and visualization in large-scale tissue histopathology images. Our framework transfers features extracted from CNNs trained by a large natural image database, ImageNet, to histopathology images. We also explore the characteristics of CNN features by visualizing the response of individual neuron components in the last hidden layer. Some of these characteristics reveal biological insights that have been verified by pathologists. According to our experiments, the framework proposed has shown state-of-the-art performance on a brain tumor dataset from the MICCAI 2014 Brain Tumor Digital Pathology Challenge and a colon cancer histopathology image dataset. The framework proposed is a simple, efficient and effective system for histopathology image automatic analysis. We successfully transfer ImageNet knowledge as deep convolutional activation features to the classification and segmentation of histopathology images with little training data. CNN features are significantly more powerful than expert-designed features. |
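A minimal sketch of the feature-transfer idea described above: extract ImageNet-pretrained CNN activations from image patches and train a linear classifier on them (the torchvision ResNet-50 used here is a stand-in rather than the paper's network, and patch extraction and stain handling are omitted):

```python
# Transfer-learning sketch: ImageNet-pretrained CNN activations as patch
# features, followed by a simple linear classifier.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.linear_model import LogisticRegression

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Identity()          # keep penultimate-layer activations
model.eval()

preprocess = T.Compose([T.Resize(224), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

@torch.no_grad()
def extract_features(pil_patches):
    batch = torch.stack([preprocess(p) for p in pil_patches])
    return model(batch).numpy()          # (n_patches, 2048) activation features

def train_patch_classifier(pil_patches, labels):
    X = extract_features(pil_patches)
    return LogisticRegression(max_iter=1000).fit(X, labels)
```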
Real time pothole detection using Android smartphones with accelerometers | The importance of the road infrastructure for society could be compared with the importance of blood vessels for humans. To ensure road surface quality, it should be monitored continuously and repaired as necessary. The optimal distribution of resources for road repairs is possible provided that comprehensive and objective real time data about the state of the roads are available. Participatory sensing is a promising approach for such data collection. This paper describes a mobile sensing system for road irregularity detection using Android OS based smartphones. Selected data processing algorithms are discussed and their evaluation is presented, with a true positive rate as high as 90% using real world data. The optimal parameters for the algorithms are determined, and recommendations for their application are given. |
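One of the simplest accelerometer-based detectors of the kind evaluated in such systems is a vertical-axis threshold test (the 0.4 g threshold and 1 s refractory window below are illustrative assumptions, not the paper's tuned parameters):

```python
# Simple vertical-axis threshold detector as an illustration of
# accelerometer-based pothole/irregularity detection.
def detect_events(z_samples, timestamps, threshold_g=0.4, refractory_s=1.0):
    """Flag samples whose deviation of vertical acceleration from 1 g exceeds
    the threshold, ignoring re-triggers within a refractory window."""
    events, last_event_time = [], None
    for t, z in zip(timestamps, z_samples):
        if abs(z - 1.0) > threshold_g:
            if last_event_time is None or t - last_event_time > refractory_s:
                events.append(t)
                last_event_time = t
    return events

# Example: 100 Hz samples with one large bump around t = 0.5 s.
ts = [i / 100.0 for i in range(100)]
zs = [1.0] * 100
zs[50] = 1.8
print(detect_events(zs, ts))   # -> [0.5]
```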
Federated Collaborative Filtering for Privacy-Preserving Personalized Recommendation System | The increasing interest in user privacy is leading to new privacy preserving machine learning paradigms. In the Federated Learning paradigm, a master machine learning model is distributed to user clients, and the clients use their locally stored data and model for both inference and calculating model updates. The model updates are sent back and aggregated on the server to update the master model, which is then redistributed to the clients. In this paradigm, the user data never leaves the client, greatly enhancing the user's privacy, in contrast to the traditional paradigm of collecting, storing and processing user data on a backend server beyond the user's control. In this paper we introduce, as far as we are aware, the first federated implementation of a Collaborative Filter. The federated updates to the model are based on a stochastic gradient approach. As a classical case study in machine learning, we explore a personalized recommendation system based on users' implicit feedback and demonstrate the method's applicability to both the MovieLens and an in-house dataset. Empirical validation confirms that a collaborative filter can be federated without a loss of accuracy compared to a standard implementation, hence enhancing the user's privacy in a widely used recommender application while maintaining recommender performance. |
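A minimal sketch of a federated matrix-factorization round (squared loss on observed entries is used for brevity; the paper's implicit-feedback weighting and exact update rules differ): user factors and raw interactions stay on the client, and only item-factor gradients are aggregated on the server.

```python
# Federated matrix-factorization sketch: each client holds its own user factor
# vector and interaction data; only item-factor gradients are sent to the
# server, which aggregates them and updates the shared item matrix.
import numpy as np

def client_update(user_vec, items, ratings, V, lr=0.05, reg=0.01):
    grad_V = np.zeros_like(V)
    for j, r in zip(items, ratings):
        err = user_vec @ V[j] - r
        grad_user = err * V[j] + reg * user_vec
        grad_V[j] += err * user_vec + reg * V[j]
        user_vec -= lr * grad_user            # local update, never leaves client
    return user_vec, grad_V

def server_round(clients, V, lr=0.05):
    total_grad = np.zeros_like(V)
    for c in clients:                         # in practice: a sampled subset
        c["u"], grad_V = client_update(c["u"], c["items"], c["ratings"], V)
        total_grad += grad_V
    return V - lr * total_grad / len(clients) # aggregated item-factor update

# Toy run: 10 clients, 50 items, rank-8 factors.
rng = np.random.default_rng(0)
k, n_items = 8, 50
V = 0.1 * rng.standard_normal((n_items, k))
clients = [{"u": 0.1 * rng.standard_normal(k),
            "items": rng.choice(n_items, 5, replace=False),
            "ratings": rng.random(5)} for _ in range(10)]
for _ in range(20):
    V = server_round(clients, V)
```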
Construct validity of the pediatric Rome III criteria. | OBJECTIVES
Functional gastrointestinal disorders (FGIDs) are common. The diagnosis of FGIDs is based on the Rome criteria, a symptom-based diagnostic classification established by expert consensus. There is little evidence of validity for the pediatric Rome III criteria. The construct validity of the criteria, an overarching term that incorporates other forms of validity, has never been assessed. We assessed the construct validity of the Rome III criteria.
METHODS
Children from 2 schools in Colombia completed the Questionnaire on Pediatric Gastrointestinal Symptoms at baseline and weekly questionnaires of somatic symptoms and disability for 8 weeks (presence and intensity of gastrointestinal symptoms, nongastrointestinal symptoms, impact on daily activities). A total of 255 children completed at least 6 weekly surveys (2041 surveys).
RESULTS
At baseline, 27.8% of children were diagnosed as having an FGID. Prevalence of nausea (Δ 7.8%, 95% confidence interval [CI] 4.46-11.14), constipation (Δ 4.39%, 95% CI 1.79-6.99), diarrhea (Δ 6.69%, 95% CI 3.25-10.13), headache (Δ 7.4%, 95% CI 3.51-11.09), chest pain (Δ 9.04%, 95% CI 5.20-12.88), and limb pain (Δ 4.07%, 95% CI 1.76-6.37) and intensity of nausea (Δ 0.23, 95% CI 0.127-0.333), diarrhea (Δ 0.30, 95% CI 0.211-0.389), abdominal pain (Δ 0.18, 95% CI 0.069-0.291), headache (Δ 0.17, 95% CI 0.091-0.249), and limb pain (Δ 0.30, 95% CI 0.084-0.516) were higher in children with FGIDs (P < 0.001). Children with FGIDs had greater interference with daily activities (P < 0.001).
CONCLUSIONS
Children with a Rome III diagnosis had significantly more gastrointestinal and nongastrointestinal complaints, and greater intensity of symptoms and disability than children without an FGID diagnosis. The study suggests that the Rome III pediatric criteria have adequate construct validity. |
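For orientation only, a hedged illustration of how a 95% confidence interval for a difference in symptom prevalence (the Δ values above) can be computed with a simple Wald interval; the study's actual analysis of repeated weekly surveys was presumably more sophisticated, and the counts below are made-up placeholders:

```python
# Wald confidence interval for a difference of two proportions.
import math

def diff_in_proportions_ci(x1, n1, x2, n2, z=1.96):
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff, (diff - z * se, diff + z * se)

# e.g. nausea reported in 120/600 FGID surveys vs 250/1441 non-FGID surveys
print(diff_in_proportions_ci(120, 600, 250, 1441))
```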
Agile Software Development: The Business of Innovation | The rise and fall of the dot-com-driven Internet economy shouldn't distract us from seeing that the business environment continues to change at a dramatically increasing pace. To thrive in this turbulent environment, we must confront the business need for relentless innovation and forge the future workforce culture. Agile software development approaches such as Extreme Programming, Crystal methods, Lean Development, Scrum, Adaptive Software Development (ASD), and others view change from a perspective that mirrors today's turbulent business and technology environment. In a recent study of more than 200 software development projects, QSM Associates' Michael Mah reported that the researchers couldn't find nearly half of the projects' original plans to measure against. Why? Conforming to plan was no longer the primary goal; instead, satisfying customers—at the time of delivery, not at project initiation—took precedence. In many projects we review, major changes in the requirements, scope, and technology that are outside the development team's control often occur within a project's life span. Accepting that Barry Boehm's life cycle cost differentials theory—the cost of change grows through the software's development life cycle—remains valid, the question today is not how to stop change early in a project but how to better handle inevitable changes throughout its life cycle. Traditional approaches assumed that if we just tried hard enough, we could anticipate the complete set of requirements early and reduce cost by eliminating change. Today, eliminating change early means being unresponsive to business conditions—in other words, business failure. Similarly, traditional process management—by continuous measurement, error identification, and process refinements—strove to drive variations out of processes. This approach assumes that variations are the result of errors. Today, while process problems certainly cause some errors, external environmental changes cause critical variations. Because we cannot eliminate these changes, driving down the cost of responding to them is the only viable strategy. Rather than eliminating rework, the new strategy is to reduce its cost. However, in not just accommodating change, but embracing it, we also must be careful to retain quality. Expectations have grown over the years. The market demands and expects innovative, high-quality software that meets its needs—and soon. Agile methods are a response to this expectation. Their strategy is to reduce the cost of change throughout a project. Extreme Programming (XP), for example, calls for the software development team to • produce the first delivery in weeks, to achieve an early win and rapid …
Sparse representation or collaborative representation: Which helps face recognition? | As a recently proposed technique, sparse representation based classification (SRC) has been widely used for face recognition (FR). SRC first codes a testing sample as a sparse linear combination of all the training samples, and then classifies the testing sample by evaluating which class leads to the minimum representation error. While the importance of sparsity is much emphasized in SRC and many related works, the use of collaborative representation (CR) in SRC is ignored in most of the literature. However, is it really the l1-norm sparsity that improves the FR accuracy? This paper is devoted to analyzing the working mechanism of SRC, and it indicates that it is the CR but not the l1-norm sparsity that makes SRC powerful for face classification. Consequently, we propose a very simple yet much more efficient face classification scheme, namely CR based classification with regularized least square (CRC_RLS). Extensive experiments clearly show that CRC_RLS has very competitive classification results, while it has significantly less complexity than SRC.
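A short sketch of the CRC_RLS idea described above: code the test sample over all training samples with a regularized least squares (ridge) solution and assign the class with the smallest normalized reconstruction residual; the regularization weight and residual normalization follow common practice and are assumptions here:

```python
# Collaborative representation with regularized least squares (sketch).
import numpy as np

def crc_rls_fit(X, lam=0.001):
    """Precompute the ridge projection P = (X^T X + lam*I)^-1 X^T.
    X: d x n matrix whose columns are (normalized) training samples."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T)

def crc_rls_classify(P, X, labels, y):
    """Classify test vector y by the minimum class-wise residual."""
    alpha = P @ y                               # collaborative coding over ALL samples
    best_class, best_score = None, np.inf
    for c in np.unique(labels):
        idx = (labels == c)
        residual = np.linalg.norm(y - X[:, idx] @ alpha[idx])
        score = residual / (np.linalg.norm(alpha[idx]) + 1e-12)
        if score < best_score:
            best_class, best_score = c, score
    return best_class
```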
Architectural criteria for website evaluation - conceptual framework and empirical validation | With the rapid development of the Internet, many types of websites have been developed. This variety of websites makes it necessary to adopt systemized evaluation criteria with a strong theoretical basis. This study proposes a set of evaluation criteria derived from an architectural perspective which has been used for over a thousand years in the evaluation of buildings. The six evaluation criteria are internal reliability and external security for structural robustness, useful content and usable navigation for functional utility, and system interface and communication interface for aesthetic appeal. The impacts of the six criteria on user satisfaction and loyalty have been investigated through a large-scale survey. The study results indicate that the six criteria have different impacts on user satisfaction for different types of websites, which can be classified along two dimensions: users’ goals and users’ activity levels.
A Hybrid Model Combining Convolutional Neural Network with XGBoost for Predicting Social Media Popularity | A hybrid model for social media popularity prediction is proposed by combining Convolutional Neural Network (CNN) with XGBoost. The CNN model is exploited to learn high-level representations from the social cues of the data. These high-level representations are used in XGBoost to predict the popularity of the social posts. We evaluate our approach on a real-world Social Media Prediction (SMP) dataset, which consists of 432K Flickr images. The experimental results show that the proposed approach is effective, achieving the following performance: Spearman's Rho: 0.7406, MSE: 2.7293, MAE: 1.2475. |
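A minimal sketch of the hybrid pipeline, assuming the CNN activations have already been extracted into an array; the random data and the XGBoost hyperparameters are placeholders, not the paper's settings:

```python
# CNN features -> XGBoost regressor for popularity prediction (sketch).
import numpy as np
from xgboost import XGBRegressor
from scipy.stats import spearmanr

# cnn_features: (n_posts, d) CNN activations per post; popularity: targets.
# Random data stands in for the real SMP dataset.
rng = np.random.default_rng(0)
cnn_features = rng.normal(size=(1000, 128))
popularity = rng.normal(size=1000)

model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.1)
model.fit(cnn_features[:800], popularity[:800])

pred = model.predict(cnn_features[800:])
rho, _ = spearmanr(pred, popularity[800:])
mae = np.mean(np.abs(pred - popularity[800:]))
print(f"Spearman rho={rho:.3f}  MAE={mae:.3f}")
```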
Social media technology usage and customer relationship performance: A capabilities-based examination of social CRM | Keywords: Customer relationship management; CRM; Customer relationship performance; Information technology; Marketing capabilities; Social media technology. This study examines how social media technology usage and customer-centric management systems contribute to a firm-level capability of social customer relationship management (CRM). Drawing from the literature in marketing, information systems, and strategic management, the first contribution of this study is the conceptualization and measurement of social CRM capability. The second key contribution is the examination of how social CRM capability is influenced by both customer-centric management systems and social media technologies. These two resources are found to have an interactive effect on the formation of a firm-level capability that is shown to positively relate to customer relationship performance. The study analyzes data from 308 organizations using a structural equation modeling approach. Much like marketing managers in the late 1990s through early 2000s, who participated in the widespread deployment of customer relationship management (CRM) technologies, today's managers are charged with integrating nascent technologies – namely, social media applications – with existing systems and processes to develop new capabilities that foster stronger relationships with customers. This merger of existing CRM systems with social media technology has given way to a new concept of CRM that incorporates a more collaborative and network-focused approach to managing customer relationships. The term social CRM has recently emerged to describe this new way of developing and maintaining customer relationships (Greenberg, 2010). Marketing scholars have defined social CRM as the integration of customer-facing activities, including processes, systems, and technologies, with emergent social media applications to engage customers in collaborative conversations and enhance customer relationships (Greenberg, 2010; Trainor, 2012). Organizations are recognizing the potential of social CRM and have made considerable investments in social CRM technology over the past two years. According to Sarner et al. (2011), spending in social CRM technology increased by more than 40% in 2010 and is expected to exceed $1 billion by 2013. Despite the current hype surrounding social media applications, the efficacy of social CRM technology remains largely unknown and underexplored. Several questions remain unanswered, such as: 1) Can social CRM increase customer retention and loyalty? 2) How do social CRM technologies contribute to firm outcomes? 3) What role is played by CRM processes and technologies? As a result, companies are largely left to experiment with their social application implementations (Sarner et al., 2011), and they …
Californium: Scalable cloud services for the Internet of Things with CoAP | The Internet of Things (IoT) is expected to interconnect a myriad of devices. Emerging networking and backend support technology has to anticipate not only this dramatic increase in connected nodes, but also a change in traffic patterns. Instead of bulk data such as file sharing or multimedia streaming, IoT devices will primarily exchange real-time sensory and control data in small but numerous messages. Often cloud services will handle these data from a huge number of devices, and hence need to be extremely scalable to support conceivable large-scale IoT applications. To this end, we present a system architecture for IoT cloud services based on the Constrained Application Protocol (CoAP), which is primarily designed for systems of tiny, low-cost, resource-constrained IoT devices. Along with our system architecture, we systematically evaluate the performance of the new Web protocol in cloud environments. Our Californium (Cf) CoAP framework shows 33 to 64 times higher throughput than high-performance HTTP Web servers, which are the state of the art for classic cloud services. The results substantiate that the low overhead of CoAP not only enables Web technology for low-cost IoT devices, but also significantly improves backend service scalability for vast numbers of connected devices.
A k-mean clustering algorithm for mixed numeric and categorical data | The use of traditional k-mean-type algorithms is limited to numeric data. This paper presents a clustering algorithm based on the k-mean paradigm that works well for data with mixed numeric and categorical features. We propose a new cost function and distance measure based on the co-occurrence of values. The measures also take into account the significance of an attribute towards the clustering process. We present a modified description of the cluster center to overcome the numeric-data-only limitation of the k-mean algorithm and provide a better characterization of clusters. The performance of this algorithm has been studied on real-world data sets. Comparisons with other clustering algorithms illustrate the effectiveness of this approach.
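A simplified sketch of clustering mixed data in a k-means-style loop, using a generic k-prototypes-style distance (squared Euclidean on numeric attributes plus a weighted categorical mismatch count); this is not the paper's co-occurrence-based cost function, and the weight gamma is an assumption:

```python
# k-means-style loop for mixed numeric/categorical data (illustrative only).
# X_cat must be integer-coded categorical attributes.
import numpy as np

def mixed_distance(x_num, x_cat, c_num, c_cat, gamma=1.0):
    return np.sum((x_num - c_num) ** 2) + gamma * np.sum(x_cat != c_cat)

def cluster_mixed(X_num, X_cat, k, iters=20, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_num), size=k, replace=False)
    cent_num, cent_cat = X_num[idx].astype(float), X_cat[idx].copy()
    for _ in range(iters):
        assign = np.array([
            np.argmin([mixed_distance(xn, xc, cn, cc, gamma)
                       for cn, cc in zip(cent_num, cent_cat)])
            for xn, xc in zip(X_num, X_cat)])
        for j in range(k):
            members = assign == j
            if members.any():
                cent_num[j] = X_num[members].mean(axis=0)      # numeric: mean
                cent_cat[j] = [np.bincount(col).argmax()       # categorical: mode
                               for col in X_cat[members].T]
    return assign, cent_num, cent_cat
```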
CLIP: Continuous Location Integrity and Provenance for Mobile Phones | Many location-based services require a mobile user to continuously prove his location. In absence of a secure mechanism, malicious users may lie about their locations to get these services. Mobility trace, a sequence of past mobility points, provides evidence for the user's locations. In this paper, we propose a Continuous Location Integrity and Provenance (CLIP) Scheme to provide authentication for mobility trace, and protect users' privacy. CLIP uses low-power inertial accelerometer sensor with a light-weight entropy-based commitment mechanism and is able to authenticate the user's mobility trace without any cost of trusted hardware. CLIP maintains the user's privacy, allowing the user to submit a portion of his mobility trace with which the commitment can be also verified. Wireless Access Points (APs) or colocated mobile devices are used to generate the location proofs. We also propose a light-weight spatial-temporal trust model to detect fake location proofs from collusion attacks. The prototype implementation on Android demonstrates that CLIP requires low computational and storage resources. Our extensive simulations show that the spatial-temporal trust model can achieve high (> 0.9) detection accuracy against collusion attacks. |
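A generic hash-commitment sketch to illustrate the commit/verify pattern the abstract refers to; CLIP's entropy-based commitment over mobility traces is more involved than this, and the trace encoding below is an assumption:

```python
# Basic commit/verify primitive (not CLIP's actual scheme).
import hashlib
import os

def commit(mobility_trace: bytes):
    """Commit to a trace; keep the nonce secret until verification time."""
    nonce = os.urandom(16)
    digest = hashlib.sha256(nonce + mobility_trace).hexdigest()
    return digest, nonce

def verify(digest: str, nonce: bytes, revealed_trace: bytes) -> bool:
    return hashlib.sha256(nonce + revealed_trace).hexdigest() == digest

c, r = commit(b"lat=56.95,lon=24.10,t=1200;lat=56.96,lon=24.11,t=1260")
print(verify(c, r, b"lat=56.95,lon=24.10,t=1200;lat=56.96,lon=24.11,t=1260"))
```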
Overview of Ecology and Design | With growing attention to the global environment and environmental protection, and a deepening understanding of ecology, people's environmental and ecological awareness has become increasingly strong, and various related concepts have emerged as well. This article analyzes these concepts and summarizes the practice of ecological design in recent years.
Workspace augmentation of spatial 3-DOF cable parallel robots using differential actuation | In this paper, it is proposed to use spatial differentials instead of independently actuated cables to drive cable robots. Spatial cable differentials consist of several cables attaching the moving platform to the base, all of which are pulled by the same actuator through a differential system. To this end, cable differentials with both planar and spatial architectures are first described, and their resulting properties with respect to force distribution are then presented. Next, a special cable differential is selected and used to design the architecture of two incompletely and fully restrained robots. Finally, by comparing the workspaces of these robots with their classically actuated counterparts, the advantage of using differentials in terms of their wrench-closure and wrench-feasible workspaces is illustrated.
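For context, the statics underlying the wrench-closure and wrench-feasible workspaces mentioned above can be written as follows (the notation is ours, not the paper's); with differential actuation, the tensions of cables sharing one actuator are additionally coupled through the differential:

```latex
% W(x): wrench matrix at pose x, tau: cable tensions, w_e: external wrench.
% Cables can only pull, hence tau >= 0.
\begin{equation}
  \mathbf{W}(\mathbf{x})\,\boldsymbol{\tau} + \mathbf{w}_e = \mathbf{0},
  \qquad \boldsymbol{\tau} \ge \mathbf{0}.
\end{equation}
% Wrench-closure workspace: poses x for which every w_e admits such a tau;
% wrench-feasible workspace: the same with bounds tau_min <= tau <= tau_max.
```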
PAMAS - power aware multi-access protocol with signalling for ad hoc networks | In this paper we develop a new multiaccess protocol for ad hoc radio networks. The protocol is based on the original MACA protocol with the addition of a separate signalling channel. The unique feature of our protocol is that it conserves battery power at nodes by intelligently powering off nodes that are not actively transmitting or receiving packets. The manner in which nodes power themselves off does not influence the delay or throughput characteristics of our protocol. We illustrate the power-conserving behavior of PAMAS via extensive simulations performed over ad hoc networks containing 10-20 nodes. Our results indicate that power savings of between 10% and 70% are attainable in most systems. Finally, we discuss how the idea of power awareness can be built into other multiaccess protocols as well.
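A hedged paraphrase of the power-conservation rule described above, expressed as a small decision function; the actual PAMAS conditions and its probe protocol on the signalling channel are richer than this sketch:

```python
# Illustrative power-off decision for a node (not the full PAMAS protocol).
def should_power_off(has_packets_to_send: bool,
                     neighbor_is_transmitting: bool,
                     neighbor_pair_is_communicating: bool) -> bool:
    # Case 1: nothing to send and a neighbor's transmission would only be
    # overheard, so the radio can sleep without affecting delay or throughput.
    if not has_packets_to_send and neighbor_is_transmitting:
        return True
    # Case 2: packets are queued, but an ongoing neighboring exchange blocks
    # both transmission and interference-free reception, so sleep until it ends.
    if has_packets_to_send and neighbor_pair_is_communicating:
        return True
    return False
```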
Cyberbullying : A Review of the Literature | The article is a literature review on cyberbullying from 2007-2013. Topics covered in the review have been categorized starting with definition of cyberbullying; roles of persons involved and statistics of who is being targeted; reasons for cyberbullying; differences between traditional bullying and cyberbullying; and gender comparisons related to cyberbullying. This introduction to cyberbullying will provide a foundation for developing a cyberbullying intervention/prevention program. |
Design of household appliances for a Dc-based nanogrid system: An induction heating cooktop study case | Efficient energy management is becoming an important issue when designing any electrical system. Recently, a significant effort has been devoted to the design of optimized micro and nanogrids comprising residential area subsystems. One of these approaches consists of the design of a dc-based nanogrid optimized for the interoperation of electric loads, sources, and storage elements. Home appliances are one of the main loads in such dc-based nanogrids. In this paper, the design and optimization of an appliance for operation in a dc-based nanogrid are detailed. An induction heating cooktop appliance is considered as a reference example, with some of the design considerations being generalizable to other appliances. The main design aspects, including the inductor system, power converter, EMC filter, and control, are considered. Finally, some simulation results of the expected converter performance are shown.
Fixed Priority Scheduling of Periodic Task Sets with Arbitrary Deadlines | This paper considers the problem of fixed priority scheduling of periodic tasks with arbitrary deadlines. A general criterion for the schedulability of such a task set is given. Worst case bounds are given which generalize the Liu and Layland bound. The results are shown to provide a basis for developing predictable distributed real-time systems. |
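A sketch of the classic fixed-point response-time recurrence for fixed-priority periodic tasks with deadlines no larger than their periods; the paper's general criterion for arbitrary deadlines extends this with a level-i busy-period analysis over several job instances, which is omitted here for brevity:

```python
# Worst-case response-time iteration (constrained-deadline case).
import math

def response_time(C, T, i, max_iter=1000):
    """Worst-case response time of task i (tasks sorted by priority,
    index 0 = highest). C: execution times, T: periods."""
    R = C[i]
    for _ in range(max_iter):
        interference = sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        R_next = C[i] + interference
        if R_next == R:
            return R          # fixed point reached
        R = R_next
    return None               # did not converge within the iteration budget

# Example: three tasks with (C, T) = (1, 4), (2, 6), (3, 12)
C, T = [1, 2, 3], [4, 6, 12]
print([response_time(C, T, i) for i in range(3)])   # -> [1, 3, 10]
```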
High boost converter using voltage multiplier | With the increasing demand for renewable energy, distributed power included in fuel cells have been studied and developed as a future energy source. For this system, a power conversion circuit is necessary to interface the generated power to the utility. In many cases, a high step-up DC/DC converter is needed to boost low input voltage to high voltage output. Conventional methods using cascade DC/DC converters cause extra complexity and higher cost. The conventional topologies to get high output voltage use flyback DC/DC converters. They have the leakage components that cause stress and loss of energy that results in low efficiency. This paper presents a high boost converter with a voltage multiplier and a coupled inductor. The secondary voltage of the coupled inductor is rectified using a voltage multiplier. High boost voltage is obtained with low duty cycle. Theoretical analysis and experimental results verify the proposed solutions using a 300 W prototype. |
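For reference, the gain limitation that motivates such topologies is visible in the ideal continuous-conduction boost relation below (a standard textbook result); the gain expression of the proposed coupled-inductor/multiplier converter itself depends on the specific topology and is not reproduced here:

```latex
% Ideal boost converter in continuous conduction; D is the duty cycle.
\begin{equation}
  \frac{V_{o}}{V_{in}} = \frac{1}{1-D}
\end{equation}
% Reaching large gains this way pushes D toward 1; the coupled inductor and
% the voltage-multiplier stage described above raise the attainable gain at
% moderate duty cycles.
```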
Automatic Detection and Classification of Colorectal Polyps by Transferring Low-Level CNN Features From Nonmedical Domain | Colorectal cancer (CRC) is a leading cause of cancer deaths worldwide. Although polypectomy at an early stage reduces CRC incidence, 90% of polyps are small and diminutive, and their removal poses risks to patients that may outweigh the benefits. Correctly detecting and predicting polyp type during colonoscopy allows endoscopists to resect and discard the tissue without submitting it for histology, saving time and costs. Nevertheless, human visual observation of early stage polyps varies. Therefore, this paper aims at developing a fully automatic algorithm to detect and classify hyperplastic and adenomatous colorectal polyps. Adenomatous polyps should be removed, whereas distal diminutive hyperplastic polyps are considered clinically insignificant and may be left in situ. A novel transfer learning application is proposed utilizing features learned from big nonmedical datasets with 1.4–2.5 million images using a deep convolutional neural network. The endoscopic images we collected for the experiment were taken under random lighting conditions, zooming and optical magnification, including 1104 endoscopic nonpolyp images taken under both white-light and narrowband imaging (NBI) endoscopy and 826 NBI endoscopic polyp images, of which 263 images were hyperplasia and 563 were adenoma as confirmed by histology. The proposed method first identified polyp images from nonpolyp images and then predicted the polyp histology. When compared with visual inspection by endoscopists, the results of this study show that the proposed method has similar precision (87.3% versus 86.4%) but a higher recall rate (87.6% versus 77.0%) and a higher accuracy (85.9% versus 74.3%). In conclusion, automatic algorithms can assist endoscopists in identifying polyps that are adenomatous but have been incorrectly judged as hyperplasia and, therefore, enable timely resection of these polyps at an early stage before they develop into invasive cancer.
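For clarity, the reported precision, recall, and accuracy follow the usual definitions below, taking the adenoma class as the positive class (our assumption for illustration):

```latex
% TP/FP/FN/TN = true/false positives/negatives for the positive class.
\begin{align*}
  \text{precision} &= \frac{TP}{TP+FP}, &
  \text{recall} &= \frac{TP}{TP+FN}, &
  \text{accuracy} &= \frac{TP+TN}{TP+TN+FP+FN}.
\end{align*}
```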