title: string (8 to 300 characters)
abstract: string (0 to 10k characters)
Why does Deep Learning work? - A perspective from Group Theory
Why does Deep Learning work? What representations does it capture? How do higher-order representations emerge? We study these questions from the perspective of group theory, thereby opening a new approach towards a theory of deep learning. One factor behind the recent resurgence of the subject is a key algorithmic step called pretraining: first search for a good generative model for the input samples, and repeat the process one layer at a time. We show deeper implications of this simple principle, by establishing a connection with the interplay of orbits and stabilizers of group actions. Although the neural networks themselves may not form groups, we show the existence of shadow groups whose elements serve as close approximations. Over the shadow groups, the pretraining step, originally introduced as a mechanism to better initialize a network, becomes equivalent to a search for features with minimal orbits. Intuitively, these features are in a way the simplest, which explains why a deep learning network learns simple features first. Next, we show how the same principle, when repeated in the deeper layers, can capture higher-order representations, and why representation complexity increases as the layers get deeper.
A novel hybrid classification model of artificial neural networks and multiple linear regression models
The classification problem of assigning several observations into different disjoint groups plays an important role in business decision making and many other areas. Developing more accurate and widely applicable classification models has significant implications in these areas. This is why, despite the numerous classification models available, research on improving the effectiveness of these models has never stopped. Combining several models, or using hybrid models, has become a common practice for overcoming the deficiencies of single models and can be an effective way of improving their predictive performance, especially when the models in combination are quite different. In this paper, a novel hybridization of artificial neural networks (ANNs) with multiple linear regression models is proposed in order to yield a more general and more accurate model than traditional artificial neural networks for solving classification problems. Empirical results indicate that the proposed hybrid model achieves effectively improved classification accuracy in comparison with traditional artificial neural networks and also some other classification models such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), K-nearest neighbor (KNN), and support vector machines (SVMs) on benchmark and real-world application data sets. These data sets vary in the number of classes (two versus multiple) and the source of the data (synthetic versus real-world). Therefore, the proposed model can be applied as an appropriate alternative approach for solving classification problems, specifically when higher forecasting accuracy is required.
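The abstract does not spell out the exact hybridization, so the following is only a minimal sketch of one plausible reading: per-class multiple linear regressions produce auxiliary scores that are appended to the original features before an ANN classifier is trained. The scikit-learn calls are standard, but the dataset and the specific combination scheme are assumptions, not the paper's model.

```python
# Sketch of one possible ANN/MLR hybrid classifier (assumed design, not the paper's exact model):
# per-class linear regressions provide auxiliary scores that are fed to an MLP alongside the raw features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=600, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: one linear regression per class against a one-vs-rest indicator.
regs = [LinearRegression().fit(X_tr, (y_tr == c).astype(float)) for c in np.unique(y_tr)]

def augment(X):
    scores = np.column_stack([r.predict(X) for r in regs])
    return np.hstack([X, scores])

# Stage 2: ANN trained on the original features plus the MLR scores.
scaler = StandardScaler().fit(augment(X_tr))
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(scaler.transform(augment(X_tr)), y_tr)
print("hybrid test accuracy:", ann.score(scaler.transform(augment(X_te)), y_te))
```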
Neural Network application in diagnosis of patient: A case study
Patient help is key to successfully managing heart disease and to helping patients maintain their quality of life. In recent years, data mining techniques have been used to extract important hidden information from clinical data, so these methods can provide beneficial information for medical research and health centers. Designing systems that make use of hospital information is therefore required to increase the impact of clinical centers and hospitals. This paper proposes a decision support model able to help a physician, as well as a health care system, manage a heart failure population; it describes work on medical data mining and gives useful information about data mining. Finally, the case study is modelled with a Neural Network (NN). The training data consist of the clinical records of 40 patients, collected at a health center investigating heart problems. The results show that our NN model generates correct predictions for 85% of the test cases.
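As a hedged illustration of the kind of model described, here is a small feed-forward classifier trained on clinical-style records and evaluated on held-out cases. The feature names and the synthetic data are placeholders, since the study's 40-patient dataset is not available.

```python
# Minimal sketch of a small NN classifier for patient diagnosis. The four features (age,
# resting blood pressure, cholesterol, max heart rate) and the toy labeling rule are
# hypothetical stand-ins for the paper's 40 clinical records.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 40  # the study used 40 patient records
X = np.column_stack([rng.normal(55, 10, n), rng.normal(130, 15, n),
                     rng.normal(240, 40, n), rng.normal(150, 20, n)])
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 20, n) > 255).astype(int)  # toy label rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("correct predictions on test cases: %.0f%%" % (100 * model.score(X_te, y_te)))
```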
Satellite Oil Spill Detection Using Artificial Neural Networks
Oil spills represent a major threat to ocean ecosystems and their health. Illicit pollution requires continuous monitoring, and satellite remote sensing technology represents an attractive option for operational oil spill detection. Previous studies have shown that active microwave satellite sensors, particularly Synthetic Aperture Radar (SAR), can be effectively used for the detection and classification of oil spills. Oil spills appear as dark spots in SAR images. However, similar dark spots may arise from a range of unrelated meteorological and oceanographic phenomena, resulting in misidentification. A major focus of research in this area is the development of algorithms to distinguish oil spills from `look-alikes'. This paper describes the development of a new approach to SAR oil spill detection employing two different Artificial Neural Networks (ANN), used in sequence. The first ANN segments a SAR image to identify pixels belonging to candidate oil spill features. A set of statistical feature parameters is then extracted and used to drive a second ANN which classifies objects into oil spills or look-alikes. The proposed algorithm was trained using 97 ERS-2 SAR and ENVISAT ASAR images of individually verified oil spills and/or look-alikes. The algorithm was validated using a large dataset comprising full-swath images and correctly identified 91.6% of reported oil spills and 98.3% of look-alike phenomena. The segmentation stage of the new technique outperformed the established edge detection and adaptive thresholding approaches. An analysis of feature descriptors highlighted the importance of image gradient information in the classification stage.
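The two-stage structure (a pixel-level segmentation ANN followed by an object-level classification ANN) can be sketched as below. The synthetic image, the proxy pixel labels, and the handful of object descriptors are illustrative assumptions; the paper's actual features and network architectures are not reproduced here.

```python
# Sketch of a two-stage ANN pipeline for SAR dark-spot detection, with synthetic stand-ins for data.
# Stage 1 labels pixels as dark-spot candidates; stage 2 would classify each candidate object
# from a few statistical descriptors (area, contrast, gradient strength).
import numpy as np
from scipy import ndimage
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
image = rng.gamma(shape=4.0, scale=0.05, size=(128, 128))     # speckle-like background
image[40:70, 50:90] *= 0.35                                    # a dark slick-like region

# Stage 1: pixel-level segmentation ANN on (intensity, local mean, local std).
local_mean = ndimage.uniform_filter(image, 5)
local_std = np.sqrt(np.maximum(ndimage.uniform_filter(image**2, 5) - local_mean**2, 0))
pix_X = np.column_stack([image.ravel(), local_mean.ravel(), local_std.ravel()])
pix_y = (local_mean < 0.6 * image.mean()).ravel().astype(int)  # proxy labels for the sketch
seg_net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0).fit(pix_X, pix_y)
mask = seg_net.predict(pix_X).reshape(image.shape)

# Stage 2: object-level descriptors for each connected candidate region.
grad = np.hypot(*np.gradient(image))
labels, n_obj = ndimage.label(mask)
feats = []
for i in range(1, n_obj + 1):
    obj = labels == i
    feats.append([obj.sum(), image[obj].mean() / image.mean(), grad[obj].mean()])
feats = np.array(feats)
# A second MLPClassifier would be trained here on verified spill / look-alike examples.
print("candidate objects:", n_obj, "feature matrix shape:", feats.shape)
```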
Near-Synonymy and Lexical Choice
We develop a new computational model for representing the fine-grained meanings of near-synonyms and the differences between them. We also develop a lexical-choice process that can decide which of several near-synonyms is most appropriate in a particular situation. This research has direct applications in machine translation and text generation. We first identify the problems of representing near-synonyms in a computational lexicon and show that no previous model adequately accounts for near-synonymy. We then propose a preliminary theory to account for near-synonymy, relying crucially on the notion of granularity of representation, in which the meaning of a word arises out of a context-dependent combination of a context-independent core meaning and a set of explicit differences to its near-synonyms. That is, near-synonyms cluster together. We then develop a clustered model of lexical knowledge, derived from the conventional ontological model. The model cuts off the ontology at a coarse grain, thus avoiding an awkward proliferation of language-dependent concepts in the ontology, yet maintaining the advantages of efficient computation and reasoning. The model groups near-synonyms into subconceptual clusters that are linked to the ontology. A cluster differentiates near-synonyms in terms of fine-grained aspects of denotation, implication, expressed attitude, and style. The model is general enough to account for other types of variation, for instance, in collocational behavior. An efficient, robust, and flexible fine-grained lexical-choice process is a consequence of a clustered model of lexical knowledge. To make it work, we formalize criteria for lexical choice as preferences to express certain concepts with varying indirectness, to express attitudes, and to establish certain styles. The lexical-choice process itself works on two tiers: between clusters and between near-synonyms within a cluster. We describe our prototype implementation of the system, called I-Saurus.
Optimal rates of convergence for persistence diagrams in Topological Data Analysis
Computational topology has recently seen important developments toward data analysis, giving birth to the field of topological data analysis. Topological persistence, or persistent homology, appears as a fundamental tool in this field. In this paper, we study topological persistence in general metric spaces, with a statistical approach. We show that the use of persistent homology can be naturally considered in general statistical frameworks and persistence diagrams can be used as statistics with interesting convergence properties. Some numerical experiments are performed in various contexts to illustrate our results.
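To make "persistence diagrams as statistics" concrete, here is a self-contained sketch that computes the 0-dimensional persistence diagram of a Vietoris-Rips filtration via union-find (single linkage). Higher-dimensional homology, which the paper's setting covers and which libraries such as GUDHI or Ripser handle, is out of scope for this sketch.

```python
# Minimal sketch: 0-dimensional persistence of a Rips filtration of a point cloud.
# Each component is born at radius 0 and dies when it merges into another one (union-find).
import numpy as np

def persistence_0d(points):
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    diagram = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                      # two components merge: one of them dies at radius w
            diagram.append((0.0, w))
            parent[ri] = rj
    diagram.append((0.0, np.inf))         # the last surviving component never dies
    return np.array(diagram)

rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(3, 0.1, (30, 2))])  # two clusters
diag = persistence_0d(cloud)
print("most persistent finite feature dies at radius %.2f" % diag[np.isfinite(diag[:, 1]), 1].max())
```

The long-lived point in the diagram reflects the two-cluster structure of the sampled data, which is the kind of statistic whose convergence the paper studies.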
On HARQ-IR for Downlink NOMA Systems
Non-orthogonal multiple access (NOMA) can exploit the power difference between the users to achieve a higher spectral efficiency. Thus, the power allocation plays a crucial role in NOMA. In this paper, we study the power allocation for hybrid automatic repeat request (HARQ) in NOMA with two users. For the power allocation, we consider the error exponents of the outage probabilities in HARQ with incremental redundancy (IR) and derive them based on large deviations. While a closed-form expression for the error exponent (or rate function) without interference is available, there is no closed-form expression for the error exponent with interference. Thus, we focus on the derivation of a lower bound on the error exponent in this paper. Based on the error exponents, we formulate a power allocation problem for HARQ-IR in NOMA to guarantee a certain low outage probability for a given maximum number of retransmissions. From the simulation results, we can confirm that it is possible to guarantee a certain outage probability by the proposed power allocation method.
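The paper derives error exponents for HARQ-IR analytically; as a hedged illustration of the underlying quantity, the snippet below estimates single-round outage probabilities for a two-user downlink NOMA power split by Monte Carlo. The power fractions, target rates, and Rayleigh channel statistics are assumptions, and no HARQ combining or error-exponent computation is attempted.

```python
# Illustrative Monte Carlo for two-user downlink NOMA outage under Rayleigh fading.
# This is a generic single-round outage estimate, not the paper's HARQ-IR error exponents.
import numpy as np

rng = np.random.default_rng(0)
trials = 200_000
snr = 10.0                 # transmit SNR (linear)
a1, a2 = 0.8, 0.2          # power fractions: user 1 (weak) gets more power
R1, R2 = 0.5, 1.5          # target rates in bits/s/Hz

g1 = rng.exponential(0.5, trials)   # weak user's channel gain |h1|^2
g2 = rng.exponential(2.0, trials)   # strong user's channel gain |h2|^2

# User 1 decodes its own signal, treating user 2's signal as interference.
sinr1 = a1 * snr * g1 / (a2 * snr * g1 + 1.0)
out1 = np.log2(1 + sinr1) < R1

# User 2 first decodes user 1's signal (SIC), then its own signal interference-free.
sinr_12 = a1 * snr * g2 / (a2 * snr * g2 + 1.0)
sinr2 = a2 * snr * g2
out2 = (np.log2(1 + sinr_12) < R1) | (np.log2(1 + sinr2) < R2)

print("outage user 1: %.4f  outage user 2: %.4f" % (out1.mean(), out2.mean()))
```

Sweeping the power split (a1, a2) in such a simulation is the empirical counterpart of the power allocation problem the paper solves via its lower bound on the error exponent.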
My Life and an Era: The Autobiography of Buck Colbert Franklin
My Life and an Era The Autobiography of Buck Colbert Eranklin Edited by John Hope Franklin and John Whittington Franklin Louisiana State University Press, 1997 288 pp. Cloth, $29.95 Here is a powerful story that transcends mere description of the author's era. It probes deeply the moral and philosophical aspects of cultural relationships among various peoples, the responsibility of government to ensure safety and justice, and what it means to be human in a supposedly democratic society built upon a Judeo-Christian ethic. Buck Colbert Franklin's autobiography joins those of Ada Lois Sipuel, who integrated the Oklahoma University law school in 1948, civil rights activist Clara Luper, and novelist Ralph Ellison in offering its insight into the role black Oklahomans played in the formation of the state and in social reform. Born in 1879 to David and Milley Franklin, Buck Colbert Franklin lived to witness the emergence of Oklahoma from its territorial stage to statehood. For over five decades he watched carefully the events that led to entrenched segregation and, ultimately, to its fall with the 1954 Brown decision. Although he never completed his college work, young Franklin attended Roger Williams College in Nashville and Atlanta Baptist College (now Morehouse College) before returning to Oklahoma where he passed the bar. As an attorney and active community member with national contacts, Franklin occupied a strategic position from which to observe state and national history as it unfolded. He met the great and near-great within the African American society of his time, and his story strengthens and authenticates much of what we know about the plight of African Americans. This autobiography, though, is more than a thoughtful commentary on heritage and the challenges to black existence in a society that limited opportunities and access to political, social, and economic power. The author's discussion of the family values that characterized his upbringing tells us much about the inner life of the black community and other institutional structures in Oklahoma during his era. Although close to his mother, a teacher who was part Choctaw, Franklin enjoyed an even stronger tie to his father, who died during the young man's college years. One senses the closeness between father and son as David Franklin tries to shelter his offspring from the harshness of discrimination and when he protects him from frontier dangers as the two make a long journey from southern Oklahoma to what is now the western part of the state. Later, Buck Colbert Franklin shows the same devotion to his four children -- B. C. Jr., Mozzella, Anne, and John Hope. The author's wife, Mollie, played a central role in his life and in the black community. Readers interested in women's history will profit from the attention Franklin has given his spouse and her activities. A community-minded person, Mollie Parker Franklin created the first day nursery for black children in Tulsa, after the family moved there from the all-black town of Rentiesville. With a deep concern for young people, she worked to get employment for black youths in local businesses that had not previously employed them. For a time she also served as the national recording secretary of the Colored Women's Clubs and enjoyed a close relationship with one of its presidents, Mary F. Waring, who visited with the Franklins during her visits to the state. Religion also occupies a prominent place in Franklin's work. 
A church-going man of deep spiritual conviction, he nonetheless had little regard for narrow denominationalism. He saw firsthand how an iron-clad commitment to a particular faith divided the community of Rentiesville, to which he and his family had moved after leaving Ardmore, Oklahoma, where he had practiced law. Although specific circumstances played an important role in his liberalism, Franklin's attitude arose fundamentally from the teachings of his parents, especially his father, who "didn't care a hoot" for denominations. …
Learning Non-local Image Diffusion for Image Denoising
Image diffusion plays a fundamental role in the task of image denoising. The recently proposed trainable nonlinear reaction diffusion (TNRD) model defines a simple but very effective framework for image denoising. However, as the TNRD model is a local model, whose diffusion behavior is purely controlled by information from local patches, it is prone to creating artifacts in homogeneous regions and over-smoothing highly textured regions, especially at strong noise levels. Meanwhile, it is widely known that the non-local self-similarity (NSS) prior is an effective image prior for image denoising, which has been widely exploited in many non-local methods. In this work, we embed the NSS prior into the TNRD model to address these weaknesses. To preserve the ability to train the model end-to-end, we exploit the NSS prior by defining a set of non-local filters, and derive our proposed trainable non-local reaction diffusion (TNLRD) model for image denoising. Together with the local filters and influence functions, the non-local filters are learned by employing loss-specific training. The experimental results show that the trained TNLRD model produces visually plausible recovered images with more textures and fewer artifacts, compared to its local versions. Moreover, the trained TNLRD model achieves performance that is strongly competitive with recent state-of-the-art image denoising methods in terms of peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM).
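To make the diffusion framework concrete, here is a sketch of one local reaction-diffusion update in the spirit of TNRD. In the real model the filters and influence functions are learned, and TNLRD adds non-local filters over similar patches; here fixed finite-difference filters and a Perona-Malik-type influence function stand in, so this is only a structural illustration.

```python
# One reaction-diffusion update: filter the image, pass the filter responses through influence
# functions, filter back with the rotated kernels, and add a data-fidelity term.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))           # toy image: horizontal ramp
noisy = clean + rng.normal(0, 0.1, clean.shape)

kx = np.array([[0., 0., 0.], [-1., 1., 0.], [0., 0., 0.]])    # horizontal difference filter
ky = kx.T                                                     # vertical difference filter
influence = lambda v: v / (1.0 + (v / 0.2) ** 2)              # Perona-Malik-type edge-stopping function
lam, step = 0.1, 0.2

x = noisy.copy()
for _ in range(30):
    diffusion = np.zeros_like(x)
    for k in (kx, ky):
        response = convolve(x, k, mode='reflect')
        diffusion += convolve(influence(response), np.rot90(k, 2), mode='reflect')
    x = x - step * (diffusion + lam * (x - noisy))            # gradient-descent-like update

print("MSE noisy: %.4f  MSE after diffusion: %.4f" %
      (((noisy - clean) ** 2).mean(), ((x - clean) ** 2).mean()))
```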
Scheduling and management of data intensive application workflows in grid and cloud computing environments
Large-scale scientific experiments are being conducted in collaboration with teams that are dispersed globally. Each team shares its data and utilizes distributed resources for conducting experiments. As a result, scientific data are replicated and cached at distributed locations around the world. These data are part of application workflows, which are designed to reduce the complexity of executing and managing applications on distributed computing environments. In order to execute these workflows in a time- and cost-efficient manner, a workflow management system must take into account the presence of multiple data sources in addition to distributed compute resources provided by platforms such as Grids and Clouds. Therefore, this thesis builds upon an existing workflow architecture and proposes enhanced scheduling algorithms, specifically designed for managing data intensive applications. It begins with a comprehensive survey of scheduling techniques that formed the core of Grid systems in the past. It proposes an architecture that incorporates data management components and examines its practical feasibility by executing several real world applications, such as Functional Magnetic Resonance Imaging (fMRI), Evolutionary Multi-objective Optimization algorithms, and so forth, using distributed Grid and Cloud resources. It then proposes several heuristics-based algorithms that take into account the time and cost incurred for transferring data from multiple sources while scheduling tasks. All the heuristics proposed are based on a multi-source-parallel-data-retrieval technique, in contrast to retrieving data from a single best resource, as done in the past. In addition to a non-linear modeling approach, the thesis explores iterative techniques, such as particle-swarm optimization, to obtain schedules quicker. In summary, this thesis makes several contributions towards the scheduling and management of data intensive application workflows. The major contributions are: (i) enhanced the abstract workflow architecture by including components that handle multi-source parallel data transfers; (ii) deployed several real-world application workflows using the proposed architecture and tested the feasibility of the design on real testbeds; (iii) proposed a non-linear model for scheduling workflows with an objective to minimize both execution time and execution cost; (iv) proposed static and dynamic workflow scheduling heuristics that leverage the presence of multiple data sources to minimize total execution time; (v) designed and implemented a particle-swarm-optimization based heuristic that provides feasible solutions to the workflow scheduling problem with good convergence; (vi) implemented a prototype workflow management system that consists of a portal as user interface, a workflow engine that implements all the proposed scheduling heuristics and the real-world application workflows, and plugins to communicate with Grid and Cloud resources.
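The particle-swarm idea mentioned above can be sketched as follows: continuous particle positions are rounded to task-to-resource assignments and scored by a weighted time/cost objective. Task sizes, resource speeds, and prices are hypothetical, and the thesis's data-transfer modelling and task dependencies are deliberately ignored.

```python
# Minimal particle-swarm sketch for mapping workflow tasks onto resources so as to trade off
# makespan and cost. This ignores task dependencies and multi-source data transfers.
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_res = 12, 3
work = rng.uniform(1, 10, n_tasks)            # task sizes (hypothetical units)
speed = np.array([1.0, 2.0, 4.0])             # resource speeds
price = np.array([0.1, 0.3, 0.9])             # cost per unit time

def fitness(assign):
    t = work / speed[assign]                                  # per-task runtimes
    makespan = np.bincount(assign, weights=t, minlength=n_res).max()
    cost = (t * price[assign]).sum()
    return makespan + 0.5 * cost                              # weighted time/cost objective

n_particles, iters = 30, 100
pos = rng.uniform(0, n_res, (n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(np.clip(p, 0, n_res - 1).astype(int)) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_res - 1e-6)
    vals = np.array([fitness(p.astype(int)) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best objective:", round(pbest_val.min(), 3), "assignment:", gbest.astype(int))
```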
The Use of Immersive Virtual Reality in the Learning Sciences: Digital Transformations of Teachers, Students, and Social Context
This paper illustrates the utility of using virtual environments to transform social interaction via behavior and context, with the goal of improving learning in digital environments. We first describe the technology and theories behind virtual environments, and then report data from four empirical studies. In Experiment 1 we demonstrated that teachers with augmented social perception (i.e., receiving visual warnings alerting them to students not receiving enough teacher eye gaze) were able to spread their attention more equally among students than teachers without augmented perception. In Experiments 2 and 3, we demonstrated that by breaking the rules of spatial proximity that exist in physical space, students can learn more by being in the center of the teacher’s field of view (compared to the periphery) and by being closer to the teacher (compared to farther away). In Experiment 4, we demonstrated that inserting virtual co-learners who were either model students or distracting students changed the learning abilities of experimental subjects who conformed to the virtual co-learners. Results suggest that virtual environments will have a unique ability to alter the social dynamics of learning environments via transformed social interaction. Introduction: Many researchers have investigated the viability of virtual environments (VEs), digital simulations that involve representations of teachers, students and/or content, for learning applications. In this article, we describe how VEs enable transformed social interaction (TSI), the ability of teachers and students to use digital technology to strategically alter their online representations and contexts in order to improve learning. We present evidence from a series of empirical studies that demonstrate how breaking the social physics of traditional learning environments can increase learning in VEs. Of course, immersive virtual reality is not yet an easily acquired technology in classroom settings. Nevertheless, VEs are becoming more commonplace, and it is important to understand how this digital technology will aid the basic learning process. In this Introduction, we first provide a discussion of the taxonomies of VEs in general, and discuss previous implementations of learning systems in VEs. We next provide an assimilation of the literature on learning in VEs, focusing on the unique affordances provided by VEs not possible in face-to-face settings, including explicating our theory of TSI. Finally, we provide an overview of the current experiments. Definitions and Taxonomies of Virtual Environments: VEs are distinct from other types of multimedia learning environments (e.g., Mayer, 2001). In this paper, we define VEs as “synthetic sensory information that leads to perceptions of environments and their contents as if they were not synthetic” (Blascovich, et al., 2002, p. 105). Typically, digital computers are used to generate these images and to enable real-time interaction between users and VEs.
In principle, people can interact with a VE by using any perceptual channel, including visual (e.g., by wearing a head-mounted display with digital displays that project VEs), auditory (e.g., by wearing earphones that help localize sound in VEs), haptic (e.g., by wearing gloves that use mechanical feedback or air-blast systems that simulate contact with objects in VEs), or olfactory (e.g., by wearing a nosepiece or collars that release different smells when a person approaches different objects in VEs). An immersive virtual environment (IVE) is one that perceptually surrounds the user, increasing his or her sense of presence or actually being within it. Consider a child’s video game; playing that game using a joystick and a television set is a VE. On the other hand, if the child were to have special equipment that allowed her to take on the actual point of view of the main character of the video game, that is, to control that character’s movements with her own movements such that the child is actually inside the video game, then she is in an IVE. In other words, in an IVE, the sensory information of the VE is more psychologically prominent and engaging than the sensory information of the outside physical world. For this to occur, IVEs typically include two characteristic systems. First, the users are unobtrusively tracked physically as they interact with the IVE. User actions such as head orientation and body position (e.g., the direction of their gaze) are automatically and continually recorded and the IVE, in turn, is updated to reflect the changes resulting from these actions. In this way, as a person in the IVE moves, the tracking technology senses this movement and renders the virtual scene to match the user’s position and orientation. Second, sensory information from the physical world is kept to a minimum. For example, in an IVE that relies on visual images, the user wears a head-mounted display (HMD) or sits in a dedicated projection room. By doing so, the user cannot see the objects from the physical world, and consequently it is easier for them to become enveloped by the synthetic information. There are two important features of IVEs that will continually surface in later discussions. The first is that IVEs necessarily track a user’s movements, including body position, head direction, as well as facial expressions and gestures, thereby providing a wealth of information about where in the IVE the user is focusing his or her attention, what he observes from that specific vantage point, and his reactions to the environment. The second is that the designer of an IVE has a tremendous control over the user’s experience, and can alter the appearance and design of the virtual world to fit experimental goals, providing a wealth of real-time adjustments to specific user actions. Of course there are limitations to IVEs given current technology. The past few years have demonstrated a sharp acceleration of the realism of VEs and IVEs. However, the technology still has quite a long way to go before the photographic realism and behavioral realism (i.e., gestures, intonations, facial expressions) of avatars, digital representations of one another in IVEs, approaches the realism of actual people. Moreover, while technology for visual and auditory IVEs steadily develops, systems for the other senses (i.e., haptic) are not progressing as quickly.
Consequently, it may be some years before the technology rivals a “real world” experience. And finally, some users of IVEs experience simulator sickness, a feeling of discomfort resulting from the optics of particular technological configurations. However, a recent longitudinal study has demonstrated that simulator sickness is extremely rare today, given the speed of current tracking and graphics systems, and also that the effects for a given user tend to diminish over time (Bailenson & Yee, 2006). Collaborative Virtual Environments (CVEs) involve more than a single user. CVE users interact via avatars. For example, while in a CVE, as Person A communicates verbally and nonverbally in one location, the CVE technology can nearly instantaneously track his or her movements, gestures, expressions and sounds. Person B, in another location, sees and hears Person A’s avatar exhibiting these behaviors in his own version of the CVE when it is networked to Person A’s CVE. Person B’s CVE system then sends all of the tracking information relevant to his own communications over the network to Person A’s system, which then renders all of those movements via Person B’s avatar, which Person A can see and hear. This bidirectional process—tracking the users’ actions, sending those actions over the network, and rendering those actions simultaneously for each user—occurs at an extremely high frequency (e.g., 60 Hz). Traditionally, researchers have distinguished embodied agents, which are models driven by computer algorithms, from avatars, which are models driven by humans in real time. Most research examining learning in virtual environments has utilized embodied agents (as opposed to avatars; see Bailenson & Blascovich, 2004, for a discussion). One reason for this disparity is that readily available commercial technology allowing individuals to create digital avatars that can look like and behave in real time like the person they represent has emerged only recently. Previously, producing real-time avatars that captured the user’s voice, visual features and subtle movements was quite difficult. Consequently, understanding the implications of the visual and behavioral veridicality of an avatar on the quality of interaction is an important question that has received very little attention.
Succeeding through Disruption : Exploring the Factors Influencing the Adoption of Disruptive Technologies in the Mobile Telecommunications Industry in Zimbabwe Africa Makasi
The research explored factors influencing the adoption of disruptive technologies in the mobile telecommunications industry in Zimbabwe. Data was gathered from the second biggest competitor in the industry with over 3 million subscribers. A survey was conducted by purposively selecting 70 respondents from a population of 3,000,000 (three million) active subscribers from the company’s database. A skip interval of 42,857 was used to randomly select the sample. Customer representatives were selected from the company’s five regional offices using a two-stage cluster sampling technique. Employee participants were purposively selected from the company’s head office. A pilot test was conducted to assess the reliability of the research instrument used. Self-administered questionnaires were used in the research. Research results were collected, recorded and analyzed. All T-tests conducted produced results with p= 0.001 (which was less than 0.05 at which the tests were conducted) indicating that internal company influences such as staff competence, availability of funding and the type of infrastructure help impede or accelerate the rate of adoption of disruptive technologies in companies. Future research should however look at organizational ambidexterity as well as exploitation and exploration paradigms in organizations in the telecommunications industry and their impact on the adoption of disruptive technologies.
A Hybrid 18-Pulse Rectification Scheme for Diode Front-End Rectifiers With Large DC-Bus Capacitor
Diode rectifiers with large dc-bus capacitors, used in the front ends of variable-frequency drives (VFDs) and other ac-to-dc converters, draw discontinuous current from the power system, resulting in current distortion and, hence, voltage distortion. Typically, the power system can handle current distortion without showing signs of voltage distortion. However, when the majority of the load on a distribution feeder is made up of VFDs, current distortion becomes an important issue since it can cause voltage distortion. Multipulse techniques to reduce input current harmonics are popular because they do not interfere with the existing power system either from higher conducted electromagnetic interference, when active techniques are used, or from possible resonance, when capacitor-based filters are employed. In this paper, a new 18-pulse topology is proposed that has two six-pulse rectifiers powered via a phase-shifting isolation transformer, while the third six-pulse rectifier is fed directly from the ac source via a matching inductor. This idea relies on harmonic current cancellation strategy rather than flux cancellation method and results in lower overall harmonics. It is also seen to be smaller in size and weight and lower in cost compared to an isolation transformer. Experimental results are given to validate the concept.
Oil prices, tourism income and economic growth: a structural VAR approach for European Mediterranean countries.
Abstract: In this study, a Structural VAR model is employed to investigate the relationship among oil price shocks, tourism variables and economic indicators in four European Mediterranean countries. In contrast with the current tourism literature, we distinguish between three oil price shocks, namely, supply-side, aggregate demand and oil-specific demand shocks. Overall, our results indicate that oil-specific demand shocks contemporaneously affect inflation and the tourism sector equity index, whereas these shocks do not seem to have any lagged effects. By contrast, aggregate demand oil price shocks exercise a lagged effect, either directly or indirectly, on tourism-generated income and economic growth. The paper does not provide any evidence that supply-side shocks trigger any responses from the remaining variables. The results are important for tourism agents and policy makers, should they need to create hedging strategies against future oil price movements or plan for economic policy developments.
Multi-band multi-mode SDR radar platform
This paper presents the design results of a multi-band, multi-mode software-defined radar (SDR) system. The SDR platform consists of multi-band RF modules for the S, X, and K bands, and a multi-mode digital module with a waveform generator for CW, Pulse, FMCW, and LFM Chirp signals, as well as a reconfigurable SDR-GUI software module for the user interface. This platform can be used for various applications such as security monitoring, collision avoidance, traffic monitoring, and radar imaging.
A geometric approach to monitoring threshold functions over distributed data streams
Monitoring data streams in a distributed system has been the focus of much research in recent years. Most of the proposed schemes, however, deal with monitoring simple aggregated values, such as the frequency of appearance of items in the streams. More involved challenges, such as the important task of feature selection (e.g., by monitoring the information gain of various features), still require very high communication overhead using naive, centralized algorithms. We present a novel geometric approach by which an arbitrary global monitoring task can be split into a set of constraints applied locally on each of the streams. The constraints are used to locally filter out data increments that do not affect the monitoring outcome, thus avoiding unnecessary communication. As a result, our approach enables monitoring of arbitrary threshold functions over distributed data streams in an efficient manner. We present experimental results on real-world data which demonstrate that our algorithms are highly scalable, and considerably reduce communication load in comparison to centralized algorithms.
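A simplified sketch of the core idea follows: the coordinator fixes a reference point (the average of the last-reported local vectors), and each node stays silent as long as the ball whose diameter connects the reference point to its own drifted local vector keeps the monitored function on one side of the threshold. The exact method uses geometric monochromaticity tests; here the ball test is approximated by random sampling, and the monitored function and the numbers are illustrative assumptions.

```python
# Simplified local filtering for geometric threshold monitoring: a node reports only when the
# ball spanned by the reference point and its drift vector may cross the surface f(x) = T.
import numpy as np

rng = np.random.default_rng(0)

def f(x):                      # monitored threshold function of the global average vector
    return x[0] * x[1]         # toy nonlinear statistic (the paper's example is information gain)

T = 0.5                        # threshold

def ball_crosses_threshold(center, radius, n_samples=2000):
    d = len(center)
    u = rng.normal(size=(n_samples, d))
    u *= (rng.random((n_samples, 1)) ** (1.0 / d)) * radius / np.linalg.norm(u, axis=1, keepdims=True)
    vals = np.array([f(center + p) for p in u]) - T
    return vals.min() < 0 < vals.max()      # ball touches both sides -> possible violation

def node_should_report(reference, last_reported, current_local):
    drift = reference + (current_local - last_reported)       # node's drift vector
    center = (reference + drift) / 2.0
    radius = np.linalg.norm(drift - reference) / 2.0
    return ball_crosses_threshold(center, radius)

reference = np.array([0.6, 0.6])                 # average of last-reported local vectors
last_reported = np.array([0.55, 0.65])
small_update = last_reported + np.array([0.01, -0.01])
large_update = last_reported + np.array([0.4, 0.3])
print("report after small update?", node_should_report(reference, last_reported, small_update))
print("report after large update?", node_should_report(reference, last_reported, large_update))
```

Because the union of these balls covers the true global average, collective local silence certifies that the global threshold condition still holds, which is what removes the need for constant communication.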
Doubly Stochastic Variational Inference for Deep Gaussian Processes
Gaussian processes (GPs) are a good choice for function approximation as they are flexible, robust to overfitting, and provide well-calibrated predictive uncertainty. Deep Gaussian processes (DGPs) are multi-layer generalizations of GPs, but inference in these models has proved challenging. Existing approaches to inference in DGP models assume approximate posteriors that force independence between the layers, and do not work well in practice. We present a doubly stochastic variational inference algorithm that does not force independence between layers. With our method of inference we demonstrate that a DGP model can be used effectively on data ranging in size from hundreds to a billion points. We provide strong empirical evidence that our inference scheme for DGPs works well in practice in both classification and regression.
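A conceptual sketch of the "doubly stochastic" part: the approximate posterior is sampled layer by layer with the reparameterization trick, so each layer's sample is conditioned on the previous layer's sample and no independence between layers is imposed. The per-layer mean/variance functions below are toy stand-ins for the sparse-GP variational posteriors used in the paper, and the minibatch source of stochasticity is not shown.

```python
# Conceptual sketch of reparameterized layer-by-layer sampling in a deep GP posterior.
import numpy as np

rng = np.random.default_rng(0)

def layer_posterior(F_in, weights):
    """Toy q(f_l | f_{l-1}): returns a mean and a variance at the propagated inputs."""
    mean = np.tanh(F_in @ weights)                 # stand-in for the sparse-GP predictive mean
    var = 0.05 + 0.1 * np.abs(F_in @ weights)      # stand-in for the predictive variance
    return mean, var

def sample_dgp(X, layer_weights, n_samples=20):
    """Draw samples from the DGP output by sampling through the layers."""
    samples = np.repeat(X[None, :, :], n_samples, axis=0)     # shape (S, N, D)
    for W in layer_weights:
        mean, var = layer_posterior(samples, W)
        eps = rng.normal(size=mean.shape)
        samples = mean + np.sqrt(var) * eps        # reparameterization: layers stay correlated via samples
    return samples

X = np.linspace(-3, 3, 50)[:, None]
layer_weights = [rng.normal(size=(1, 3)), rng.normal(size=(3, 3)), rng.normal(size=(3, 1))]
out = sample_dgp(X, layer_weights)
print("predictive mean shape:", out.mean(axis=0).shape,
      " predictive std at x=0: %.3f" % out.std(axis=0)[25, 0])
```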
Comparison of Different Classification Techniques on PIMA Indian Diabetes Data
The development of data-mining applications such as classification and clustering has been applied to large-scale data. In this research, we present a comparative study of different classification techniques using three data mining tools named WEKA, TANAGRA and MATLAB. The aim of this paper is to analyze the performance of different classification techniques for a set of large data. The algorithms or classifiers tested are Multilayer Perceptron, Bayes Network, J48graft (C4.5), Fuzzy Lattice Reasoning (FLR), NaiveBayes, JRip (RIPPER), Fuzzy Inference System (FIS), and Adaptive Neuro-Fuzzy Inference Systems (ANFIS). A fundamental review of the selected techniques is presented for introductory purposes. The diabetes data, with a total of 768 instances and 9 attributes (8 for input and 1 for output), are used to test and justify the differences between the classification methods or algorithms. Subsequently, the classification technique that has the potential to significantly improve the common or conventional methods will be suggested for use in large-scale data, bioinformatics or other general applications.
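A comparable experiment can be run with scikit-learn in place of WEKA/TANAGRA/MATLAB, as sketched below. The CSV path and column name are placeholders, and the fuzzy and neuro-fuzzy methods from the study have no direct scikit-learn equivalent, so only a subset of the classifiers is compared.

```python
# Cross-validated comparison of several classifiers on the Pima Indians Diabetes data
# (768 instances, 8 input attributes, 1 class attribute), in the spirit of the study.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("pima-indians-diabetes.csv")      # placeholder path: 8 features + 'class' column
X, y = df.drop(columns=["class"]), df["class"]

models = {
    "Multilayer Perceptron": make_pipeline(StandardScaler(),
                                           MLPClassifier(hidden_layer_sizes=(16,),
                                                         max_iter=2000, random_state=0)),
    "Naive Bayes": GaussianNB(),
    "Decision tree (C4.5-like)": DecisionTreeClassifier(random_state=0),
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name:28s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```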
Design of low-phase-noise CMOS ring oscillators
This paper presents a framework for modeling the phase noise in complementary metal–oxide–semiconductor (CMOS) ring oscillators. The analysis considers both linear and nonlinear operations, and it includes both device noise and digital switching noise coupled through the power supply and substrate. In this paper, we show that fast rail-to-rail switching is required in order to achieve low phase noise. Further, flicker noise from the bias circuit can potentially dominate the phase noise at low offset frequencies. We define the effective factor for ring oscillators with large and nonlinear voltage swings and predict its increase for CMOS processes with smaller feature sizes. Our phase-noise analysis is validated via simulation and measurement results for ring oscillators fabricated in a number of CMOS processes.
Competence and warmth in context: The compensatory nature of stereotypic views of national groups
In two experiments we show that the context in which groups are perceived influences how they are judged in a compensatory manner on the fundamental dimensions of social judgment, that is, warmth and competence. We manipulate the type of country (high in competence and low in warmth vs. high in warmth and low in competence) to which a target country is compared. Our data show that the target country is perceived as warmer and less competent when the comparison country is stereotypically high (vs. low) in competence and low (vs. high) in warmth. We also found compensation correlationally across targets and across dimensions in that the higher the comparison country is rated on one of the two dimensions, the higher the target country is rated on the other. Compensation effects are shown to affect judgments of both the ingroup (Experiment 1) and an outgroup (Experiment 2). Our results shed new light on context effects in group judgments as well as on the compensatory relation of the two fundamental dimensions of social judgment. Copyright © 2008 John Wiley & Sons, Ltd. Research has identified two fundamental dimensions of social perception. Although different names have been used, there is wide agreement on the common core of those dimensions (Abele & Wojciszke, 2007). Here, we use the labels warmth and competence (Fiske, Cuddy, Glick, & Xu, 2002). Recent experimental work by Judd, James-Hawkins, Yzerbyt, & Kashima (2005) provides clear evidence that social perception is characterized by a compensatory relation between competence and warmth: when a social target is seen as higher than another on one dimension, this target will likely be perceived as lower than the other on the second dimension. Our question is whether the use of a different comparison group in the context alters the evaluation of a target group in a way that demonstrates compensation. Compensation in Social Judgments: In a study that examined the dimensions of competence and warmth in a full ingroup–outgroup design, Yzerbyt, Provost, and Corneille (2005) asked French and Belgian respondents to indicate how they perceived their own and the other group in terms of competence and warmth. The data strongly supported the compensation effect in that both groups of respondents described the French as more competent than the Belgians but also the Belgians as warmer than the French. Using a more controlled setting, Cuddy, Fiske, and Glick (2004) had their participants examine several individual profiles in the context of a personnel evaluation procedure. Female professionals with children were viewed as warmer but also as less competent than female professionals without children (Cuddy, Norton, & Fiske, 2005). Given the dearth of experimental work on these two dimensions in the context of intergroup relations, Judd et al. (2005) conducted a series of experiments investigating the relation between competence and warmth and the factors that may influence this relationship. Participants received lists of behaviors allegedly describing two different groups.
High competence behaviors were attributed to the members of one group and low competence behaviors to the members of the other group. Each participant judged both groups on various scales that were related to warmth and competence. Unsurprisingly, the high-competence group was judged to be more competent than the low-competence one. Of more interest, the high-competence group was also judged as less warm than the low-competence group. Similar effects were observed on competence when warmth was manipulated (Experiment 2). These results emerged despite the fact that when the behaviors were pre-tested (Judd et al., 2005) to verify that, for instance, competence-relevant behaviors conveyed competence information but not warmth, a small positive correlation materialized between their pretest ratings on competence and warmth. Thus, even though more competent behaviors were judged as slightly warmer than less competent ones (replicating the halo effect, Rosenberg, Nelson, & Vivekananthan, 1968), when two groups were described, one with high and the other with low competent behaviors, they were judged as differing in warmth in the opposite direction. Importantly, Judd et al. (2005, Experiment 4) showed that a comparative context is a necessary condition for compensation. When participants were presented with either the high or the low competence group, halo rather than compensation was observed. Specifically, the high competence group was rated as warmer than the low competence group. Finally, Judd et al. (2005, Experiment 5) found compensation even when participants were led to believe that they belonged to one of the groups. Assessing Compensation: Compensation effects (Cuddy et al., 2004; Judd et al., 2005; Yzerbyt et al., 2005) have mainly been examined at the mean level. Evidence for compensation rests on the finding that participants see one group to be higher than the other on one dimension while rating this same group lower than the other group on the second dimension. However, compensation may not only materialize under the form of a negative relationship between warmth and competence at the mean level but also in terms of a negative relationship between the judgments at the respondent level. Specifically, the participants who differentiate the two groups more on one dimension should also differentiate them more on the other dimension but in the opposite direction. Such a correlation was examined by Judd et al. (2005) and, although the results were at times marginal, the predicted negative correlation emerged in all four studies in which two groups were compared. Rather than looking at the correlation that involves within-dimension differences across groups, an alternative way to examine the compensation effect at the correlational level is to compute the correlation across dimensions and across groups. Compensation would be found if the higher one group is rated on one of the two dimensions, the higher the other group is rated on the other dimension. The compensation pattern between warmth and competence would therefore translate into a positive correlation across groups and across dimensions. Yzerbyt et al. (2005) computed and found such a positive correlation. In a recent study, Demoulin, Geeraert, and Yzerbyt (2007) examined how exchange students’ perceptions of their home- and host-country evolved over time. These authors recommended computing a series of regression models in which one group’s standing on one dimension (e.g.
warmth) is predicted by the same group’s standing on the other dimension (e.g. competence) and by the other group’s position on each of the two dimensions. Results again confirmed the presence of compensation in that the evaluation of one group on one dimension was positively predicted by the evaluation of the other group on the other dimension. In the present work, we examine compensation both by looking at mean levels of judgments as well as at correlations across dimensions and across groups. The Impact of Contexts on Stereotype Content: Research suggests that, within a given population, there is wide agreement on how social groups in general and nations in particular are perceived (Schneider, 2004). This consensus holds for the evaluations on the two fundamental dimensions.
COGNITIVE APPRENTICESHIP IN EDUCATIONAL PRACTICE : RESEARCH ON SCAFFOLDING , MODELING , MENTORING , AND COACHING AS INSTRUCTIONAL STRATEGIES
Apprenticeship is an inherently social learning method with a long history of helping novices become experts in fields as diverse as midwifery, construction, and law. At the center of apprenticeship is the concept of more experienced people assisting less experienced ones, providing structure and examples to support the attainment of goals. Traditionally, apprenticeship has been associated with learning in the context of becoming skilled in a trade or craft—a task that typically requires both the acquisition of knowledge, concepts, and perhaps psychomotor skills and the development of the ability to apply the knowledge and skills in a context-appropriate manner—and far predates formal schooling as it is known today. In many nonindustrialized nations apprenticeship remains the predominant method of teaching and learning. However, the overall concept of learning from experts through social interactions is not one that should be relegated to vocational and trade-based training while K–12 and higher educational institutions seek to prepare students for operating in an information-based society. Apprenticeship as a method of teaching and learning is just as relevant within the cognitive and metacognitive domain as it is in the psychomotor domain. In the last 20 years, the recognition and popularity of facilitating learning of all types through social methods have grown tremendously. Educators and educational researchers have looked to informal learning settings, where such methods have been in continuous use, as a basis for creating more formal instructional methods and activities that take advantage of these social constructivist methods. Cognitive apprenticeship—essentially, the use of an apprenticeship model to support learning in the cognitive domain—is one such method that has gained respect and popularity throughout the 1990s and into the 2000s. Scaffolding, modeling, mentoring, and coaching are all methods of teaching and learning that draw on social constructivist learning theory. As such, they promote learning that occurs through social interactions involving negotiation of content, understanding, and learner needs, and all generally are considered forms of cognitive apprenticeship (although certainly they are not the only methods). This chapter first explores prevailing definitions and underlying theories of these teaching and learning strategies and then reviews the state of research in these areas.
Time-Frequency Distributions Based on Compact Support Kernels: Properties and Performance Evaluation
This paper presents two new time-frequency distributions (TFDs) based on kernels with compact support (KCS), namely the separable CB (SCB) and the polynomial CB (PCB) TFDs. The implementation of this family of TFDs follows the method developed for the Cheriet-Belouchrani (CB) TFD. The mathematical properties of these three TFDs are analyzed and their performance is compared to the best classical quadratic TFDs using several tests on multicomponent signals with linear and nonlinear frequency modulation (FM) components, including noise effects. Instead of relying solely on visual inspection of the time-frequency domain plots, the comparisons include time-slice plots and the evaluation of the Boashash-Sucic normalized instantaneous resolution performance measure, which makes it possible to select the optimized TFD using a specific methodology. In all presented examples, the KCS-TFDs show significant interference rejection, with the component energy concentrated around the respective instantaneous frequency laws, yielding high resolution-measure values.
The Cost of Digital Advertisement: Comparing User and Advertiser Views
Digital advertisements are delivered in the form of static images, animations or videos, with the goal of promoting a product, a service or an idea to desktop or mobile users. Thus, the advertiser pays a monetary cost to buy ad-space in a content provider’s medium (e.g., website) to place their advertisement in the consumer’s display. However, is it only the advertiser who pays for the ad delivery? Unlike traditional advertisements in mediums such as newspapers, TV or radio, in the digital world the end-users are also paying a cost for the advertisement delivery. Whilst the cost on the advertiser’s side is clearly monetary, on the end-user’s side it includes both quantifiable costs, such as network requests and transferred bytes, and qualitative costs, such as privacy loss to the ad ecosystem. In this study, we aim to increase user awareness regarding the hidden costs of digital advertisement on mobile devices, and compare the user and advertiser views. Specifically, we built OpenDAMP, a transparency tool that passively analyzes users’ web traffic and estimates the costs on both sides. We use a year-long dataset of 1270 real mobile users and, by juxtaposing the costs of both sides, we identify a clear imbalance: the advertisers pay several times less to deliver ads than the cost paid by the users to download them. In addition, the majority of users experience a significant privacy loss, through the personalized ad delivery mechanics.
ObjectNet3D: A Large Scale Database for 3D Object Recognition
We contribute a large scale database for 3D object recognition, named ObjectNet3D, that consists of 100 categories, 90,127 images, 201,888 objects in these images and 44,147 3D shapes. Objects in the 2D images in our database are aligned with the 3D shapes, and the alignment provides both accurate 3D pose annotation and the closest 3D shape annotation for each 2D object. Consequently, our database is useful for recognizing the 3D pose and 3D shape of objects from 2D images. We also provide baseline experiments on four tasks: region proposal generation, 2D object detection, joint 2D detection and 3D object pose estimation, and image-based 3D shape retrieval, which can serve as baselines for future research using our database. Our database is available online at http://cvgl.stanford.edu/projects/objectnet3d.
Penetration Testing on Virtual Environments
Since the beginning, computer systems have faced the challenge of protecting the information with which they work, and with technological development, computational security techniques have become more complex in order to face potential attacks. Currently we are facing a war game with the usual two sides, attackers and defenders. The attackers want to have complete control over the systems. In turn, defenders virtualize systems to keep resources safe in case of attack. The attackers have also developed increasingly sophisticated techniques to break such protections, making it necessary to anticipate such events, which may be achieved through the application of preventive measures. This may be done through Penetration Testing (PT). PT is an attack on a computer system, using a set of specialized tools that looks for security weaknesses and may eventually gain access to the computer's features and data, allowing the discovery of such vulnerabilities. Virtual Environments have a higher exposure to cyber-attacks. The aim of this paper is to propose a framework providing guidelines for Penetration Testing in Virtual Environments.
Remarks on photons and the aether
We expand upon some topics reviewed and sketched in a book to appear with more details, embellishments, and some new material of a speculative nature.
Compressive sampling of ECG bio-signals: Quantization noise and sparsity considerations
Compressed sensing (CS) is an emerging signal processing paradigm that enables the sub-Nyquist processing of sparse signals; i.e., signals with significant redundancy. Electrocardiogram (ECG) signals show significant time-domain sparsity that can be exploited using CS techniques to reduce energy consumption in an adaptive data acquisition scheme. A measurement matrix of random values is central to CS computation. Signal-to-quantization noise ratio (SQNR) results with ECG signals show that 5- and 6-bit Gaussian random coefficients are sufficient for compression factors up to 6X and from 8X-16X, respectively, whereas 6-bit uniform random coefficients are needed for 2X-16X compression ratios.
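The measurement side of this scheme can be sketched as follows: a Gaussian random measurement matrix quantized to B bits is applied to a time-sparse signal, and the SQNR of the measurements is computed relative to full-precision coefficients. The synthetic "ECG-like" signal and the uniform quantizer are assumptions for illustration; reconstruction (e.g., an l1 solver) is a separate step and is not shown.

```python
# Compressive measurement with quantized Gaussian random coefficients, and the resulting SQNR.
import numpy as np

rng = np.random.default_rng(0)
N, compression = 512, 8                       # signal length and compression factor
M = N // compression                          # number of measurements

# Synthetic ECG-like signal: mostly flat with periodic sharp spikes (time-domain sparsity).
x = np.zeros(N)
x[np.arange(40, N, 90)] = 1.0
x += 0.3 * np.exp(-0.5 * ((np.arange(N) % 90 - 45) / 4.0) ** 2)

phi = rng.normal(0, 1.0 / np.sqrt(M), (M, N))               # full-precision Gaussian matrix

def quantize(a, bits):
    """Uniform quantizer over the matrix's own range, clipped to the top level."""
    levels = 2 ** bits
    step = (a.max() - a.min()) / levels
    idx = np.minimum(np.floor((a - a.min()) / step), levels - 1)
    return a.min() + (idx + 0.5) * step

for bits in (4, 5, 6, 8):
    y_ref = phi @ x
    y_q = quantize(phi, bits) @ x
    sqnr = 10 * np.log10(np.sum(y_ref ** 2) / np.sum((y_ref - y_q) ** 2))
    print(f"{bits}-bit coefficients, {compression}x compression: SQNR = {sqnr:5.1f} dB")
```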
Regional Grey Matter Structure Differences between Transsexuals and Healthy Controls—A Voxel Based Morphometry Study
Gender identity disorder (GID) refers to transsexual individuals who feel that their assigned biological gender is incongruent with their gender identity and this cannot be explained by any physical intersex condition. There has been growing scientific interest in the last decades in studying the neuroanatomy and brain functions of transsexual individuals to better understand both the neuroanatomical features of transsexualism and the background of gender identity. So far, results are inconclusive but, in general, transsexualism has been associated with a distinct neuroanatomical pattern. Studies have mainly focused on male-to-female (MTF) transsexuals and there is a scarcity of data acquired on female-to-male (FTM) transsexuals. Thus, our aim was to analyze structural MRI data with voxel based morphometry (VBM) obtained from both FTM and MTF transsexuals (n = 17) and compare them to the data of 18 age-matched healthy control subjects (both males and females). We found differences in the regional grey matter (GM) structure of transsexual compared with control subjects, independent of their biological gender, in the cerebellum, the left angular gyrus and in the left inferior parietal lobule. Additionally, our findings showed that in several brain areas, regarding their GM volume, transsexual subjects did not differ significantly from controls sharing their gender identity but were different from those sharing their biological gender (areas in the left and right precentral gyri, the left postcentral gyrus, the left posterior cingulate, precuneus and calcarinus, the right cuneus, the right fusiform, lingual, middle and inferior occipital, and inferior temporal gyri). These results support the notion that structural brain differences exist between transsexual and healthy control subjects and that the majority of these structural differences are dependent on the biological gender.
HIV Infection and Cardiovascular Disease in Women
BACKGROUND HIV infection is associated with increased risk of cardiovascular disease (CVD) in men. Whether HIV is an independent risk factor for CVD in women has not yet been established. METHODS AND RESULTS We analyzed data from the Veterans Aging Cohort Study on 2187 women (32% HIV infected [HIV(+)]) who were free of CVD at baseline. Participants were followed from their first clinical encounter on or after April 01, 2003 until a CVD event, death, or the last follow-up date (December 31, 2009). The primary outcome was CVD (acute myocardial infarction [AMI], unstable angina, ischemic stroke, and heart failure). CVD events were defined using clinical data, International Classification of Diseases, Ninth Revision, Clinical Modification codes, and/or death certificate data. We used Cox proportional hazards models to assess the association between HIV and incident CVD, adjusting for age, race/ethnicity, lipids, smoking, blood pressure, diabetes, renal disease, obesity, hepatitis C, and substance use/abuse. Median follow-up time was 6.0 years. Mean age at baseline of HIV(+) and HIV uninfected (HIV(-)) women was 44.0 versus 43.2 years (P<0.05). Median time to CVD event was 3.1 versus 3.7 years (P=0.11). There were 86 incident CVD events (53%, HIV(+)): AMI, 13%; unstable angina, 8%; ischemic stroke, 22%; and heart failure, 57%. Incident CVD/1000 person-years was significantly higher among HIV(+) (13.5; 95% confidence interval [CI]=10.1, 18.1) than HIV(-) women (5.3; 95% CI=3.9, 7.3; P<0.001). HIV(+) women had an increased risk of CVD, compared to HIV(-) (hazard ratio=2.8; 95% CI=1.7, 4.6; P<0.001). CONCLUSIONS HIV is associated with an increased risk of CVD in women.
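The analysis pattern described (a Cox proportional hazards model of time to first CVD event, adjusted for covariates) can be reproduced with the lifelines package, as in the sketch below. The data frame, file path, and column names are placeholders, not the Veterans Aging Cohort Study data.

```python
# Sketch of a covariate-adjusted Cox proportional hazards analysis of time to first CVD event.
import pandas as pd
from lifelines import CoxPHFitter

# Expected format: one row per participant, follow-up time in years, event indicator,
# HIV status and adjustment covariates (age, smoking, diabetes, ...). Names are hypothetical.
df = pd.read_csv("cohort.csv")   # placeholder path
cph = CoxPHFitter()
cph.fit(df[["followup_years", "cvd_event", "hiv_positive", "age", "smoking", "diabetes"]],
        duration_col="followup_years", event_col="cvd_event")
cph.print_summary()              # hazard ratios (exp(coef)) with 95% confidence intervals
```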
Distance Metric Learning : A Comprehensive Survey
Many machine learning algorithms, such as K Nearest Neighbor (KNN), heavily rely on the distance metric for the input data patterns. Distance metric learning is to learn a distance metric for the input space of data from a given collection of pairs of similar/dissimilar points that preserves the distance relation among the training data. In recent years, many studies have demonstrated, both empirically and theoretically, that a learned metric can significantly improve the performance in classification, clustering and retrieval tasks. This paper surveys the field of distance metric learning from a principled perspective, and includes a broad selection of recent work. In particular, distance metric learning is reviewed under different learning conditions: supervised learning versus unsupervised learning, learning in a global sense versus in a local sense, and the distance matrix based on a linear kernel versus a nonlinear kernel. In addition, this paper discusses a number of techniques that are central to distance metric learning, including convex programming, positive semi-definite programming, kernel learning, dimension reduction, K Nearest Neighbor, large margin classification, and graph-based approaches.
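As a minimal, hedged example of the supervised, global, linear setting the survey covers, the sketch below learns a Mahalanobis metric from similar/dissimilar pairs by gradient descent on a simple contrastive objective, parameterizing the metric as M = LᵀL so it stays positive semi-definite. It is an illustration of the general idea, not any specific algorithm from the survey; the data and margin are toy assumptions.

```python
# Minimal Mahalanobis metric learning from similar/dissimilar pairs: pull similar pairs
# together, push dissimilar pairs beyond a margin, with M = L^T L kept PSD by construction.
import numpy as np

rng = np.random.default_rng(0)
d, margin, lr = 4, 2.0, 0.01

# Toy data: class depends only on the first two coordinates; the rest is noise.
X = rng.normal(size=(200, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
sim = [(i, j) for i in range(60) for j in range(i + 1, 60) if y[i] == y[j]]
dis = [(i, j) for i in range(60) for j in range(i + 1, 60) if y[i] != y[j]]

L = np.eye(d)
def sqdist(L, a, b):
    z = (a - b) @ L.T
    return z @ z

for _ in range(200):
    grad = np.zeros((d, d))
    for i, j in sim:
        diff = X[i] - X[j]
        grad += 2 * np.outer(L @ diff, diff)                  # d/dL of ||L diff||^2
    for i, j in dis:
        diff = X[i] - X[j]
        if sqdist(L, X[i], X[j]) < margin:                    # hinge: only close dissimilar pairs push
            grad -= 2 * np.outer(L @ diff, diff)
    L -= lr * grad / (len(sim) + len(dis))

M = L.T @ L
print("learned diagonal of M (first two coordinates are the informative ones):",
      np.round(np.diag(M), 2))
```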
Energy-aware wireless microsensor networks
Self-configuring wireless sensor networks can be invaluable in many civil and military applications for collecting, processing, and disseminating wide ranges of complex environmental data. Because of this, they have attracted considerable research attention in the last few years. The WINS [1] and SmartDust [2] projects, for instance, aim to integrate sensing, computing, and wireless communication capabilities into a small form factor to enable low-cost production of these tiny nodes in large numbers. Several other groups are investigating efficient hardware/software system architectures, signal processing algorithms, and network protocols for wireless sensor networks [3]-[5]. Sensor nodes are battery driven and hence operate on an extremely frugal energy budget. Further, they must have a lifetime on the order of months to years, since battery replacement is not an option for networks with thousands of physically embedded nodes. In some cases, these networks may be required to operate solely on energy scavenged from the environment through seismic, photovoltaic, or thermal conversion. This transforms energy consumption into the most important factor that determines sensor node lifetime. Conventional low-power design techniques [6] and hardware architectures only provide point solutions which are insufficient for these highly energy-constrained systems. Energy optimization, in the case of sensor networks, is much more complex, since it involves not only reducing the energy consumption of a single sensor node but also maximizing the lifetime of an entire network. The network lifetime can be maximized
Tagging Sentence Boundaries
In this paper we tackle sentence boundary disambiguation through a part-of-speech (POS) tagging framework. We describe necessary changes in text tokenization and the implementation of a POS tagger and provide results of an evaluation of this system on two corpora. We also describe an extension of the traditional POS tagging by combining it with the document-centered approach to proper name identification and abbreviation handling. This made the resulting system robust to domain and topic shifts. 1 Introduction Sentence boundary disambiguation (SBD) is an important aspect in developing virtually any practical text processing application: syntactic parsing, Information Extraction, Machine Translation, Text Alignment, Document Summarization, etc. Segmenting text into sentences in most cases is a simple matter: a period, an exclamation mark or a question mark usually signal a sentence boundary. However, there are cases when a period denotes a decimal point or is a part of an abbreviation and thus it does not signal a sentence break. Furthermore, an abbreviation itself can be the last token in a sentence, in which case its period acts at the same time as part of this abbreviation and as the end-of-sentence indicator (fullstop). The first large class of sentence boundary disambiguators uses manually built rules which are usually encoded in terms of regular expression grammars supplemented with lists of abbreviations, common words, proper names, etc. For instance, the Alembic workbench (Aberdeen et al., 1995) contains a sentence splitting module which employs over 100 regular-expression rules written in Flex. To put together a few rules which do the job is fast and easy, but to develop a good rule-based system is quite a labour-consuming enterprise. Another potential shortcoming is that such systems are usually closely tailored to a particular corpus and are not easily portable across domains. Automatically trainable software is generally seen as a way of producing systems quickly re-trainable for a new corpus, domain or even for another language. Thus, the second class of SBD systems employs machine learning techniques such as decision tree classifiers (Riley, 1989), maximum entropy modeling (MAXTERMINATOR) (Reynar and Ratnaparkhi, 1997), neural networks (SATZ) (Palmer and Hearst, 1997), etc. Machine learning systems treat the SBD task as a classification problem, using features such as word spelling, capitalization, suffix, word class, etc., found in the local context of potential sentence-breaking punctuation. There is, however, one catch: all machine learning approaches to the SBD task known to us require labeled examples for training. This implies an investment in the annotation phase. There are two corpora normally used for evaluation and development in a number of text processing tasks and in the SBD task in particular: the Brown Corpus and the Wall Street Journal (WSJ) corpus, both part of the Penn Treebank (Marcus, Marcinkiewicz, and Santorini, 1993). Words in both these corpora are annotated with part-of-speech (POS) information and the text is split into documents, paragraphs and sentences. This gives all necessary information for the development of an SBD system and its evaluation. State-of-the-art machine-learning and rule-based SBD systems achieve an error rate of about 0.8-1.5% measured on the Brown Corpus and the WSJ. The best performance on the WSJ was achieved by a combination of the SATZ system with the Alembic system: a 0.5% error rate.
The best performance on the Brown Corpus, 0.2% error rate, was reported by (Riley, 1989), who trained a decision tree classifier on a 25 million word corpus. 1.1 Word-based vs. Syntactic Methods The first source of ambiguity in end-of-sentence marking is introduced by abbreviations: if we know that the word which precedes a period is not an abbreviation, then almost certainly this period denotes a sentence break. However, if this word is an abbreviation, then it is not that easy to make a clear decision. The second major source of information
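As a rough illustration of the machine-learning formulation described above, the sketch below treats every '.', '!' or '?' as a candidate boundary and extracts a few local-context features for a classifier; the toy abbreviation list, the feature set and the use of scikit-learn's LogisticRegression are my own illustrative choices, not the features or models of the cited systems.

    import re
    from sklearn.linear_model import LogisticRegression

    ABBREV = {"dr", "mr", "mrs", "prof", "etc", "e.g", "i.e"}   # toy abbreviation list

    def candidate_features(text):
        """Return a simple feature vector for every potential sentence-breaking mark."""
        feats, positions = [], []
        for m in re.finditer(r"[.!?]", text):
            i = m.start()
            before = text[:i].split()
            after = text[i + 1:].split()
            prev = before[-1].lower().rstrip(".") if before else ""
            nxt = after[0] if after else ""
            feats.append([
                1.0 if prev in ABBREV else 0.0,      # word before is a known abbreviation
                1.0 if nxt[:1].isupper() else 0.0,   # next token starts with a capital
                1.0 if prev.isdigit() else 0.0,      # period may be a decimal point
                1.0 if text[i] in "!?" else 0.0,     # '!' and '?' almost always end sentences
            ])
            positions.append(i)
        return feats, positions

    # With boundary-labelled text (e.g. Brown or WSJ), build X, y over all candidates and
    # train clf = LogisticRegression().fit(X, y); clf.predict then marks the sentence breaks.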
Synthesis of Modulated-Metasurface Antennas With Amplitude, Phase, and Polarization Control
An effective synthesis procedure for planar antennas realized with nonuniform metasurfaces (MTSs) excited by a point source is presented. This synthesis extends previous formulations by introducing control of the amplitude of the aperture field while improving the polarization and phase performance. The class of MTS antennas we are dealing with is realized by using subwavelength patches of different dimensions printed on a grounded slab, illuminated by a transverse magnetic point source. These antennas are based on the interaction between a cylindrical surface wave and the periodic modulation of the MTS, which leads to radiation through a leaky-wave (LW) effect. This new design method permits a systematic and simple synthesis of the amplitude, phase, and polarization of the aperture field by designing the boundary conditions imposed by the MTS. The polarization control is based on the local value of the MTS anisotropy, the phase is controlled by the shape and periodicity of the modulation, and the amplitude is controlled by the local leakage attenuation parameter of the LW. The synthesis is based on analytical formulas derived by an adiabatic Floquet-wave expansion of currents and fields over the surface, which is published simultaneously in this journal issue. The effectiveness of the procedure is tested through several numerical examples involving realistic structures.
A zero-sum power allocation game in the parallel Gaussian wiretap channel with an unfriendly jammer
This paper investigates optimal power allocation strategies over a bank of independent parallel Gaussian wiretap channels where a legitimate transmitter and a legitimate receiver communicate in the presence of an eavesdropper and an unfriendly jammer. In particular, we formulate a zero-sum power allocation game between the transmitter and the jammer where the payoff function is the secrecy rate. We characterize the optimal power allocation strategies as well as the Nash equilibrium in some asymptotic regimes. We also provide a set of results that cast further insight into the problem. Our scenario, which is applicable to current OFDM communications systems, demonstrates that transmitters that adapt to the jammer experience much higher secrecy rates than non-adaptive transmitters.
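The payoff of the zero-sum game described above can be written down directly; the sketch below computes the sum secrecy rate of a bank of parallel Gaussian wiretap sub-channels for given transmit and jamming power allocations (the per-channel gains, noise level and the uniform allocations in the example are illustrative assumptions, not the paper's equilibrium strategies).

    import numpy as np

    def secrecy_rate(p, q, h_main, h_eve, g_main, g_eve, n0=1.0):
        """Sum secrecy rate over parallel Gaussian wiretap sub-channels.

        p, q          : transmit / jamming power per sub-channel
        h_main, h_eve : gains from the transmitter to the receiver / eavesdropper
        g_main, g_eve : gains from the jammer to the receiver / eavesdropper
        """
        sinr_rx = p * h_main / (n0 + q * g_main)          # legitimate receiver SINR
        sinr_ev = p * h_eve / (n0 + q * g_eve)            # eavesdropper SINR
        per_channel = np.log2(1 + sinr_rx) - np.log2(1 + sinr_ev)
        return np.sum(np.maximum(per_channel, 0.0))       # negative sub-channels contribute zero

    # Example: uniform allocations over 4 sub-channels with total powers P = Q = 4.
    rng = np.random.default_rng(0)
    h_m, h_e, g_m, g_e = (rng.exponential(1.0, 4) for _ in range(4))
    print(secrecy_rate(np.ones(4), np.ones(4), h_m, h_e, g_m, g_e))

The transmitter chooses p to maximize this quantity while the jammer chooses q to minimize it, which is the zero-sum structure analyzed in the paper.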
Research on Cooperative Learning and Achievement: What We Know, What We Need to Know
Research on cooperative learning is one of the greatest success stories in the history of educational research. While there was some research on this topic from the early days of this century, the amount and quality of that research greatly accelerated in the early 1970's, and continues unabated today, a quarter-century later. Hundreds of studies have compared cooperative learning to various control methods on a broad range of measures, but by far the most frequent objective of this research is to determine the effects of cooperative learning on student achievement. Studies of the achievement effects of cooperative learning have taken place in every major subject, at all grade levels, in all types of schools in many countries. Both field studies and laboratory studies have produced a great deal of knowledge about the effects of many types of cooperative interventions and about the mechanisms responsible for these effects. Further, cooperative learning is not only a subject of research and theory; it is used at some level by millions of teachers. A recent national survey (Puma, Jones, Rock, & Fernandez, 1993) found that 79% of elementary teachers and 62% of middle school teachers reported making some sustained use of cooperative learning.
Applying Morphology Generation Models to Machine Translation
We improve the quality of statistical machine translation (SMT) by applying models that predict word forms from their stems using extensive morphological and syntactic information from both the source and target languages. Our inflection generation models are trained independently of the SMT system. We investigate different ways of combining the inflection prediction component with the SMT system by training the base MT system on fully inflected forms or on word stems. We applied our inflection generation models in translating English into two morphologically complex languages, Russian and Arabic, and show that our model improves the quality of SMT over both phrasal and syntax-based SMT systems according to BLEU and human judgements.
Comparison between maximal lengthening and shortening contractions for biceps brachii muscle oxygenation and hemodynamics.
Eccentric contractions (ECC) require lower systemic oxygen (O2) and induce greater symptoms of muscle damage than concentric contractions (CON); however, it is not known if local muscle oxygenation is lower in ECC than CON during and following exercise. This study compared between ECC and CON for changes in biceps brachii muscle oxygenation [tissue oxygenation index (TOI)] and hemodynamics [total hemoglobin volume (tHb)=oxygenated-Hb+deoxygenated-Hb], determined by near-infrared spectroscopy over 10 sets of 6 maximal contractions of the elbow flexors of 10 healthy subjects. This study also compared between ECC and CON for changes in TOI and tHb during a 10-s sustained and 30-repeated maximal isometric contraction (MVC) task measured immediately before and after and 1-3 days following exercise. The torque integral during ECC was greater (P<0.05) than that during CON by approximately 30%, and the decrease in TOI was smaller (P<0.05) by approximately 50% during ECC than CON. Increases in tHb during the relaxation phases were smaller (P<0.05) by approximately 100% for ECC than CON; however, the decreases in tHb during the contraction phases were not significantly different between sessions. These results suggest that ECC utilizes a lower muscle O2 relative to O2 supply compared with CON. Following exercise, greater (P<0.05) decreases in MVC strength and increases in plasma creatine kinase activity and muscle soreness were evident 1-3 days after ECC than CON. Torque integral, TOI, and tHb during the sustained and repeated MVC tasks decreased (P<0.01) only after ECC, suggesting that muscle O2 demand relative to O2 supply during the isometric tasks was decreased after ECC. This could mainly be due to a lower maximal muscle mass activated as a consequence of muscle damage; however, an increase in O2 supply due to microcirculation dysfunction and/or inflammatory vasodilatory responses after ECC is recognized.
Deep Spectral-Based Shape Features for Alzheimer's Disease Classification
Alzheimer’s disease (AD) and mild cognitive impairment (MCI) are the most prevalent neurodegenerative brain diseases in the elderly population. Recent studies on medical imaging and biological data have shown morphological alterations of subcortical structures in patients with these pathologies. In this work, we take advantage of these structural deformations for classification purposes. First, triangulated surface meshes are extracted from segmented hippocampus structures in MRI and point-to-point correspondences are established among the population of surfaces using a spectral matching method. Then, a deep learning variational auto-encoder is applied on the vertex coordinates of the mesh models to learn a low-dimensional feature representation. A multi-layer perceptron with softmax activation is trained simultaneously to classify Alzheimer’s patients from normal subjects. Experiments on the ADNI dataset demonstrate the potential of the proposed method in classifying normal individuals from early MCI (EMCI), late MCI (LMCI), and AD subjects, with classification rates outperforming a standard SVM-based approach.
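A minimal PyTorch sketch of the pipeline described above is given below: a variational auto-encoder over flattened mesh vertex coordinates trained jointly with a softmax classifier on the latent code. The layer sizes, latent dimension and loss weights are illustrative assumptions, not the authors' exact architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VAEClassifier(nn.Module):
        def __init__(self, n_vertices=1024, latent=32, n_classes=4):
            super().__init__()
            d = n_vertices * 3                          # flattened (x, y, z) coordinates
            self.enc = nn.Sequential(nn.Linear(d, 256), nn.ReLU())
            self.mu = nn.Linear(256, latent)
            self.logvar = nn.Linear(256, latent)
            self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, d))
            self.clf = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, n_classes))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
            return self.dec(z), mu, logvar, self.clf(mu)

    def joint_loss(x, recon, mu, logvar, logits, labels, beta=1e-3, gamma=1.0):
        rec = F.mse_loss(recon, x)                                        # reconstruction term
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())    # KL divergence term
        ce = F.cross_entropy(logits, labels)                              # classification term
        return rec + beta * kld + gamma * ce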
DotSlash: Handling Web Hotspots at Dynamic Content Web Sites
We propose DotSlash, a self-configuring and scalable rescue system, for handling web hotspots at dynamic content Web sites. To support load migration for dynamic content, an origin Web server sets up needed rescue servers drafted from other Web sites on the fly, and those rescue servers retrieve the scripts dynamically from the origin Web server, cache the scripts locally, and access the corresponding database server directly. We have implemented a prototype of DotSlash for the LAMP configuration, and tested our implementation using the RUBBoS bulletin board benchmark. Experiments show that by using DotSlash a dynamic content web site can completely remove its web server bottleneck, and can support a request rate constrained only by the capacity of its database server.
A 64-Element 28-GHz Phased-Array Transceiver With 52-dBm EIRP and 8–12-Gb/s 5G Link at 300 Meters Without Any Calibration
This paper presents a 64-element 28-GHz phased-array transceiver for 5G communications based on 2×2 transmit/receive (TRX) beamformer chips. Sixteen of the 2×2 TRX chips are assembled on a 12-layer printed circuit board (PCB) together with a Wilkinson combiner/divider network and 28–32-GHz stacked-patch antennas. The 64-element array results in 1.1 dB and 8.9° rms amplitude and phase error, respectively, with no calibration due to the symmetric design of the 2×2 beamformer chips and the PCB Wilkinson network. The effect of phase and amplitude mismatch between the 64 elements is analyzed and shown to have little impact on the 64-element array performance due to the averaging effects of phased arrays. Detailed pattern, effective isotropic radiated power (EIRP), and link measurements performed without any array calibration are presented and show the robustness of the symmetrical design technique. The phased array can scan to ±50° in azimuth (H-plane) and ±25° in elevation (E-plane) with low sidelobes and achieves a saturated EIRP of 52 dBm with 4-GHz 3-dB bandwidth. A 300-m wireless link is demonstrated with a record-setting data rate of 8–12 Gb/s over all scan angles using two 64-element TRX arrays and 16-/64-QAM waveforms.
Understanding Cybercrime
Existing cybercrime research in the information systems (IS) field has focused on a subset of corporate incidents (e.g., fraud, hacking intrusions), and emphasized solutions designed to repel attacks or to minimize their aftermath (e.g., barrier technologies, enhanced security procedures). This focused, defensive, and pragmatic posture is valuable and necessary as an immediate triage response, to "stop the bleeding" and provide protection from imminent harm. However, the extant work has not painted a sufficiently broad picture of the scope of cybercriminal activity, nor paid adequate attention to its root causes. This paper presents a different view. It analyzes 113 U.S. Department of Justice federal cybercrime cases from 2008 and 2009, categorizes these cases using an applied criminal offense framework developed by the FBI, considers philosophical explanations for criminal motives, and then identifies the apparent motive(s) that led to the commission of each crime. This paper seeks to contribute to an improved understanding of what cybercrime is, and why it is occurring at the individual level, in order to develop more proactive and effective solutions.
A Verification Platform for SDN-Enabled Applications
Recent work on integration of SDNs with application-layer systems like Hadoop has created a class of systems, SDN-Enabled Applications, which implement application-specific functionality on the network layer by exposing network monitoring and control semantics to application developers. This requires domain-specific knowledge to correctly reason about network behavior and properties, as the SDN is now tightly coupled to the larger system. Existing tools for SDN verification and analysis are insufficiently expressive to capture this composition of network and domain models. Unfortunately, it is exactly this kind of automated reasoning and verification that is necessary to develop robust SDN-enabled applications for real-world systems. In this paper, we present ongoing work on Verificare, a verification platform being built to enable formal verification of SDNs as components of a larger domain-specific system. SLA, safety, and security requirements can be selected from a variety of formal libraries and automatically verified using a variety of off-the-shelf tools. This approach not only extends the flexibility of existing SDN verification systems, but can actually provide more fine-grained analysis of possible network states due to the extra information supplied by the domain model.
Super resolution: an overview
Super-resolution algorithms produce a single high-resolution image from a set of several low-resolution images of the desired scene. The low-resolution frames are shifted by different subpixel increments with respect to the high-resolution frame. This paper first presents a theoretical overview of super-resolution algorithms. The most important methods, namely iterative back-projection, projection onto convex sets, and maximum a posteriori estimation, are then compared within the same implementation framework.
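For the iterative back-projection method mentioned above, a deliberately simplified numpy sketch is given below; it assumes integer decimation and integer pixel shifts and omits the blur kernel, so it only illustrates the simulate-compare-back-project loop rather than a full implementation.

    import numpy as np

    def iterative_back_projection(lr_frames, shifts, scale=2, iters=20, step=0.5):
        """Minimal IBP loop: lr_frames are low-resolution images and shifts are their
        integer offsets (in high-resolution pixels) relative to the reference grid."""
        hr = np.kron(lr_frames[0], np.ones((scale, scale)))    # initial HR guess by replication
        for _ in range(iters):
            correction = np.zeros_like(hr)
            for lr, (dy, dx) in zip(lr_frames, shifts):
                shifted = np.roll(hr, (-dy, -dx), axis=(0, 1))     # apply the frame's shift
                simulated = shifted[::scale, ::scale]              # simulate the LR acquisition
                err = lr - simulated                               # observation error
                up = np.kron(err, np.ones((scale, scale)))         # upsample the error
                correction += np.roll(up, (dy, dx), axis=(0, 1))   # back-project to the HR grid
            hr += step * correction / len(lr_frames)
        return hr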
Predicting Ego-Vehicle Paths from Environmental Observations with a Deep Neural Network
Advanced driver assistance systems allow for increasing user comfort and safety by sensing the environment and anticipating upcoming hazards. Often, this requires accurately predicting how situations will change. Recent approaches make simplifying assumptions on the predictive model of the Ego-Vehicle motion or assume prior knowledge, such as road topologies, to be available. However, in many urban areas this assumption is not satisfied. Furthermore, temporary changes (e.g. construction areas, vehicles parked on the street) are not considered by such models. Since many cars observe the environment with several different sensors, predictive models can benefit from them by considering environmental properties. In this work, we present an approach for Ego-Vehicle path prediction from such sensor measurements of the static vehicle environment. Besides proposing a learned model for predicting the driver's multi-modal future path as a grid-based prediction, we derive an approach for extracting paths from it. In driver assistance systems both can be used to solve varying assistance tasks. The proposed approach is evaluated on real driving data and outperforms several baseline approaches.
Basics of Oracle Text Retrieval
Most current information management systems can be classified into text retrieval systems, relational/object database systems, or semistructured/XML database systems. However, in practice, many application data sets involve a combination of free text, structured data, and semistructured data. Hence, integration of different types of information management systems has been, and continues to be, an active research topic. In this paper, we present a short survey of prior work on integrating and inter-operating between text, structured, and semistructured database systems. We classify existing literature based on the kinds of systems being integrated and the approach to integration. Based on this classification, we identify the challenges and the key themes underlying existing work in this area.
Machine learning plug-ins for GNU Radio Companion
This paper gives an insight into how to create classifier plug-ins (signal processing blocks) using hard-coded input for GNU Radio Companion (GRC). GNU Radio Companion is an open-source visual programming environment for real-time signal processing applications. At present there is no classifier block available inside the GRC tool. Here we introduce a low-cost classifier which utilizes two basic machine learning algorithms: linear regression and logistic regression. The creation of classifier plug-ins in open-source software enables easy handling of real-time classification problems during the transmission and reception of signals in Software Defined Radios. This work describes the development of signal processing blocks by changing the Python and C++ code of the 'gr-modtool' package. It is highly cost effective and has great potential since the GNU Radio software is open source and free.
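Since the abstract only names the intent of the classifier block, the following hedged sketch shows what such a plug-in could look like as an embedded Python block for GRC; the hard-coded logistic-regression weights and the single magnitude feature are placeholders of my own, not the authors' block.

    import numpy as np
    from gnuradio import gr

    class lr_classifier(gr.sync_block):
        """Toy logistic-regression block: consumes a float stream and emits a
        per-sample class score in [0, 1] from hard-coded weights (placeholder model)."""

        def __init__(self, w0=-1.0, w1=2.5):
            gr.sync_block.__init__(self, name="lr_classifier",
                                   in_sig=[np.float32], out_sig=[np.float32])
            self.w0, self.w1 = w0, w1          # hard-coded model parameters

        def work(self, input_items, output_items):
            x = input_items[0]
            z = self.w0 + self.w1 * np.abs(x)                   # single hand-crafted feature
            output_items[0][:] = 1.0 / (1.0 + np.exp(-z))       # sigmoid -> class probability
            return len(output_items[0])

A block like this can be dropped into a flowgraph between a demodulator and a sink; packaging it as an out-of-tree module generated with gr_modtool follows the same structure.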
The German Aortic Valve Registry (GARY): a nationwide registry for patients undergoing invasive therapy for severe aortic valve stenosis.
Background The increasing prevalence of severe aortic valve defects correlates with the increase of life expectancy. For decades, surgical aortic valve replacement (AVR), under the use of extracorporeal circulation, has been the gold standard for treatment of severe aortic valve diseases. In Germany ~12,000 patients receive isolated aortic valve surgery per year. For some time, percutaneous balloon valvuloplasty has been used as a palliative therapeutic option for very few patients. Currently, alternatives to the established surgical procedures such as transcatheter aortic valve implantation (TAVI) have become available, but there are only limited data from randomized studies or low-volume registries concerning long-term outcomes. In Germany, the implementation of this new technology into hospital care increased rapidly in the past few years. Therefore, the German Aortic Valve Registry (GARY) was founded in July 2010, including all available therapeutic options and providing data from a large quantity of patients. Methods The GARY is assembled as a complete survey for all invasive therapies in patients with relevant aortic valve diseases. It evaluates the new therapeutic options and compares them to surgical AVR. The model for data acquisition is based on three data sources: source I, the mandatory German database for external performance measurement; source II, a specific registry dataset; and source III, a follow-up data sheet (generated by phone interview). Various procedures will be compared concerning observed complications, mortality, and quality of life up to 5 years after the initial procedure. Furthermore, the registry will enable a compilation of evidence-based indication criteria and, in addition, also a comparison of all approved operative procedures, such as Ross or David procedures, and the use of different mechanical or biological aortic valve prostheses. Results Since the launch of data acquisition in July 2010, almost all institutions performing aortic valve procedures in Germany have joined the registry. By now, 91 sites which perform TAVI in Germany participate and more than 15,000 datasets are already in the registry. Conclusion The implementation of new or innovative medical therapies needs supervision under the conditions of a well-structured scientific project. Up to now relevant data on the implementation of TAVI and long-term results are missing. In contrast to randomized controlled trials, GARY is a prospective, controlled, 5-year observational multicenter registry and a real-world investigation with only one exclusion criterion, the absence of patients' written consent.
Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications
Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. We also present the important datasets and the benchmarking of the most influential methods. We conclude with a general discussion about trends, important questions and future lines of research.
Understanding environmental influences on walking: Review and research agenda.
BACKGROUND Understanding how environmental attributes can influence particular physical activity behaviors is a public health research priority. Walking is the most common physical activity behavior of adults; environmental innovations may be able to influence rates of participation. METHOD Review of studies on relationships of objectively assessed and perceived environmental attributes with walking. Associations with environmental attributes were examined separately for exercise and recreational walking, walking to get to and from places, and total walking. RESULTS Eighteen studies were identified. Aesthetic attributes, convenience of facilities for walking (sidewalks, trails); accessibility of destinations (stores, park, beach); and perceptions about traffic and busy roads were found to be associated with walking for particular purposes. Attributes associated with walking for exercise were different from those associated with walking to get to and from places. CONCLUSIONS While few studies have examined specific environment-walking relationships, early evidence is promising. Key elements of the research agenda are developing reliable and valid measures of environmental attributes and walking behaviors, determining whether environment-behavior relationships are causal, and developing theoretical models that account for environmental influences and their interactions with other determinants.
Neuroscience and Architecture: Seeking Common Ground
As these paired Commentaries discuss, neuroscientists and architects are just beginning to collaborate, each bringing what they know about their respective fields to the task of improving the environment of research buildings and laboratories.
Fullerene Anions: Unusual Charge Distribution in C70(6-).
The full NMR assignment by the INADEQUATE method clarifies the reduced aromaticity of the C70(6-) ion (see picture; a-e indicate the five different carbon environments) relative to the neutral system, as well as the charge delocalization pattern. The reduction process was carried out with lithium in the presence of corannulene, which acts as an "electron shuttle".
Exploiting Dynamic Resource Allocation for Efficient Parallel Data Processing in the Cloud
In recent years ad hoc parallel data processing has emerged to be one of the killer applications for Infrastructure-as-a-Service (IaaS) clouds. Major cloud computing companies have started to integrate frameworks for parallel data processing in their product portfolio, making it easy for customers to access these services and to deploy their programs. However, the processing frameworks which are currently used have been designed for static, homogeneous cluster setups and disregard the particular nature of a cloud. Consequently, the allocated compute resources may be inadequate for large parts of the submitted job and unnecessarily increase processing time and cost. In this paper, we discuss the opportunities and challenges for efficient parallel data processing in clouds and present our research project Nephele. Nephele is the first data processing framework to explicitly exploit the dynamic resource allocation offered by today's IaaS clouds for both task scheduling and execution. Particular tasks of a processing job can be assigned to different types of virtual machines which are automatically instantiated and terminated during the job execution. Based on this new framework, we perform extended evaluations of MapReduce-inspired processing jobs on an IaaS cloud system and compare the results to the popular data processing framework Hadoop.
Intermediation in Innovation
The paper offers a new theoretical framework to examine the role of intermediaries between creators and potential users of new inventions. Using a model of university-industry technology transfer, we demonstrate that technology transfer offices can provide an opportunity to economize on a critical component of efficient innovation investments: the expertise to locate new, external inventions and to overcome the problem of sorting ‘profitable’ from ‘unprofitable’ ones. The findings may help explain the surge in university patenting and licensing since the Bayh-Dole Act of 1980. Furthermore, the study identifies several limitations to the potential efficiency of intermediation in innovation.
Deep Captioning with Attention-Based Visual Concept Transfer Mechanism for Enriching Description
In this paper, we propose a novel deep captioning framework called Attention-based multimodal recurrent neural network with Visual Concept Transfer Mechanism (A-VCTM). There are three advantages of the proposed A-VCTM. (1) A multimodal layer is used to integrate the visual representation and context representation together, building a bridge that connects context information with visual information directly. (2) An attention mechanism is introduced to lead the model to focus on the regions corresponding to the next word to be generated. (3) We propose a visual concept transfer mechanism to generate novel visual concepts and enrich the description sentences. Qualitative and quantitative results on two standard benchmarks, MSCOCO and Flickr30K, show the effectiveness and practicability of the proposed A-VCTM framework.
Advancing high performance heterogeneous integration through die stacking
This paper describes the industry's first heterogeneous Stacked Silicon Interconnect (SSI) FPGA family (3D integration). Each device is housed in a low-temperature co-fired ceramic (LTCC) package for optimal signal integrity. Inside the package, a heterogeneous IC stack delivers up to 2.78Tb/s transceiver bandwidth. The resulting bandwidth is approximately three times that achievable in a monolithic solution. Mounted on a passive silicon interposer with through-silicon vias (TSVs), the heterogeneous IC stack comprises FPGA ICs with 13.1-Gb/s transceivers and dedicated analog ICs with 28-Gb/s transceivers. Optimization took place concurrently on multiple facets of the design which were necessary to successfully implement the 3-D integration. In particular, this paper outlines the choices that were made in terms of package substrate material and interposer resistivity in order to optimize 28Gb/s system channel characteristics. These choices were validated through extensive electrical simulation and test chip correlation. In addition, this paper describes the design and timing verification of inter-die interconnects, an area that the electronic design automation industry had not yet fully addressed. This paper further describes 3D thermal-mechanical modeling and analysis for package reliability. The modeling was performed to address package coplanarity issues and stresses imposed by the interposer on the active dice, the low-k dielectric material, the micro-bumps and the C4 attach. The results indicate heterogeneous stacked-silicon (3D) integration is a reliable method to build very high-bandwidth multi-chip devices that exceed current monolithic capabilities.
Rituximab in lymphocyte-predominant Hodgkin disease: results of a phase 2 trial.
Lymphocyte-predominant Hodgkin disease (LPHD) is a unique clinical entity characterized by indolent nodal disease that tends to relapse after standard radiotherapy or chemotherapy. The malignant cells of LPHD are CD20+ and therefore rituximab may have activity with fewer late effects than standard therapy. In this phase 2 trial, 22 patients with CD20+ LPHD received 4 weekly doses of rituximab at 375 mg/m2. Ten patients had previously been treated for Hodgkin disease, while 12 patients had untreated disease. All 22 patients responded to rituximab (overall response rate, 100%) with complete response (CR) in 9 (41%), unconfirmed complete response in 1 (5%), and partial response in 12 (54%). Acute treatment-related adverse events were minimal. With a median follow-up of 13 months, 9 patients had relapsed, and estimated median freedom from progression was 10.2 months. Progressive disease was biopsied in 5 patients: 3 had recurrent LPHD, while 2 patients had transformation to large-cell non-Hodgkin lymphoma (LCL). All 3 patients with recurrent LPHD were retreated with rituximab, with a second CR seen in 1 patient and stable disease in 2. Rituximab induced prompt tumor reduction in each of 22 LPHD patients with minimal acute toxicity; however, based on the relatively short response duration seen in our trial and the concerns about transformation, rituximab should be considered investigational treatment for LPHD. Further clinical trials are warranted to determine the optimal dosing schedule of rituximab, the potential for combination treatment, and the possible relationship of rituximab treatment to the development of LCL.
Can we steal your vocal identity from the Internet?: Initial investigation of cloning Obama's voice using GAN, WaveNet and low-quality found data
Thanks to the growing availability of spoofing databases and rapid advances in using them, systems for detecting voice spoofing attacks are becoming more and more capable, and error rates close to zero are being reached for the ASVspoof2015 database. However, speech synthesis and voice conversion paradigms that are not considered in the ASVspoof2015 database are appearing. Such examples include direct waveform modelling and generative adversarial networks. We also need to investigate the feasibility of training spoofing systems using only low-quality found data. For that purpose, we developed a generative adversarial network-based speech enhancement system that improves the quality of speech data found in publicly available sources. Using the enhanced data, we trained state-of-the-art text-to-speech and voice conversion models and evaluated them in terms of perceptual speech quality and speaker similarity. The results show that the enhancement models significantly improved the SNR of low-quality degraded data found in publicly available sources and that they significantly improved the perceptual cleanliness of the source speech without significantly degrading the naturalness of the voice. However, the results also show limitations when generating speech with the low-quality found data.
Genetic determinants of anti-malarial acquired immunity in a large multi-centre study
Many studies report associations between human genetic factors and immunity to malaria but few have been reliably replicated. These studies are usually country-specific, use small sample sizes and are not directly comparable due to differences in methodologies. This study brings together samples and data collected from multiple sites across Africa and Asia to use standardized methods to look for consistent genetic effects on anti-malarial antibody levels. Sera, DNA samples and clinical data were collected from 13,299 individuals from ten sites in Senegal, Mali, Burkina Faso, Sudan, Kenya, Tanzania, and Sri Lanka using standardized methods. DNA was extracted and typed for 202 Single Nucleotide Polymorphisms with known associations to malaria or antibody production, and antibody levels to four clinical grade malarial antigens [AMA1, MSP1, MSP2, and (NANP)4] plus total IgE were measured by ELISA techniques. Regression models were used to investigate the associations of clinical and genetic factors with antibody levels. Malaria infection increased levels of antibodies to malaria antigens and, as expected, stable predictors of anti-malarial antibody levels included age, seasonality, location, and ethnicity. Correlations between antibodies to blood-stage antigens AMA1, MSP1 and MSP2 were higher between themselves than with antibodies to the (NANP)4 epitope of the pre-erythrocytic circumsporozoite protein, while there was little or no correlation with total IgE levels. Individuals with sickle cell trait had significantly lower antibody levels to all blood-stage antigens, and recessive homozygotes for CD36 (rs321198) had significantly lower anti-malarial antibody levels to MSP2. Although the most significant finding with a consistent effect across sites was for sickle cell trait, its effect is likely to be via reducing a microscopically positive parasitaemia rather than directly on antibody levels. However, this study does demonstrate a framework for the feasibility of combining data from sites with heterogeneous malaria transmission levels across Africa and Asia with which to explore genetic effects on anti-malarial immunity.
Human-Like Rewards to Train a Reinforcement Learning Controller for Planar Arm Movement
High-level spinal cord injury (SCI) in humans causes paralysis below the neck. Functional electrical stimulation (FES) technology applies electrical current to nerves and muscles to restore movement, and controllers for upper extremity FES neuroprostheses calculate stimulation patterns to produce desired arm movement. However, currently available FES controllers have yet to restore natural movements. Reinforcement learning (RL) is a reward-driven control technique; it can employ user-generated rewards, and human preferences can be used in training. To test this concept with FES, we conducted simulation experiments using computer-generated “pseudo-human” rewards. Rewards with varying properties were used with an actor-critic RL controller for a planar two-degree-of-freedom biomechanical human arm model performing reaching movements. Results demonstrate that sparse, delayed pseudo-human rewards permit stable and effective RL controller learning. The frequency of reward is proportional to learning success, and human-scale sparse rewards permit greater learning than exclusively automated rewards. Diversity of training task sets did not affect learning. Long-term stability of trained controllers was observed. Using human-generated rewards to train RL controllers for upper-extremity FES systems may be useful. Our findings represent progress toward achieving human-machine teaming in control of upper-extremity FES systems for more natural arm movements based on human user preferences and RL algorithm learning capabilities.
ENHANCING THE TOURISM EXPERIENCE THROUGH MOBILE AUGMENTED REALITY: CHALLENGES AND PROSPECTS
The paper discusses the use of Augmented Reality (AR) applications for the needs of tourism. It describes the technology's evolution from pilot applications into commercial mobile applications. We address the technical aspects of mobile AR application development, emphasizing the technologies that render the delivery of augmented reality content possible and experientially superior. We examine the state of the art, providing an analysis concerning the development and the objectives of each application. Acknowledging the various technological limitations hindering AR's substantial end-user adoption, the paper proposes a model for developing AR mobile applications for the field of tourism, aiming to release AR's full potential within the field.
Frictional properties of diamond-like carbon, glassy carbon and nitrides with femtosecond-laser-induced nanostructure
This paper reports the macro- and micro-scale frictional properties of DLC, TiN and CrN films and of a GC substrate whose surfaces are nanostructured with femtosecond (fs) laser pulses. The friction coefficient μ of the nanostructured surface was measured at a usual load with a ball-on-disk friction test machine. The results show that the carbon materials DLC and GC provide lower values of μ than TiN and CrN, and that μ of DLC and TiN measured with a hardened steel ball decreases with an increase of the laser pulse energy. On the other hand, μ of the nanostructured surfaces of the thin films monotonously increases with an increase in laser pulse energy when measured with a micro-scratch test at an ultralight load of 1.5 mN utilizing a diamond tip. The friction coefficient of the GC substrate irradiated at a low fluence around the ablation threshold has shown a lower value than that of the non-irradiated surface.
Silicon carbide power electronics for electric vehicles
The success of electric vehicle drives heavily depends on the maximization of energy conversion efficiency. Losses in power electronic converters increase system size, weight and cost, raise energy demand and limit the operating distance range. Advances in semiconductor technology can remedy these problems and silicon carbide devices are of special interest in this context. This material enables manufacturing high-voltage devices with lower on-state voltage drop and shorter switching times, thus reducing both static and dynamic power loss. In this paper, recent achievements in silicon carbide technology as well as their applications in electrical vehicles have been reviewed.
Chronic pain after childbirth.
PURPOSE OF REVIEW Although childbirth is considered a natural event, some deliveries may necessitate instrumentation or surgical intervention. In contrast with trauma or surgery, persistent pain after delivery has received little attention until recently, despite the large number of individuals potentially at risk. RECENT FINDINGS Excluding pre-existing pain or pain that developed during pregnancy, prospective studies show a surprisingly low prevalence of persistent pain after childbirth, much lower than the prevalence reported in retrospective studies and that of persistent postsurgical pain in a general population for similar procedures. The nature of persistent pain itself remains poorly characterized; the chronic pain following caesarean delivery appears to be predominantly neuropathic, but the intensity is generally lower than usually reported for other types of chronic neuropathic pain. Finally, the type of delivery and the degree of tissue trauma do not seem to impact the risk of developing persistent pain. It is unclear whether individual factors place specific women at a risk for persistent pain. Experimental study suggests that protective mechanisms against the development of neuropathic pain may be active during the puerperium, but whether these mechanisms exist following human childbirth remains unknown. SUMMARY Some recent findings on the development of persistent pain after childbirth are intriguing and might open the way to interesting perspectives for the treatment of persistent pain caused by trauma or surgery.
A retail store SKU promotions optimization model for category multi-period profit maximization
Consumer promotions are an important element of competitive dynamics in retail markets and make a significant difference in the retailer's profits. But no study has so far included all the elements that are required to meet retail business objectives. We extend the existing literature by considering all the basic requirements for a promotional Decision Support System (DSS): reliance on operational (store-level) data only, the ability to predict sales as a function of prices, and the inclusion of other promotional variables affecting the category. The new model delivers an optimized promotional schedule at Stock-Keeping Unit (SKU) level which maximizes multi-period category-level profit under the constraints of business rules typically applied in practice. We first develop a high-dimensional distributed-lag demand model which integrates both cross-SKU competitive promotion information and cross-period promotional influences. We estimate the model by proposing a two-stage sign-constrained regularization approach to ensure realistic promotional parameters. Based on the demand model, we then build a nonlinear integer programming model to maximize the retailer's category profits over a planning horizon under constraints that model important business rules. The output of the model provides optimized prices, display and feature advertising planning together with sales and profit forecasts. Empirical tests over a number of stores and categories using supermarket data suggest that our model generates accurate sales forecasts and increases category profits by approximately 17%, and that including cross-item and cross-period effects is also valuable.
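For the estimation stage described above, the sketch below shows one simple way to impose sign constraints on promotional coefficients: columns with an expected negative sign are flipped so that a single non-negative least-squares solve enforces every constraint. The variable names, the scipy-based solver and the synthetic example are illustrative assumptions, not the paper's exact two-stage estimator.

    import numpy as np
    from scipy.optimize import nnls

    def sign_constrained_ls(X, y, signs):
        """Least squares with per-coefficient sign constraints.
        signs[k] = +1 forces coefficient k >= 0, -1 forces coefficient k <= 0."""
        signs = np.asarray(signs, dtype=float)
        coef_pos, _ = nnls(X * signs, y)      # flipping columns turns all constraints into >= 0
        return coef_pos * signs               # map back to the requested signs

    # Synthetic example: log-sales on an intercept, own log-price (negative effect expected)
    # and display / feature-ad dummies (non-negative effects expected).
    rng = np.random.default_rng(1)
    n = 200
    price = rng.uniform(0.5, 2.0, n)
    display = rng.integers(0, 2, n).astype(float)
    feature = rng.integers(0, 2, n).astype(float)
    sales = 3.0 - 1.8 * np.log(price) + 0.4 * display + 0.6 * feature + rng.normal(0, 0.1, n)
    X = np.column_stack([np.ones(n), np.log(price), display, feature])
    print(sign_constrained_ls(X, sales, signs=[+1, -1, +1, +1]))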
Rethinking Vehicular Communications: Merging VANET with cloud computing
Despite the surge in Vehicular Ad Hoc NETwork (VANET) research, future high-end vehicles are expected to under-utilize their on-board computation, communication, and storage resources. Olariu et al. envisioned the next paradigm shift from conventional VANET to Vehicular Cloud Computing (VCC) by merging VANET with cloud computing. But to date there is no solid architecture in the literature for cloud computing from the VANET standpoint. In this paper, we put forth a taxonomy of VANET-based cloud computing. It is, to the best of our knowledge, the first effort to define a VANET cloud architecture. Additionally, we divide VANET clouds into three architectural frameworks named Vehicular Clouds (VC), Vehicles using Clouds (VuC), and Hybrid Vehicular Clouds (HVC). We also outline the unique security and privacy issues and research challenges in VANET clouds.
Modularity and architecture of PLC-based software for automated production Systems: An analysis in industrial companies
Adaptive and flexible production systems require modular and reusable software, especially considering their long-term life cycle of up to 50 years. SWMAT4aPS, an approach to measure Software Maturity for automated Production Systems, is introduced. The approach identifies weaknesses and strengths of various companies' solutions for modularity of software in the design of automated Production Systems (aPS). At first, a self-assessed questionnaire is used to evaluate a large number of companies concerning their software maturity. Secondly, we analyze PLC code, architectural levels, workflows and abilities to configure code automatically out of engineering information in four selected companies. In this paper, the questionnaire results from 16 German world-leading companies in machine and plant manufacturing and four case studies validating the results from the detailed analyses are introduced to prove the applicability of the approach and give a survey of the state of the art in industry.
A New Switching Strategy for Pulse Width Modulation (PWM) Power Converters
This paper presents a new switching strategy for pulse width modulation (PWM) power converters. Since the proposed strategy uses independent on/off switching action of the upper or lower arm according to the polarity of the current, dead time is not needed except at the instant of current polarity change. Therefore, it is not necessary to compensate for the dead-time effect, and the possibility of an arm short is effectively eliminated. The current control of PWM power converters can easily adopt the proposed switching strategy by using the polarity information of the reference current instead of the real current, thus eliminating the problems that commonly arise from real current detection. In order to confirm the usefulness of the proposed switching strategy, experimental tests were done using a single-phase inverter with passive loads, a three-phase inverter for induction motor drives, a three-phase ac/dc PWM converter, a three-phase active power filter, and a class-D amplifier, the results of which are presented in this paper.
Knowledge, attitude and techniques of breastfeeding among Nigerian mothers from a semi-urban community
Mothers’ poor knowledge and negative attitude towards breastfeeding may influence practices and constitute barriers to optimizing the benefits of the baby-friendly initiative. This study assessed breastfeeding knowledge, attitude and techniques of postures, positioning, hold practice and latch-on among Nigerian mothers from a semi-urban community. Three hundred and eighty-three consenting lactating mothers who had breastfed for 6 months and up to two years volunteered for this cross-sectional survey, yielding a response rate of 95.7%. A self-administered questionnaire that sought information on maternal socio-demographic variables, knowledge, attitudes and breastfeeding techniques of mothers was employed. Based on cumulative breastfeeding knowledge and attitude scores, 71.3% of the respondents had good knowledge while 54.0% had a positive attitude. Seventy-one point three percent practiced advisable breastfeeding postures. Sitting on a chair to breastfeed was common (62.4%); comfort of mother/baby (60.8%) and convenience (29.5%) were the main reasons for adopting breastfeeding positions. Cross-cradle hold (80.4%), football hold technique (13.3%), breast-to-baby (18.0%) and baby-to-breast latch-on (41.3%) were the common breastfeeding techniques. A majority of the respondents (75.7%) agreed that neck flexion, slight back flexion, arm support with a pillow and a foot rest are essential during breastfeeding. There was no significant association between breastfeeding posture practice and either the cumulative breastfeeding knowledge score levels (χ² = 0.044; p = 0.834) or the attitude score levels (χ² = 0.700; p = 0.403). Nigerian mothers demonstrated good knowledge and a positive attitude towards breastfeeding. Most of the mothers practiced advisable breastfeeding postures, preferred sitting on a chair to breastfeed and utilized cross-cradle hold and baby-to-breast latch-on.
Using Jane Jacobs and Henry George to Tame Gentrification
The solutions that Jane Jacobs proposed to improve neighborhoods created a paradoxical problem: improvement increased demand for the amenities of the area, which caused land prices to rise. The net result was at least partial displacement of the old residents of the neighborhood with new ones. Jane Jacobs has been criticized for ignoring gentrification, but she was clearly aware of this process and tried to find means to counter it. By combining the ideas of Henry George about land taxation with the ideals of Jane Jacobs about neighborhood diversity, we can mitigate the negative effects of gentrification and direct the energy of market forces into producing a greater supply of desirable neighborhoods.
Re-architecting the on-chip memory sub-system of machine-learning accelerator for embedded devices
The rapid development of deep learning is enabling a variety of novel applications such as image and speech recognition for embedded systems, robotics and smart wearable devices. However, typical deep learning models like deep convolutional neural networks (CNNs) consume so much on-chip storage and high-throughput compute resource that they cannot be easily handled by mobile or embedded devices with a thrifty silicon and power budget. In order to enable large CNN models in mobile or more cutting-edge devices for IoT or cyber-physical applications, we propose an efficient on-chip memory architecture for CNN inference acceleration and show its application to our in-house general-purpose deep learning accelerator. The redesigned on-chip memory subsystem, Memsqueezer, includes an active weight buffer set and a data buffer set that employ specialized compression methods to reduce the footprint of the CNN weight and data sets, respectively. The Memsqueezer buffers can compress the data and weight sets according to their distinct features, and they also include a built-in redundancy detection mechanism that actively scans through the working set of CNNs to boost their inference performance by eliminating data redundancy. In our experiments, the CNN accelerators with Memsqueezer buffers achieve more than 2x performance improvement and reduce energy consumption by 80% on average over a conventional buffer design with the same area budget.
A Queueing Network Model of Patient Flow in an Accident and Emergency Department
In many complex processing systems with limited resources, fast response times are demanded, but are seldom delivered. This is an especially serious problem in healthcare systems providing critical patient care. In this paper, we develop a multiclass Markovian queueing network model of patient flow in the Accident and Emergency department of a major London hospital. Using real patient timing data to help parameterise the model, we solve for moments and probability density functions of patient response time using discrete event simulation. We experiment with different patient handling priority schemes and compare the resulting response time moments and densities with real data.
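As a small illustration of the modelling style described above, the sketch below simulates a single multi-server station with two patient classes under a non-preemptive priority rule and reports mean response times; the arrival and service rates, the number of servers and the two-class priority rule are illustrative assumptions, not the parameters of the London department studied in the paper.

    import heapq, random

    def simulate(lam=(1.0, 0.5), mu=1.2, servers=3, horizon=10_000, seed=0):
        """Two-class, non-preemptive priority M/M/c station; class 0 has priority.
        Returns the mean response time (wait + service) per class."""
        random.seed(seed)
        events = []                                        # (time, kind, class)
        for c, l in enumerate(lam):
            heapq.heappush(events, (random.expovariate(l), "arr", c))
        queue = [[], []]                                   # FIFO queue per class
        free, waits, counts = servers, [0.0, 0.0], [0, 0]
        while events:
            t, kind, c = heapq.heappop(events)
            if t > horizon:
                break
            if kind == "arr":
                heapq.heappush(events, (t + random.expovariate(lam[c]), "arr", c))
                queue[c].append(t)
            else:                                          # a departure frees a server
                free += 1
            while free > 0 and (queue[0] or queue[1]):     # start service, priority class first
                k = 0 if queue[0] else 1
                arrival = queue[k].pop(0)
                service = random.expovariate(mu)
                waits[k] += (t - arrival) + service
                counts[k] += 1
                free -= 1
                heapq.heappush(events, (t + service, "dep", k))
        return [waits[k] / max(counts[k], 1) for k in (0, 1)]

    print(simulate())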
Repairing post burn scar contractures with a rare form of Z-plasty.
BACKGROUND Although many precautions have been introduced into early burn management, post burn contractures are still significant problems in burn patients. In this study, a form of Z-plasty in combination with relaxing incision was used for the correction of contractures. METHODS Preoperatively, a Z-advancement rotation flap combined with a relaxing incision was drawn on the contracture line. Relaxing incision created a skin defect like a rhomboid. Afterwards, both limbs of the Z flap were incised. After preparation of the flaps, advancement and rotation were made in order to cover the rhomboid defect. Besides subcutaneous tissue, skin edges were closely approximated with sutures. RESULTS This study included sixteen patients treated successfully with this flap. It was used without encountering any major complications such as infection, hematoma, flap loss, suture dehiscence or flap necrosis. All rotated and advanced flaps healed uneventfully. In all but one patient, effective contracture release was achieved by means of using one or two Z-plasty. In one patient suffering severe left upper extremity contracture, a little residual contracture remained due to inadequate release. CONCLUSION When dealing with this type of Z-plasty for mild contractures, it offers a new option for the correction of post burn contractures, which is safe, simple and effective.
Dealing with Data Difficulty Factors While Learning from Imbalanced Data
Learning from imbalanced data is still one of the challenging tasks in machine learning and data mining. We discuss the following data difficulty factors which deteriorate classification performance: decomposition of the minority class into rare sub-concepts, overlapping of classes, and distinguishing different types of examples. New experimental studies showing the influence of these factors on classifiers are presented. The paper also includes critical discussions of methods for their identification in real-world data. Finally, open research issues are stated.
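One of the identification methods discussed above, categorizing each minority example by the class mix of its nearest neighbours, can be sketched in a few lines; the k = 5 neighbourhood and the safe/borderline/rare/outlier thresholds follow the commonly used labelling scheme, while the function name and the scikit-learn implementation are my own illustrative choices.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def minority_example_types(X, y, minority_label, k=5):
        """Count minority examples that are safe, borderline, rare or outliers,
        based on how many of their k nearest neighbours share the minority class."""
        X, y = np.asarray(X), np.asarray(y)
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)            # +1 to skip the point itself
        _, idx = nn.kneighbors(X[y == minority_label])
        same = (y[idx[:, 1:]] == minority_label).sum(axis=1)       # minority neighbours per point
        bins = {"safe": 0, "borderline": 0, "rare": 0, "outlier": 0}
        for s in same:
            if s >= 4:
                bins["safe"] += 1
            elif s >= 2:
                bins["borderline"] += 1
            elif s == 1:
                bins["rare"] += 1
            else:
                bins["outlier"] += 1
        return bins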
Real-time 6D stereo Visual Odometry with non-overlapping fields of view
In this paper, we present a framework for 6D absolute scale motion and structure estimation of a multi-camera system in challenging indoor environments. It operates in real-time and employs information from two cameras with non-overlapping fields of view. Monocular Visual Odometry supplying up-to-scale 6D motion information is carried out in each of the cameras, and the metric scale is recovered via a linear solution by imposing the known static transformation between both sensors. The redundancy in the motion estimates is finally exploited by a statistical fusion to an optimal 6D metric result. The proposed technique is robust to outliers and able to continuously deliver a reasonable measurement of the scale factor. The quality of the framework is demonstrated by a concise evaluation on indoor datasets, including a comparison to accurate ground truth data provided by an external motion tracking system.
Unimodal thresholding
Most thresholding algorithms have difficulties processing images with unimodal distributions. In this paper an algorithm, based on finding a corner in the histogram plot, is proposed that is capable of performing bilevel thresholding of such images. Its effectiveness is demonstrated on synthetic data as well as a variety of real data, showing its application to edges, difference images, optic flow, texture difference images, polygonal approximation of curves, and image segmentation.
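A widely used formulation of the corner-finding idea is to draw a line from the histogram peak to the last non-empty bin and take the bin with the maximum perpendicular distance to that line as the threshold. The sketch below implements that formulation on a toy unimodal histogram; it is an illustration of the general approach, not the paper's reference code.

```python
# Sketch of the corner-finding idea behind unimodal thresholding: pick the
# histogram bin furthest (perpendicularly) from the peak-to-tail line.
import numpy as np

def unimodal_threshold(hist):
    hist = np.asarray(hist, dtype=float)
    p = int(np.argmax(hist))                    # peak bin
    e = int(np.nonzero(hist)[0].max())          # last non-empty bin
    x1, y1, x2, y2 = p, hist[p], e, hist[e]
    xs = np.arange(p, e + 1)
    ys = hist[p:e + 1]
    # perpendicular distance of each (x, y) to the line (x1, y1)-(x2, y2)
    num = np.abs((y2 - y1) * xs - (x2 - x1) * ys + x2 * y1 - y2 * x1)
    den = np.hypot(y2 - y1, x2 - x1)
    return int(xs[np.argmax(num / den)])

# toy unimodal histogram: a single peak followed by a long right tail
h = np.concatenate([np.linspace(0, 100, 10), 100 * np.exp(-np.arange(60) / 8)])
print(unimodal_threshold(h))
```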
RIOT OS: Towards an OS for the Internet of Things
The Internet of Things (IoT) is characterized by heterogeneous devices. They range from very lightweight sensors powered by 8-bit microcontrollers (MCUs) to devices equipped with more powerful, but energy-efficient, 32-bit processors. Neither the traditional operating systems (OS) currently running on Internet hosts nor typical OSs for sensor networks are capable of fulfilling the diverse requirements of such a wide range of devices. To leverage the IoT, redundant development should be avoided and maintenance costs should be reduced. In this paper we revisit the requirements for an OS in the IoT. We introduce RIOT OS, an OS that explicitly considers devices with minimal resources but eases development across a wide range of devices. RIOT OS allows for standard C and C++ programming, provides multi-threading as well as real-time capabilities, and needs only a minimum of 1.5 kB of RAM.
Resonant Enhancements in WIMP Capture by the Earth
The exact formulae for the capture of WIMPs (weakly interacting massive particles) by a massive body are derived. Capture by the earth is found to be significantly enhanced whenever the WIMP mass is roughly equal to the nuclear mass of an element present in the earth in large quantities. For Dirac neutrino WIMPs of mass 10 to 90 GeV, the capture rate is 10 to 300 times that previously believed. Capture rates for the sun are also recalculated and found to be from 1.5 times higher to 3 times lower than previously believed, depending on the mass and type of WIMP. The earth alone, or the earth in combination with the sun, is found to give a much stronger annihilation signal from Dirac neutrino WIMPs than the sun alone over a very large mass range. This is particularly important in the neighborhood of the mass of iron, where previous analyses could not set any significant limits.
On the Trivariate Non-Central Chi-Squared Distribution
In this paper, we derive a new infinite series representation for the trivariate non-central chi-squared distribution when the underlying correlated Gaussian variables have a tridiagonal form of inverse covariance matrix. We make use of Miller's approach and Dougall's identity to derive the joint density function. Moreover, the trivariate cumulative distribution function (cdf) and characteristic function (chf) are also derived. Finally, the bivariate non-central chi-squared distribution and some known forms are shown to be special cases of the more general distribution. However, the non-central chi-squared distribution for an arbitrary covariance matrix seems intractable with Miller's approach.
Datum: Managing Data Purchasing and Data Placement in a Geo-Distributed Data Market
This paper studies two design tasks faced by a geo-distributed cloud data market: which data to purchase (data purchasing) and where to place/replicate the data for delivery (data placement). We show that the joint problem of data purchasing and data placement within a cloud data market can be viewed as a facility location problem and is thus NP-hard. However, we give a provably optimal algorithm for the case of a data market made up of a single data center, and then generalize the structure from the single data center setting in order to develop a near-optimal, polynomial-time algorithm for a geo-distributed data market. The resulting design, $\mathsf{Datum}$, decomposes the joint purchasing and placement problem into two subproblems, one for data purchasing and one for data placement, using a transformation of the underlying bandwidth costs. We show, via a case study, that $\mathsf{Datum}$ is near-optimal (within 1.6%) in practical settings.
A scaling normalization method for differential expression analysis of RNA-seq data
The fine detail provided by sequencing-based transcriptome surveys suggests that RNA-seq is likely to become the platform of choice for interrogating steady state RNA. In order to discover biologically important changes in expression, we show that normalization continues to be an essential step in the analysis. We outline a simple and effective method for performing normalization and show dramatically improved results for inferring differential expression in simulated and publicly available data sets.
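To make the idea concrete, the sketch below computes a simplified trimmed-mean-of-M-values scaling factor between a test library and a reference library. It omits the precision weights used in the published method, and the counts are synthetic, so treat it only as an illustration of how trimming per-gene log-ratios yields a normalization factor.

```python
# Simplified sketch of a trimmed-mean-of-M-values (TMM-style) scaling factor;
# the published method also weights genes by asymptotic variances, omitted here.
import numpy as np

def tmm_factor(test, ref, logratio_trim=0.3, abs_trim=0.05):
    test, ref = np.asarray(test, float), np.asarray(ref, float)
    keep = (test > 0) & (ref > 0)
    t, r = test[keep] / test.sum(), ref[keep] / ref.sum()
    M = np.log2(t / r)                        # per-gene log fold change
    A = 0.5 * np.log2(t * r)                  # per-gene average log expression
    lo_m, hi_m = np.quantile(M, [logratio_trim, 1 - logratio_trim])
    lo_a, hi_a = np.quantile(A, [abs_trim, 1 - abs_trim])
    trimmed = (M > lo_m) & (M < hi_m) & (A > lo_a) & (A < hi_a)
    return 2 ** M[trimmed].mean()             # scaling factor for `test`

rng = np.random.default_rng(0)
ref = rng.poisson(50, size=1000)
test = ref.copy()
test[:50] *= 20   # a handful of highly "expressed" genes inflate the test library
print(round(float(tmm_factor(test, ref)), 3))   # factor < 1 compensates the bias
```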
The effects of femoral derotation osteotomy in children with cerebral palsy: an evaluation using energy cost and functional mobility.
BACKGROUND The effect of femoral derotation osteotomy (FDO) in children with cerebral palsy (CP) has hitherto been examined using various outcome measures, including range of motion of lower extremity joints and gait parameters. However, functional ambulation following this procedure has scarcely been investigated. OBJECTIVE To evaluate the effect of FDO on energy cost during stair climbing and functional mobility in children with CP. METHOD A prospective case series study was conducted on 18 children with CP, 11 at Gross Motor Functional Classification System (GMFCS) level II and 7 at GMFCS level III, aged 8.5 +/- 1.24 years (range, 6.9-11 years), who underwent FDO to correct hip internal rotation. Energy cost was measured using the heart beat cost index (HBCI) during a stair-climbing test, whereas functional mobility was assessed using the Gillette Functional Assessment Questionnaire (FAQ). Tests were administered before surgery (P0), at 6 months (P1), and at approximately one year postoperatively (P2). RESULTS Compared with P0, significant changes in hip rotation were observed at P1 and P2. There was a significant improvement in HBCI from P0 to P2, whereas FAQ improved significantly from P1 to P2. A moderate correlation was found between HBCI and GMFCS at all times (r = 0.61-0.78). Negative correlations were found between HBCI and FAQ and between GMFCS and FAQ at all times (r = -0.5). CONCLUSION This study indicates that children with CP may benefit functionally from FDO as judged by HBCI and functional mobility rating.
Hand Gesture Recognition System
Gestures are a major form of human communication. Hence gestures are an appealing way to interact with computers, since they are already a natural part of how people communicate. A primary goal of gesture recognition is to create a system which can identify specific human gestures and use them to convey information for controlling devices. By implementing real-time gesture recognition, a user can control a computer by performing a specific gesture in front of a video camera linked to the computer. The primary goal of this gesture recognition research is to create a system which can identify specific human gestures for controlling traffic signals and the mouse. This project also covers various issues such as what gestures are, their classification, their role in implementing a gesture recognition system for traffic and mouse control, system architecture concepts for implementing a gesture recognition system, the major issues involved in implementing such a system, and the future scope of gesture recognition systems. For the implementation of this system, a real-time hand tracking and extraction algorithm and feature extraction are used.
FPDetective: dusting the web for fingerprinters
In the modern web, the browser has emerged as the vehicle of choice that users trust, customize, and use to access a wealth of information and online services. However, recent studies show that the browser can also be used to invisibly fingerprint the user: a practice that may have serious privacy and security implications. In this paper, we report on the design, implementation and deployment of FPDetective, a framework for the detection and analysis of web-based fingerprinters. Instead of relying on information about known fingerprinters or third-party tracking blacklists, FPDetective focuses on the detection of the fingerprinting itself. By applying our framework with a focus on font detection practices, we were able to conduct a large-scale analysis of the million most popular websites of the Internet, and discovered that the adoption of fingerprinting is much higher than previous studies had estimated. Moreover, we analyze two countermeasures that have been proposed to defend against fingerprinting and find weaknesses in them that might be exploited to bypass their protection. Finally, based on our findings, we discuss the current understanding of fingerprinting and how it is related to Personally Identifiable Information, showing that there needs to be a change in the way users, companies and legislators engage with fingerprinting.
Creative accounting: some ethical issues of macro- and micro-manipulation
This paper examines two principal categories of manipulative behaviour. The term ‘macro-manipulation’ is used to describe the lobbying of regulators to persuade them to produce regulation that is more favourable to the interests of preparers. ‘Micromanipulation’ describes the management of accounting figures to produce a biased view at the entity level. Both categories of manipulation can be viewed as attempts at creativity by financial statement preparers. The paper analyses two cases of manipulation which are considered in an ethical context. The paper concludes that the manipulations described in it can be regarded as morally reprehensible. They are not fair to users, they involve an unjust exercise of power, and they tend to weaken the authority of accounting regulators.
Interaction of the surface of ribbons of amorphous soft-magnetic alloys with vapor during isothermal holding upon heat treatment
The effect of in-air heat treatment combined with water vapor on the magnetization distribution and magnetic properties has been investigated using ribbons of the amorphous soft-magnetic alloys Fe77Ni1Si9B13 and Fe81B13Si4C2 with positive magnetostriction. The results show a temperature lag in the dependence of the maximum magnetic permeability and of the relative volume of domains with orthogonal magnetization on the isothermal-holding temperature. This effect can be associated with the inhibition of surface crystallization processes by hydrogen and oxygen atoms introduced into the ribbon surface. Distinctive effects of heat treatment with and without vapor on the magnetization distribution in the ribbon plane have been found; these are explained within the theory of directed ordering, with allowance for crystallization processes at the cooling stage. This demonstrates the importance of the contribution of diffusion processes at this stage of treatment to the formation of the level of magnetic properties. It has been shown that the interaction of the ribbon surface with water vapor is not physical adsorption. Interaction with atmospheric gases is carried out by dispersion forces and influences the magnetization distribution in the ribbon plane and the maximum magnetic permeability.
Accidental death due to complete autoerotic asphyxia associated with transvestic fetishism and anal self-stimulation - case report.
A case is reported of a 36-year-old male, found dead in his locked room, lying on a bed, dressed in his mother's clothes, with a plastic bag over his head, hands tied and with a wooden barrel cork in his rectum. Two pornographic magazines were found on a chair near the bed, positioned so that the deceased could see them well. Asphyxia was controlled with a complex apparatus which consisted of two elastic luggage rack straps, the first surrounding his waist, perineum, and buttocks, and the second the back of his body and neck. According to the psychological autopsy based on a structured interview (SCID-I, SCID-II) with his father, the deceased was single, unemployed and with a partial college education. He had grown up in a poor family with a reserved father and a dominant mother, and the findings were indicative of his fulfilling DSM-IV diagnostic criteria for alcohol dependence, paraphilia involving hypoxyphilia with transvestic fetishism and anal masturbation, and a borderline personality disorder. There was no evidence of previous psychiatric treatment. The Circumstances subscale of Beck's Suicidal Intent Scale (SIS-CS) pointed to the lack of final acts (thoughts or plans) in anticipation of death, and the absence of a suicide note or overt communication of suicidal intent before death. Integration of the crime scene data with those of the forensic medicine and psychological autopsy enabled identification of the event as an accidental death, caused by neck strangulation, suffocation by a plastic bag, and vagal stimulation due to a foreign body in the rectum.
Automating change-level self-admitted technical debt determination
Technical debt (TD) is a metaphor to describe the situation where developers introduce suboptimal solutions during software development to achieve short-term goals that may affect the long-term software quality. Prior studies proposed different techniques to identify TD, such as identifying TD through code smells or by analyzing source code comments. Technical debt identified using comments is known as Self-Admitted Technical Debt (SATD) and refers to TD that is introduced intentionally. Compared with TD identified by code metrics or code smells, SATD is more reliable since it is admitted by developers using comments. Thus far, all of the state-of-the-art approaches identify SATD at the file-level. In essence, they identify whether a file has SATD or not. However, all of the SATD is introduced through software changes. Previous studies that identify SATD at the file-level in isolation cannot describe the TD context related to multiple files. Therefore, it is beneficial to identify the SATD once a change is being made. We refer to this type of TD identification as “Change-level SATD Determination”, which determines whether or not a change introduces SATD. Identifying SATD at the change-level can help to manage and control TD by understanding the TD context through tracing the introducing changes. To build a change-level SATD Determination model, we first identify TD from source code comments in source code files of all versions. Second, we label the changes that first introduce the SATD comments as TD-introducing changes. Third, we build the determination model by extracting 25 features from software changes that are divided into three dimensions, namely diffusion, history and message, respectively. To evaluate the effectiveness of our proposed model, we perform an empirical study on 7 open source projects containing a total of 100,011 software changes. The experimental results show that our model achieves a promising and better performance than four baselines in terms of AUC and cost-effectiveness (i.e., percentage of TD-introducing changes identified when inspecting 20% of changed LOC). On average across the 7 experimental projects, our model achieves AUC of 0.82, cost-effectiveness of 0.80, which is a significant improvement over the comparison baselines used. In addition, we found that “Diffusion” is the most discriminative dimension among the three dimensions of features for determining TD-introducing changes.
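As an illustration of what a change-level determination model looks like in practice, the sketch below trains a classifier on a table of change features and evaluates it with cross-validated AUC. The feature names and labels are synthetic placeholders standing in for the paper's 25 diffusion/history/message features, so the reported AUC is meaningless beyond showing the workflow.

```python
# Illustrative sketch only: train a change-level classifier on a feature table
# (columns are hypothetical placeholders for diffusion / history / message
# features) and report cross-validated AUC. Labels are random, so AUC ~ 0.5.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
changes = pd.DataFrame({
    "files_touched": rng.integers(1, 20, n),          # diffusion-style feature
    "lines_added": rng.integers(0, 500, n),
    "author_prior_changes": rng.integers(0, 300, n),  # history-style feature
    "msg_length": rng.integers(5, 200, n),            # message-style feature
})
y = (rng.random(n) < 0.1).astype(int)                 # synthetic TD-introducing labels

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
auc = cross_val_score(clf, changes, y, cv=5, scoring="roc_auc")
print("mean AUC:", auc.mean().round(3))
```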
A 0.3 GHz to 1.4 GHz N-path mixer-based code-domain RX with TX self-interference rejection
A code-domain N-path RX is proposed based on PN-code modulated LO pulses for concurrent reception of two code-modulated signals. Additionally, a combination of Walsh-function and PN sequences is proposed to translate in-band TX self-interference (SI) to out-of-band at the N-path RX output, enabling frequency filtering for high SI rejection. A 0.3 GHz–1.4 GHz 65-nm CMOS implementation has 35 dB gain for desired signals and concurrently receives two RX signals while rejecting mismatched spreading codes at the RF input. The proposed TX SI mitigation approach results in 38.5 dB rejection for a −11.8 dBm 1.46 Mb/s QPSK-modulated SI at the RX input. The RX achieves 23.7 dBm OP1dB for in-band SI, while consuming ∼35 mW and occupying 0.31 mm2.
Bayesian reinforcement learning in continuous POMDPs with application to robot navigation
We consider the problem of optimal control in continuous and partially observable environments when the parameters of the model are not known exactly. Partially observable Markov decision processes (POMDPs) provide a rich mathematical model to handle such environments but require a known model to be solved by most approaches. This is a limitation in practice as the exact model parameters are often difficult to specify exactly. We adopt a Bayesian approach where a posterior distribution over the model parameters is maintained and updated through experience with the environment. We propose a particle filter algorithm to maintain the posterior distribution and an online planning algorithm, based on trajectory sampling, to plan the best action to perform under the current posterior. The resulting approach selects control actions which optimally trade-off between 1) exploring the environment to learn the model, 2) identifying the system's state, and 3) exploiting its knowledge in order to maximize long-term rewards. Our preliminary results on a simulated robot navigation problem show that our approach is able to learn good models of the sensors and actuators, and performs as well as if it had the true model.
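The core Bayesian ingredient, maintaining a particle approximation of the posterior over unknown model parameters, can be sketched in a few lines. The example below tracks a single unknown drift parameter of a toy one-dimensional system; it is not the paper's POMDP planner, just an illustration of reweighting and resampling particles as observations arrive.

```python
# Minimal sketch of the Bayesian idea: keep a particle set over an unknown
# model parameter (a scalar "drift" of a 1-D system, purely illustrative),
# reweight particles by the likelihood of each observation, and resample.
import numpy as np

rng = np.random.default_rng(0)
true_drift, obs_noise = 0.7, 0.5
n_particles = 1000
particles = rng.uniform(-2, 2, n_particles)       # prior over the unknown drift
weights = np.full(n_particles, 1.0 / n_particles)

state = 0.0
for t in range(50):
    state += true_drift                           # unknown (deterministic) dynamics
    obs = state + rng.normal(0, obs_noise)        # noisy observation
    pred = (t + 1) * particles                    # each particle's predicted state
    lik = np.exp(-0.5 * ((obs - pred) / obs_noise) ** 2)
    weights *= lik
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:     # effective sample size low
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx] + rng.normal(0, 0.01, n_particles)  # jitter
        weights.fill(1.0 / n_particles)

print("posterior mean drift:", float(np.sum(weights * particles)))
```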
Dynamic conditional random fields: factorized probabilistic models for labeling and segmenting sequence data
In sequence modeling, we often wish to represent complex interaction between labels, such as when performing multiple, cascaded labeling tasks on the same sequence, or when long-range dependencies exist. We present dynamic conditional random fields (DCRFs), a generalization of linear-chain conditional random fields (CRFs) in which each time slice contains a set of state variables and edges---a distributed state representation as in dynamic Bayesian networks (DBNs)---and parameters are tied across slices. Since exact inference can be intractable in such models, we perform approximate inference using several schedules for belief propagation, including tree-based reparameterization (TRP). On a natural-language chunking task, we show that a DCRF performs better than a series of linear-chain CRFs, achieving comparable performance using only half the training data.
Accelerated DP based search for statistical translation
In this paper, we describe a fast search algorithm for statistical translation based on dynamic programming (DP) and present experimental results. The approach is based on the assumption that the word alignment is monotone with respect to the word order in both languages. To reduce the search effort for this approach, we introduce two methods: an acceleration technique to efficiently compute the dynamic programming recursion equation and a beam search strategy as used in speech recognition. The experimental tests carried out on the Verbmobil corpus showed that the search space, measured by the number of translation hypotheses, is reduced by a factor of about 230 without affecting the translation performance.
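The beam search idea can be illustrated independently of the translation model: keep only the best-scoring partial hypotheses after consuming each source word. The toy sketch below does this with an invented word-to-word lexicon and monotone alignment; it is not the Verbmobil system, only a picture of the pruning strategy.

```python
# Toy sketch of monotone DP translation with beam pruning (illustrative only;
# the lexicon probabilities below are invented, not a trained model).
import math

lexicon = {                      # p(target | source), hypothetical values
    "das": {"the": 0.7, "that": 0.3},
    "haus": {"house": 0.9, "home": 0.1},
    "ist": {"is": 0.95, "be": 0.05},
    "klein": {"small": 0.8, "little": 0.2},
}

def translate(source, beam_size=3):
    # each hypothesis: (score = sum of log-probs, partial translation)
    beam = [(0.0, [])]
    for word in source:                       # monotone: consume source left to right
        expanded = [(score + math.log(p), out + [tgt])
                    for score, out in beam
                    for tgt, p in lexicon[word].items()]
        expanded.sort(key=lambda h: h[0], reverse=True)
        beam = expanded[:beam_size]           # beam pruning keeps only the best few
    return beam[0]

score, words = translate(["das", "haus", "ist", "klein"])
print(" ".join(words), round(score, 3))
```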
Face Recognition Based on Wavelet Transform and PCA
A novel technique for face recognition is presented in this paper. The wavelet transform, PCA and SVM are combined in this technique. The wavelet transform is used for preprocessing the images in order to handle bad illumination. In the recognition stage, a support vector machine (SVM) is adopted as the classifier. Experiments based on the Cambridge ORL face database indicate that our approach achieves better accuracy than using PCA alone.
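A pipeline in this spirit can be sketched with standard libraries (assuming PyWavelets and scikit-learn): a 2-D wavelet decomposition keeps the low-frequency sub-band, PCA reduces dimensionality, and an SVM classifies. Random arrays stand in for ORL face images below, so the printed accuracy is meaningless; the point is the shape of the pipeline.

```python
# Pipeline sketch: 2-D wavelet decomposition -> keep the low-frequency (LL)
# sub-band -> PCA -> SVM. Random arrays replace real face images.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import SVC

def wavelet_features(images):
    feats = []
    for img in images:
        ll, _ = pywt.dwt2(img, "haar")        # approximation (LL) sub-band
        feats.append(ll.ravel())
    return np.array(feats)

rng = np.random.default_rng(0)
X = rng.random((80, 112, 92))                 # 80 "faces" at ORL resolution
y = rng.integers(0, 10, 80)                   # 10 synthetic identities

model = make_pipeline(FunctionTransformer(wavelet_features),
                      PCA(n_components=20),
                      SVC(kernel="rbf", C=10))
model.fit(X[:60], y[:60])
print("toy accuracy:", model.score(X[60:], y[60:]))
```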
Revisiting the PnP Problem: A Fast, General and Optimal Solution
In this paper, we revisit the classical perspective-n-point (PnP) problem, and propose the first non-iterative O(n) solution that is fast, generally applicable and globally optimal. Our basic idea is to formulate the PnP problem as a functional minimization problem and retrieve all its stationary points using the Gröbner basis technique. The novelty lies in a non-unit quaternion representation to parameterize the rotation and a simple but elegant formulation of the PnP problem as an unconstrained optimization problem. Interestingly, the polynomial system arising from its first-order optimality condition exhibits two-fold symmetry, a nice property that can be utilized to improve the speed and numerical stability of a Gröbner basis solver. Experimental results demonstrate that, in terms of accuracy, our proposed solution is clearly better than the state-of-the-art O(n) methods, and even comparable with the reprojection error minimization method.
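The problem setting is easy to reproduce with OpenCV's generic PnP interface, shown below with synthetic 3D-2D correspondences. Note that cv2.SOLVEPNP_EPNP is an existing O(n) solver used purely for illustration; it is not the Gröbner-basis solution proposed in the paper.

```python
# Sketch of the PnP setting: synthesise 3D-2D correspondences from a known
# pose, then recover the pose with an off-the-shelf O(n) solver (EPnP).
import cv2
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # camera intrinsics
obj = rng.uniform(-1, 1, (20, 3)).astype(np.float64)          # 3D points
rvec_true = np.array([0.1, -0.2, 0.05])                       # ground-truth rotation
tvec_true = np.array([0.3, -0.1, 4.0])                        # ground-truth translation

img, _ = cv2.projectPoints(obj, rvec_true, tvec_true, K, None)
img = img.reshape(-1, 2) + rng.normal(0, 0.5, (20, 2))        # add pixel noise

ok, rvec, tvec = cv2.solvePnP(obj, img, K, None, flags=cv2.SOLVEPNP_EPNP)
print(ok, rvec.ravel().round(3), tvec.ravel().round(3))       # close to ground truth
```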
Generative Adversarial Networks recover features in astrophysical images of galaxies beyond the deconvolution limit
Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon–Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original, with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.
Forecasting dynamic public transport Origin-Destination matrices with Long Short-Term Memory recurrent neural networks
A considerable number of studies have used smart card data to analyse urban mobility. Most of these studies aim to identify recurrent passenger habits, reveal mobility patterns, and reconstruct and predict passenger flows. Forecasting mobility demand is a central problem for public transport authorities and operators alike. It is the first step towards efficient allocation and optimisation of available resources. This paper explores an innovative approach to forecasting dynamic Origin-Destination (OD) matrices in a subway network using Long Short-Term Memory (LSTM) recurrent neural networks. A comparison with traditional approaches, such as the calendar methodology or Vector Autoregression, is conducted on a real smart card dataset from the public transport network of Rennes Métropole, France. The obtained results show that reliable short-term prediction (over a 15-minute time horizon) of OD pairs can be achieved with the proposed approach. We also study the effect of taking into account additional data about OD matrices of nearby transport systems (buses in this case) on the prediction accuracy.
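A minimal version of the forecasting setup can be sketched in PyTorch: flatten each OD matrix into a vector, feed a window of past time slots to an LSTM, and regress the next slot. The dimensions and random data below are placeholders, not the Rennes Métropole dataset or the paper's architecture.

```python
# Minimal PyTorch sketch of the forecasting setup (illustrative dimensions,
# random data in place of smart-card OD matrices): a sequence of flattened
# OD matrices is fed to an LSTM that predicts the next 15-minute OD matrix.
import torch
import torch.nn as nn

n_stations, seq_len, batch = 10, 8, 32
od_dim = n_stations * n_stations

class ODForecaster(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(od_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, od_dim)
    def forward(self, x):                 # x: (batch, seq_len, od_dim)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predict the next time slot

model = ODForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(batch, seq_len, od_dim)    # past OD matrices (stand-in data)
y = torch.rand(batch, od_dim)             # next-slot OD matrix to predict
for _ in range(5):                        # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(float(loss))
```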
Maintenance strategies for large offshore wind farms (DeepWind, 19-20 January 2012, Trondheim, Norway)
Up to one third of the total cost of energy from offshore wind generation is contributed by operation and maintenance (O&M). Compared to its onshore counterpart, this fraction is significantly higher. Costs are caused not only by spare parts and repair actions, but also by production losses due to downtime. The accessibility of a turbine in case of a failure is one main aspect affecting downtime. Therefore, a tool has been developed and implemented in MATLAB to simulate the operating phase of a wind farm, with special emphasis on the modeling of failures and repair. As an example application, a site at the UK east coast was chosen and a few distinct scenarios were considered. The results show how sensitively availability changes with respect to changes in the maintenance fleet and the maintenance scheduling strategy. A quantification of the potential cost savings due to an increase in availability is also given.
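The flavour of such a simulation can be conveyed with a very small Monte Carlo sketch in which turbines fail at random and repairs must wait for accessible weather. All rates, the access probability and the repair time below are invented, and the model ignores vessel fleets and scheduling strategies, so it only illustrates how availability falls out of failure and access statistics.

```python
# Monte Carlo sketch of an offshore wind farm availability model (all rates,
# weather probabilities and repair times are invented, not the paper's):
# turbines fail at random, repairs wait for an accessible weather window.
import random

def simulate_farm(n_turbines=50, days=365, fail_rate=3 / 365,
                  p_access=0.6, repair_days=2, seed=0):
    random.seed(seed)
    downtime = 0
    remaining = [0] * n_turbines        # repair days left per turbine (0 = running)
    for _ in range(days):
        accessible = random.random() < p_access   # e.g. wave height below limit
        for i in range(n_turbines):
            if remaining[i] == 0 and random.random() < fail_rate:
                remaining[i] = repair_days        # turbine fails today
            if remaining[i] > 0:
                downtime += 1
                if accessible:
                    remaining[i] -= 1             # maintenance crew can work
    return 1 - downtime / (n_turbines * days)

print("availability:", round(simulate_farm(), 3))
```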
Depression Assessment by Fusing High and Low Level Features from Audio, Video, and Text
Depression is a major cause of disability world-wide. The present paper reports on the results of our participation in the depression sub-challenge of the sixth Audio/Visual Emotion Challenge (AVEC 2016), which was designed to compare feature modalities (audio, visual, interview transcript-based) in gender-based and gender-independent modes using a variety of classification algorithms. In our approach, both high and low level features were assessed in each modality. Audio features were extracted from the low-level descriptors provided by the challenge organizers. Several visual features were extracted and assessed, including dynamic characteristics of facial elements (using Landmark Motion History Histograms and Landmark Motion Magnitude), global head motion, and eye blinks. These features were combined with statistically derived features from pre-extracted features (emotions, action units, gaze, and pose). Both speech rate and word-level semantic content were also evaluated. Classification results are reported using four different classification schemes: i) gender-based models for each individual modality, ii) the feature fusion model, iii) the decision fusion model, and iv) the posterior probability classification model. Proposed approaches outperforming the reference classification accuracy include the one utilizing statistical descriptors of low-level audio features. This approach achieved f1-scores of 0.59 for identifying depressed and 0.87 for identifying not-depressed individuals on the development set, and 0.52/0.81, respectively, on the test set.
Effect of bilateral subthalamic nucleus stimulation on parkinsonian gait
Clinical reports show that bilateral subthalamic nucleus (STN) stimulation is effective in improving parkinsonian gait. Quantitative analysis of the efficacy of STN stimulation on gait is of interest and can be carried out using a commercially available stride analyser. Ten parkinsonian patients (5 men, 5 women) with a mean age of 55.8 (SD 9.6) years were included in our study. They had a mean duration of Parkinson's disease (PD) of 13.3 (SD 4.5) years and a motor examination score (part III of the Unified Parkinson's Disease Rating Scale, UPDRS) of 43 (SD 13) in the off-stimulation, off-drug condition. All the patients had bilateral chronic STN stimulation which had started from 3 to 36 months before the study. Patients were evaluated in off-drug and on-drug conditions, both with and without stimulation. We analysed the principal gait measures: velocity, cadence, stride length, gait cycle, and duration of single and double limb support. The clinical parkinsonian signs were evaluated with part III of the UPDRS. In the off-drug condition, STN stimulation significantly (p < 0.05) improved velocity and stride length. The effect was similar to that of levodopa. When STN stimulation was switched on at the peak of the levodopa-induced effect, no further improvement was observed. The UPDRS motor score was significantly (p < 0.001) decreased after both stimulation and levodopa. In conclusion, STN stimulation is effective in improving parkinsonian gait.
Towards Efficient Heap Overflow Discovery
Heap overflow is a prevalent memory corruption vulnerability, playing an important role in recent attacks. Finding such vulnerabilities in applications is thus critical for security. Many state-of-the-art solutions focus on runtime detection, requiring abundant inputs to explore program paths in order to reach high code coverage and, with luck, trigger security violations. It is likely that the inputs being tested exercise vulnerable program paths but fail to trigger (and thus miss) vulnerabilities in those paths. Moreover, these solutions may also miss heap vulnerabilities due to incomplete vulnerability models. In this paper, we propose a new solution, HOTracer, to discover potential heap vulnerabilities. We model heap overflows as spatial inconsistencies between heap allocation and heap access operations, and perform an in-depth offline analysis on representative program execution traces to identify heap overflows. Combined with several optimizations, it can efficiently find heap overflows that are hard to trigger in binary programs. We implemented a prototype of HOTracer, evaluated it on 17 real-world applications, and found 47 previously unknown heap vulnerabilities, showing its effectiveness.
On-line Trend Analysis with Topic Models: #twitter Trends Detection Topic Model Online
We present a novel topic modelling-based methodology to track emerging events in microblogs such as Twitter. Our topic model has an in-built update mechanism based on time slices and implements a dynamic vocabulary. We first show that the method is robust in detecting events using a range of datasets with injected novel events, and then demonstrate its application in identifying trending topics in Twitter.
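For a sense of how incremental topic updates work in practice, the sketch below uses gensim's online LDA, updating the same model as a new time slice of documents arrives. Unlike the method above, it requires a fixed vocabulary built up front and has no time-slice-specific machinery; it is only an illustration of online updating with a standard library, not the paper's dynamic-vocabulary model.

```python
# Sketch of incremental topic updates over time slices using gensim's online
# LDA (a standard library model, not the paper's method); each "slice" of
# tweets updates the same topic model in place.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

slices = [
    [["match", "goal", "team"], ["goal", "win", "team"]],           # slice t
    [["earthquake", "quake", "help"], ["quake", "rescue", "help"]]  # slice t+1
]

dictionary = Dictionary(slices[0] + slices[1])   # fixed vocabulary (a simplification)
lda = LdaModel(corpus=[dictionary.doc2bow(d) for d in slices[0]],
               id2word=dictionary, num_topics=2, passes=5, random_state=0)

# a new time slice arrives: update the model incrementally
lda.update([dictionary.doc2bow(d) for d in slices[1]])
for topic_id, words in lda.show_topics(num_topics=2, num_words=3, formatted=False):
    print(topic_id, [w for w, _ in words])
```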