Detection and Inpainting of Facial Wrinkles Using Texture Orientation Fields and Markov Random Field Modeling
Facial retouching is widely used in the media and entertainment industry. Professional software usually requires a minimum level of user expertise to achieve the desired results. In this paper, we present an algorithm to detect facial wrinkles/imperfections. We believe that any such algorithm would be amenable to facial retouching applications. The detection of wrinkles/imperfections can allow these skin features to be processed differently than the surrounding skin without much user interaction. For detection, Gabor filter responses along with a texture orientation field are used as image features. A bimodal Gaussian mixture model (GMM) represents the distributions of Gabor features of normal skin versus skin imperfections. Then, a Markov random field model is used to incorporate the spatial relationships among neighboring pixels for their GMM distributions and texture orientations. An expectation-maximization algorithm then classifies skin versus skin wrinkles/imperfections. Once detected automatically, wrinkles/imperfections are removed completely instead of being blended or blurred. We propose an exemplar-based constrained texture synthesis algorithm to inpaint the irregularly shaped gaps left by the removal of detected wrinkles/imperfections. We present results on images downloaded from the Internet to show the efficacy of our algorithms.
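A minimal sketch of the detection idea described above, assuming a single Gabor magnitude feature per pixel and omitting the texture-orientation field, the Markov random field smoothing, and the inpainting stage; the filter frequency and the library choices (scikit-image, scikit-learn) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: fit a bimodal GMM to per-pixel Gabor responses and label
# pixels as "skin" vs. "wrinkle candidate". MRF smoothing is omitted.
import numpy as np
from skimage.filters import gabor              # assumed available
from sklearn.mixture import GaussianMixture

def detect_wrinkle_candidates(gray_image, frequency=0.2):
    # Gabor magnitude response as a single texture feature per pixel
    real, imag = gabor(gray_image, frequency=frequency)
    magnitude = np.hypot(real, imag).reshape(-1, 1)

    # Two-component (bimodal) GMM: one mode for normal skin, one for wrinkles
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(magnitude)

    # Assume the component with the larger mean response corresponds to wrinkles
    wrinkle_component = int(np.argmax(gmm.means_.ravel()))
    return (labels == wrinkle_component).reshape(gray_image.shape)
```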
A Single Generative Model for Joint Morphological Segmentation and Syntactic Parsing
Morphological processes in Semitic languages deliver space-delimited words which introduce multiple, distinct, syntactic units into the structure of the input sentence. These words are in turn highly ambiguous, breaking the assumption underlying most parsers that the yield of a tree for a given sentence is known in advance. Here we propose a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. Using a treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling technique our model outperforms previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
Design, Simulation and Fabrication of a Microstrip Bandpass Filter
This paper presents the design technique, simulation, fabrication, and comparison between measured and simulated results of a parallel-coupled microstrip BPF. The filter is designed and optimized at 2.44 GHz with a FBW of 3.42%. The first step in designing this filter is the approximate calculation of its lumped-component prototype. An admittance inverter is used to transform the lumped-component circuit into an equivalent form using microwave structures. After the required specifications are obtained, the filter structure is realized using the parallel-coupled technique. Simulation is carried out using ADS software. Next, optimization is performed to achieve low insertion loss and a selective skirt. The simulated filter is fabricated on an FR-4 substrate. Comparison between the simulated and measured results shows that they are approximately equal.
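As a small illustration of the bandwidth arithmetic behind such a design, the sketch below recovers the passband edges implied by the quoted center frequency and fractional bandwidth; it does not reproduce the coupled-line synthesis or the ADS optimization.

```python
# Minimal sketch of the bandwidth arithmetic (values from the abstract).
import math

f0 = 2.44e9        # design center frequency in Hz
fbw = 0.0342       # fractional bandwidth (3.42%)

# For a bandpass response, FBW = (f2 - f1) / f0 with f0 = sqrt(f1 * f2).
delta = fbw * f0                                      # absolute bandwidth f2 - f1
f1 = (-delta + math.sqrt(delta**2 + 4 * f0**2)) / 2   # lower passband edge
f2 = f1 + delta                                       # upper passband edge
print(f"passband edges: {f1/1e9:.3f} GHz to {f2/1e9:.3f} GHz")
```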
Personality measurement, faking, and employment selection.
Real job applicants completed a 5-factor model personality measure as part of the job application process. They were rejected; 6 months later they (n = 5,266) reapplied for the same job and completed the same personality measure. Results indicated that 5.2% or fewer improved their scores on any scale on the 2nd occasion; moreover, scale scores were as likely to change in the negative direction as the positive. Only 3 applicants changed scores on all 5 scales beyond a 95% confidence threshold. Construct validity of the personality scales remained intact across the 2 administrations, and the same structural model provided an acceptable fit to the scale score matrix on both occasions. For the small number of applicants whose scores changed beyond the standard error of measurement, the authors found the changes were systematic and predictable using measures of social skill, social desirability, and integrity. Results suggest that faking on personality measures is not a significant problem in real-world selection settings.
Analysis and Design of an Input-Series Two-Transistor Forward Converter For High-Input Voltage Multiple-Output Applications
In this paper, an input-series two-transistor forward converter aimed at high-input-voltage multiple-output applications is proposed and investigated. In this converter, all of the switches operate synchronously, and input voltage sharing (IVS) among the series-modules is achieved automatically by the coupling of the primary windings of the common forward integrated transformer. The active IVS processes are analyzed based on the model of the forward integrated transformer. Based on an analysis of the influence of mismatches among the series-modules, design principles for the key parameters of each series-module are discussed to suppress the input-voltage difference. Finally, a 96-W laboratory prototype composed of two forward series-modules is built, and the feasibility of the proposed method and the theoretical analysis are verified by the experimental results.
Responses of Gut Microbiota and Glucose and Lipid Metabolism to Prebiotics in Genetic Obese and Diet-Induced Leptin-Resistant Mice
OBJECTIVE To perform a deep and comprehensive analysis of gut microbial communities and biological parameters after prebiotic administration in obese and diabetic mice. RESEARCH DESIGN AND METHODS Genetic (ob/ob) or diet-induced obese and diabetic mice were chronically fed with a prebiotic-enriched diet or with a control diet. Extensive gut microbiota analyses, including quantitative PCR, pyrosequencing of the 16S rRNA, and phylogenetic microarrays, were performed in ob/ob mice. The impact of gut microbiota modulation on leptin sensitivity was investigated in diet-induced leptin-resistant mice. Metabolic parameters, gene expression, glucose homeostasis, and enteroendocrine-related L-cell function were documented in both models. RESULTS In ob/ob mice, prebiotic feeding decreased Firmicutes and increased Bacteroidetes phyla, but also changed 102 distinct taxa, 16 of which displayed a >10-fold change in abundance. In addition, prebiotics improved glucose tolerance, increased L-cell number and associated parameters (intestinal proglucagon mRNA expression and plasma glucagon-like peptide-1 levels), and reduced fat-mass development, oxidative stress, and low-grade inflammation. In high fat-fed mice, prebiotic treatment improved leptin sensitivity as well as metabolic parameters. CONCLUSIONS We conclude that specific gut microbiota modulation improves glucose homeostasis, leptin sensitivity, and target enteroendocrine cell activity in obese and diabetic mice. By profiling the gut microbiota, we identified a catalog of putative bacterial targets that may affect host metabolism in obesity and diabetes.
The GDPR and Big Data: Leading the Way for Big Genetic Data?
Genetic data as a category of personal data creates a number of challenges to the traditional understanding of personal data and the rules regarding personal data processing. Although the peculiarities of and heightened risks regarding genetic data processing were recognized long before the data protection reform in the EU, the General Data Protection Regulation (GDPR) seems to pay no regard to this. Furthermore, the GDPR will create more legal grounds for (sensitive) personal data (incl. genetic data) processing whilst restricting data subjects’ means of control over their personal data. One of the reasons for this is that, amongst other aims, the personal data reform served to promote big data business in the EU. The substantive clauses of the GDPR concerning big data, however, do not differentiate between the types of personal data being processed. Hence, like all other categories of personal data, genetic data is subject to the big data clauses of the GDPR as well; thus leading to the question of whether the GDPR is creating a pathway for ‘big genetic data’. This paper aims to analyse the implications that the role of the GDPR as a big data enabler bears on genetic data processing and the respective rights of the data subjects.
Statins in low doses reduce VEGF and bFGF serum levels in patients with type 2 diabetes mellitus.
BACKGROUND/AIMS Recent experimental research revealed that statins at low doses induce angiogenesis, which in turn may be related to the course of atherosclerosis. There are no clinical studies evaluating the effect of 'low-dose' statins on serum levels of angiogenesis regulators in diabetic subjects. We aimed to explain how low doses of statins modify the serum concentrations of two potent proangiogenic factors, vascular endothelial growth factor (VEGF) and basic fibroblast growth factor (bFGF), in patients with type 2 diabetes. METHODS Measurements of fasting glucose level, HbA1c, 1,5-anhydro-D-glucitol and lipid profile were taken from 47 patients with type 2 diabetes treated with low doses of atorvastatin (10 mg daily) or simvastatin (10-20 mg daily), from 45 statin-free patients with type 2 diabetes and from 23 nondiabetic subjects. Measurements of VEGF and bFGF in serum were taken using the BD™ Cytometric Bead Array. RESULTS AND CONCLUSION Statins used in low doses in patients with type 2 diabetes reduce the serum concentration of VEGF and bFGF which suggests antiangiogenic potential of these doses. Nevertheless, this effect could be neutralized by postprandial hyperglycemia.
A Non-Parametric Factor Microfacet Model for Isotropic BRDFs
We investigate the expressiveness of the microfacet model for isotropic bidirectional reflectance distribution functions (BRDFs) measured from real materials by introducing a non-parametric factor model that represents the model’s functional structure but abandons restricted parametric formulations of its factors. We propose a new objective based on compressive weighting that controls rendering error in high-dynamic-range BRDF fits better than previous factorization approaches. We develop a simple numerical procedure to minimize this objective and handle dependencies that arise between microfacet factors. Our method faithfully captures a more comprehensive set of materials than previous state-of-the-art parametric approaches yet remains compact (3.2 KB per BRDF). We experimentally validate the benefit of the microfacet model over a naïve orthogonal factorization and show that fidelity for diffuse materials is modestly improved by fitting an unrestricted shadowing/masking factor. We also compare against a recent data-driven factorization approach [Bilgili et al. 2011] and show that our microfacet-based representation improves rendering accuracy for most materials while reducing storage by more than 10×.
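To make the functional structure concrete, here is a hedged sketch of evaluating a microfacet BRDF whose distribution and shadowing/masking factors are stored as tabulated (non-parametric) 1D curves; the table parameterization, the Schlick Fresnel term, and the resolution are illustrative assumptions rather than the paper's fitted representation.

```python
# Hedged sketch: microfacet BRDF with non-parametric (tabulated) D and G factors.
import numpy as np

def eval_microfacet(wi, wo, n, D_table, G_table, fresnel_f0=0.04):
    wi, wo, n = (v / np.linalg.norm(v) for v in (wi, wo, n))
    h = wi + wo
    h /= np.linalg.norm(h)

    cos_i, cos_o = float(n @ wi), float(n @ wo)
    if cos_i <= 0 or cos_o <= 0:
        return 0.0

    def lookup(table, cos_angle):
        # tables are assumed sampled uniformly in angle on [0, pi/2]
        idx = int(np.clip(np.arccos(np.clip(cos_angle, 0, 1)) / (np.pi / 2)
                          * (len(table) - 1), 0, len(table) - 1))
        return table[idx]

    D = lookup(D_table, float(n @ h))                     # normal distribution factor
    G = lookup(G_table, cos_i) * lookup(G_table, cos_o)   # shadowing/masking
    F = fresnel_f0 + (1 - fresnel_f0) * (1 - float(h @ wi)) ** 5  # Schlick Fresnel

    return D * G * F / (4 * cos_i * cos_o)
```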
Efficiency of direct and indirect shoot organogenesis, molecular profiling, secondary metabolite production and antioxidant activity of micropropagated Ceropegia santapaui
Ceropegias have acquired significant importance due to their medicinal properties, edible tubers, and ornamental flowers. The aim of this study was to optimize direct shoot organogenesis (DSO), indirect shoot organogenesis (ISO), and plant regeneration of the threatened medicinal plant Ceropegia santapaui, followed by analysis of the genetic status and biochemical characterization of micropropagated plantlets. For optimization, cotyledonary nodes and cotyledons were used as sources of explants in DSO and ISO, respectively. The highest frequency of regeneration (88.0 %) for DSO with 8.1 ± 0.6 shoots per explant was obtained from cotyledonary nodes cultured on Murashige and Skoog’s (MS) medium containing 2.0 mg L−1 2iP. The best response for callus induction and proliferation was achieved with 1.5 mg L−1 PR (picloram), in which 97.5 % of cultures produced an average of 913 ± 10.9 mg (fresh weight) of callus. The highest frequency of shoot formation (92.5 %) with an average of 19.7 ± 0.3 shoots in ISO was obtained when calli were transferred to MS medium supplemented with 2.5 mg L−1 BAP and 0.4 mg L−1 IBA. Regenerated shoots were best rooted in half-strength MS medium with 2.0 mg L−1 NAA. Plantlets successfully acclimatized were morphologically indistinguishable from the source plant. Micropropagated plantlets subjected to random amplified polymorphic DNA and inter simple sequence repeat (ISSR) marker-based profiling revealed a uniform banding pattern in DSO-derived plantlets, which was similar to the mother plant. ISSR fingerprints of ISO-derived plants showed low variation. Method of regeneration, plant part, and solvent system significantly affected the levels of total phenolics, flavonoids, and antioxidant capacity. Assay of antioxidant activity of different tissues revealed that significantly higher antioxidant activity was observed in ISO-derived tissues than in DSO-derived and mother tissues. RP-HPLC analysis of micropropagated plantlets showed the presence of three major phenolic compounds which were similar to those detected in the mother plant. The rapid multiplication rate, genetic stability, and biochemical parameters ensure the efficacy of the protocol developed for the propagation of this threatened medicinal plant.
A Review on the Computational Methods for Emotional State Estimation from the Human EEG
A growing number of affective computing studies have recently developed computer systems that can recognize the emotional state of the human user to establish affective human-computer interactions. Various measures have been used to estimate emotional states, including self-report, startle response, behavioral response, autonomic measurement, and neurophysiologic measurement. Among them, inferring emotional states from electroencephalography (EEG) has received considerable attention, as EEG could directly reflect emotional states with relatively low cost and simplicity. Yet, EEG-based emotional state estimation requires well-designed computational methods to extract information from complex and noisy multichannel EEG data. In this paper, we review the computational methods that have been developed to derive EEG indices of emotion, to extract emotion-related features, or to classify EEG signals into one of many emotional states. We also propose using sequential Bayesian inference to estimate the continuous emotional state in real time. We present current challenges for building an EEG-based emotion recognition system and suggest some future directions.
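As a concrete instance of the sequential Bayesian inference mentioned above, the following sketch applies one linear-Gaussian (Kalman) predict-update step to track a continuous emotional state (e.g., valence/arousal) from EEG-derived features; the state-space matrices are assumptions that would in practice have to be learned from labeled data.

```python
# Minimal sketch: one Kalman predict-update step for a continuous emotion state.
import numpy as np

def kalman_step(x, P, z, A, Q, H, R):
    # Predict the next emotional state and its uncertainty
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the new EEG feature observation z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```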
Three-dimensional evaluation of soft tissue change gradients after mandibular setback surgery in skeletal Class III malocclusion.
OBJECTIVE To evaluate whether mandibular setback surgery (MSS) for Class III patients would produce gradients of three-dimensional (3D) soft tissue changes in the vertical and transverse aspects. MATERIALS AND METHODS The samples consisted of 26 Class III patients treated with MSS using bilateral sagittal split ramus osteotomy. Lateral cephalograms and 3D facial scan images were taken before and 6 months after MSS, and changes in landmarks and variables were measured using a Rapidform 2006. Paired and independent t-tests were performed for statistical analysis. RESULTS Landmarks in the upper lip and mouth corner (cheilion, Ch) moved backward and downward (respectively, cupid bow point, 1.0 mm and 0.3 mm, P < .001 and P < .01; alar curvature-Ch midpoint, 0.6 mm and 0.3 mm, both P < .001; Ch, 3.4 mm and 0.8 mm, both P < .001). However, landmarks in stomion (Stm), lower lip, and chin moved backward (Stm, 1.6 mm; labrale inferius [Li], 6.9 mm; LLBP, 6.9 mm; B', 6.7 mm; Pog', 6.7 mm; Me', 6.6 mm; P < .001, respectively). Width and height of upper and lower lip were not altered significantly except for a decrease of lower vermilion height (Stm-Li, 1.7 mm, P < .001). Chin height (B'-Me') was decreased because of backward and upward movement of Me' (3.1 mm, P < .001). Although upper lip projection angle and Stm-transverse projection angle became acute (Ch(Rt)-Ls-Ch(Lt), 5.7 degrees; Ch(Rt)-Stm-Ch(Lt), 6.4 degrees, both P < .001) because of the greater backward movement of Ch than Stm, lower lip projection angle and Stm-vertical projection angle became obtuse (Ch(Rt)-Li-Ch(Lt), 10.8 degrees ; Ls-Stm-Li, 23.5 degrees , both P < .001) because of the larger backward movement of Li than labrale superius (Ls). CONCLUSIONS Three-dimensional soft tissue changes in Class III patients after MSS exhibited increased gradients from upper lip and lower lip to chin as well as from Stm to Ch.
MRL — Memristor Ratioed Logic
Memristive devices are novel structures, developed primarily as memory. Another interesting application for memristive devices is logic circuits. In this paper, MRL (Memristor Ratioed Logic) - a hybrid CMOS-memristive logic family - is described. In this logic family, OR and AND logic gates are based on memristive devices, and CMOS inverters are added to provide a complete logic structure and signal restoration. Unlike previously published memristive-based logic families, the MRL family is compatible with standard CMOS logic. A case study of an eight-bit full adder is presented and related design considerations are discussed.
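A hedged, circuit-level sketch of the ratioed-logic intuition: two memristors tied to the inputs form a voltage divider at the output node, and which input level dominates (max for OR, min for AND) depends on device polarity. The resistance values and the steady-state assumption are illustrative; this is not a device-accurate model of the MRL family.

```python
# Hedged sketch of the resistive-divider behavior behind an MRL gate (not a
# SPICE-level device model). When the inputs differ, one memristor is driven
# toward its low-resistance state (r_on) and the other toward its
# high-resistance state (r_off); the output settles near one input level.
def mrl_gate_output(v1, v2, r_on=100.0, r_off=100e3, gate="OR"):
    if v1 == v2:                      # no current flows; output follows the inputs
        return v1
    hi, lo = max(v1, v2), min(v1, v2)
    # Assumed polarity: for the OR orientation the device on the higher input
    # ends up at r_on; for AND the roles are reversed.
    r_hi, r_lo = (r_on, r_off) if gate == "OR" else (r_off, r_on)
    g_hi, g_lo = 1.0 / r_hi, 1.0 / r_lo
    return (g_hi * hi + g_lo * lo) / (g_hi + g_lo)   # voltage divider

# Example with 0 V / 1 V logic levels
print(mrl_gate_output(0.0, 1.0, gate="OR"))   # close to 1 V
print(mrl_gate_output(0.0, 1.0, gate="AND"))  # close to 0 V
```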
Bitcoin: Benefit or Curse?
The new world of mobile devices offers reasonable likelihood that virtual currency will prevail on a global scale. Currently, the bitcoin crypto-currency model appears to be a forerunner. Bitcoin, a highly disruptive technology, has both supporters and detractors. Nonetheless, in concert with other trends, some form of virtual currency, even if a successor to bitcoin, appears to have a path forward. Virtual currencies will likely gain in stature as other novel, unspecified, and disruptive innovations take hold in a world of increasingly autonomous systems. This department is part of a special issue on mobile commerce.
A COLLECTIVE ACTION MODEL OF INSTITUTIONAL INNOVATION
We introduce a collective action model of institutional innovation. This model, based on converging perspectives from the technology innovation management and social movements literature, views institutional change as a dialectical process in which partisan actors espousing conflicting views confront each other and engage in political behaviors to create and change institutions. The model represents an important complement to existing models of institutional change. We discuss how these models together account for various stages and cycles of institutional change.
Mediation Analysis in Social Psychology : Current Practices and New Recommendations
A key aim of social psychology is to understand the psychological processes through which independent variables affect dependent variables in the social domain. This objective has given rise to statistical methods for mediation analysis. In mediation analysis, the significance of the relationship between the independent and dependent variables has been integral in theory testing, being used as a basis to determine (1) whether to proceed with analyses of mediation and (2) whether one or several proposed mediator(s) fully or partially accounts for an effect. Synthesizing past research and offering new arguments, we suggest that the collective evidence raises considerable concern that the focus on the significance of the relationship between the independent and dependent variables, both before and after mediation tests, is unjustified and can impair theory development and testing. To expand theory involving social psychological processes, we argue that attention in mediation analysis should be shifted towards assessing the magnitude and significance of indirect effects. Understanding the psychological processes by which independent variables affect dependent variables in the social domain has long been of interest to social psychologists. Although moderation approaches can test competing psychological mechanisms (e.g., Petty, 2006; Spencer, Zanna, & Fong, 2005), mediation is typically the standard for testing theories regarding process (e.g., Baron & Kenny, 1986; James & Brett, 1984; Judd & Kenny, 1981; MacKinnon, 2008; MacKinnon, Lockwood, Hoffman, West, & Sheets, 2002; Muller, Judd, & Yzerbyt, 2005; Preacher & Hayes, 2004; Preacher, Rucker, & Hayes, 2007; Shrout & Bolger, 2002). For example, dual process models of persuasion (e.g., Petty & Cacioppo, 1986) often distinguish among competing accounts by measuring the postulated underlying process (e.g., thought favorability, thought confidence) and examining their viability as mediators (Tormala, Briñol, & Petty, 2007). Thus, deciding on appropriate requirements for mediation is vital to theory development. Supporting the high status of mediation analysis in our field, MacKinnon, Fairchild, and Fritz (2007) report that research in social psychology accounts for 34% of all mediation tests in psychology more generally. In our own analysis of journal articles published from 2005 to 2009, we found that approximately 59% of articles in the Journal of Personality and Social Psychology (JPSP) and 65% of articles in Personality and Social Psychology Bulletin (PSPB) included at least one mediation test. Consistent with the observations of MacKinnon et al., we found that the bulk of these analyses continue to follow the causal steps approach outlined by Baron and Kenny (1986). The current article examines the viability of the causal steps approach, in which the significance of the relationship between an independent variable (X) and a dependent variable (Y) is tested both before and after controlling for a mediator (M) in order to examine the validity of a theory specifying mediation. Traditionally, the X → Y relationship is tested prior to mediation to determine whether there is an effect to mediate, and it is also tested after introducing a potential mediator to determine whether that mediator fully or partially accounts for the effect.
At first glance, the requirement of a significant X → Y association prior to examining mediation seems reasonable. If there is no significant X → Y relationship, how can there be any mediation of it? Furthermore, the requirement that X → Y become nonsignificant when controlling for the mediator seems sensible in order to claim ‘full mediation’. What is the point of hypothesizing or testing for additional mediators if the inclusion of one mediator renders the initial relationship indistinguishable from zero? Despite the intuitive appeal of these requirements, the present article raises serious concerns about their use.
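The following sketch illustrates the recommended shift in focus: estimating the magnitude and a bootstrap confidence interval for the indirect effect a·b directly, without gating the analysis on the X → Y total effect. The simple OLS setup and variable names are illustrative, not drawn from any particular study discussed in the article.

```python
# Minimal sketch: percentile-bootstrap estimate of the indirect effect a*b.
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        xb, mb, yb = x[idx], m[idx], y[idx]
        # a-path: regress M on X
        a = np.polyfit(xb, mb, 1)[0]
        # b-path: regress Y on M controlling for X (multiple regression)
        X = np.column_stack([np.ones(n), mb, xb])
        b = np.linalg.lstsq(X, yb, rcond=None)[0][1]
        estimates[i] = a * b
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return estimates.mean(), (lo, hi)   # "significant" if the CI excludes zero
```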
Do elderly people prefer a conversational humanoid as a shopping assistant partner in supermarkets?
Assistive robots can be perceived in two main ways: tools or partners. In past research, assistive robots that offer physical assistance for the elderly are often designed in the context of a tool metaphor. This paper investigates the effect of two design considerations for assistive robots in a partner metaphor: conversation and robot-type. The former factor is concerned with whether robots should converse with people even if the conversation is not germane for completing the task. The latter factor is concerned with whether people prefer a communication/function oriented design for assistive robots. To test these design considerations, we selected a shopping assistance situation where a robot carries a shopping basket for elderly people, which is one typical scenario used for assistive robots. A field experiment was conducted in a real supermarket in Japan where 24 elderly participants shopped with robots. The experimental results revealed that they prefer a conversational humanoid as a shopping assistant partner.
Naturalistic Observations of Peer Interventions in Bullying
This study examined peer intervention in bullying using naturalistic observations on school playgrounds. The sample comprised 58 children (37 boys and 21 girls) in Grades 1 to 6 who were observed to intervene in bullying. Peers were present during 88% of bullying episodes and intervened in 19%. In 47% of the episodes, peers intervened aggressively. Interventions directed toward the bully were more likely to be aggressive, whereas interventions directed toward the victim or the bully-victim dyad were more likely to be nonaggressive. The majority (57%) of interventions were effective in stopping bullying. Boys were more likely to intervene when the bully and victim were male and girls when the bully and victim were female. The implications for antibullying interventions are discussed.
VHDL Implementation Of Reconfigurable Crossbar Switch For Binoc Router
Network-on-Chip (NoC) is the interconnection platform that answers the requirements of modern on-chip design. Small optimizations in NoC router architecture can show a significant improvement in the overall performance of NoC-based systems. Power consumption, area overhead, and overall NoC performance are influenced by the router crossbar switch. This paper presents the implementation of a 10x10 reconfigurable crossbar switch (RCS) architecture for the dynamic self-reconfigurable BiNoC architecture for Network-on-Chip. Its main purpose is to increase performance and flexibility. This paper presents a VHDL-based, cycle-accurate register-transfer-level model for evaluating the power and area of the reconfigurable crossbar switch in BiNoC architectures. We implemented a parameterized register-transfer-level design of the reconfigurable crossbar switch (RCS) architecture. The design is parameterized on (i) size of packets, (ii) length and width of physical links, (iii) number and depth of arbiters, and (iv) switching technique. The paper discusses in detail the architecture and characterization of the various reconfigurable crossbar switch (RCS) architecture components. The characterized values were integrated into the VHDL-based RTL design to build the cycle-accurate performance model. In this paper we show the results for a simple 4x4 as well as a 10x10 crossbar switch. The results include VHDL simulation of the RCS on the ModelSim tool for the 4x4 crossbar switch and the Xilinx ISE 13.1 software tool for the 10x10 crossbar switch.
Real-Time High Resolution 3D Data on the HoloLens
The recent appearance of augmented reality headsets, such as the Microsoft HoloLens, is a marked move from traditional 2D screens to 3D hologram-like interfaces. Striving to be completely portable, these devices unfortunately suffer from multiple limitations, such as the lack of real-time, high-quality depth data, which severely restricts their use as research tools. To mitigate this restriction, we provide a simple method to augment a HoloLens headset with much higher resolution depth data. To do so, we calibrate an external depth sensor connected to a computer stick that communicates with the HoloLens headset in real time. To show how this system could be useful to the research community, we present an implementation of small object detection on the HoloLens device.
Familiarity with, understanding of, and attitudes toward epilepsy among people with epilepsy and healthy controls in South Korea
This study identifies differences between people with epilepsy (PWE) and healthy controls in South Korea with respect to their familiarity with, understanding of, and attitudes toward epilepsy. PWE and controls older than 18 years of age were recruited from outpatient clinics and health promotion centers, respectively, associated with five university hospitals located throughout the country. Structured questionnaires consisting of 18 items were administered in face-to-face interviews. The sample consisted of 1924 participants (PWE: 384, controls: 1540). The groups did not differ with respect to age, sex, and place of residence. However, the groups did differ significantly in educational, marital, and occupational status (P=0.000). Familiarity with seizures and epilepsy (two items) did not differ significantly between the groups. Questions pertaining to understanding seizures and epilepsy (seven items) showed that controls had significantly greater misunderstanding of the etiology and long-term prognosis of epilepsy compared with PWE. Attitudes expressed toward PWE were significantly different in response to six of seven questions. Control subjects expressed more negative attitudes toward PWE than did PWE themselves, particularly concerning potential relationships with their children (e.g., friendships, marriage). In conclusion, we found significant differences between PWE and controls, particularly with respect to understanding of and attitudes toward epilepsy. We recommend the development of different strategies for PWE and controls to improve understanding of and attitudes toward epilepsy and to reduce the knowledge gap between these groups. Nationwide educational programs conducted by associated organizations and the government may provide the solution to this problem.
Driver Gaze Region Estimation Without Using Eye Movement
Briquetting agricultural waste as an energy source in Ghana
This article presents biomass briquettes as an alternative to the wood charcoal used in Ghana. Agricultural wastes converted into charcoal briquettes can provide a much-needed source of cheap fuel that burns more cleanly. The article is also intended to create awareness of agricultural-waste briquetting technology in Ghana and to encourage its adoption by small-scale entrepreneurs. It further explores the benefits Ghana can achieve by using agricultural residue as a substitute for wood fuel. Agricultural residue includes all leaves, straw, and husks left in the field after harvest, as well as hulls and shells removed during processing of the crop at the mills.
MRI-PET registration with automated algorithm.
OBJECTIVE We have previously reported an automated method for within-modality (e.g., PET-to-PET) image alignment. We now describe modifications to this method that allow for cross-modality registration of MRI and PET brain images obtained from a single subject. METHODS This method does not require fiducial markers and the user is not required to identify common structures on the two image sets. To align the images, the algorithm seeks to minimize the standard deviation of the PET pixel values that correspond to each MRI pixel value. The MR images must be edited to exclude nonbrain regions prior to using the algorithm. RESULTS AND CONCLUSION The method has been validated quantitatively using data from patients with stereotaxic fiducial markers rigidly fixed in the skull. Maximal three-dimensional errors of < 3 mm and mean three-dimensional errors of < 2 mm were measured. Computation time on a SPARCstation IPX varies from 3 to 9 min to align MR image sets with [18F]fluorodeoxyglucose PET images. The MR alignment with noisy H2(15)O PET images typically requires 20-30 min.
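A hedged sketch of the alignment criterion described above: for co-located voxel samples under a candidate rigid transform, partition voxels by MRI intensity and penalize the spread of the corresponding PET values. The binning scheme and normalization are assumptions; the transform and resampling machinery are omitted.

```python
# Hedged sketch: partitioned-intensity-uniformity style cost for MRI-PET alignment.
import numpy as np

def pet_uniformity_cost(mri_vals, pet_vals, n_bins=256):
    # mri_vals, pet_vals: 1D arrays of co-located brain voxel intensities
    edges = np.linspace(mri_vals.min(), mri_vals.max(), n_bins)
    bins = np.digitize(mri_vals, edges)
    cost, total = 0.0, len(pet_vals)
    for b in np.unique(bins):
        pet_in_bin = pet_vals[bins == b]
        if len(pet_in_bin) > 1:
            # weight each MRI-intensity bin's PET spread by its share of voxels
            cost += (len(pet_in_bin) / total) * pet_in_bin.std() / (pet_in_bin.mean() + 1e-9)
    return cost   # lower is better; minimize over the rigid-body parameters
```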
Coordinate Noun Phrase Disambiguation in a Generative Parsing Model
In this paper we present methods for improving the disambiguation of noun phrase (NP) coordination within the framework of a lexicalised history-based parsing model. As well as reducing noise in the data, we look at modelling two main sources of information for disambiguation: symmetry in conjunct structure, and the dependency between conjunct lexical heads. Our changes to the baseline model result in an increase in NP coordination dependency f-score from 69.9% to 73.8%, which represents a relative reduction in f-score error of 13%.
The Inverse Mean Curvature Flow and the Riemannian Penrose Inequality
Let M be an asymptotically flat 3-manifold of nonnegative scalar curvature. The Riemannian Penrose Inequality states that the area of an outermost minimal surface N in M is bounded by the ADM mass m according to the formula |N| ≤ 16πm². We develop a theory of weak solutions of the inverse mean curvature flow, and employ it to prove this inequality for each connected component of N using Geroch’s monotonicity formula for the ADM mass. Our method also proves positivity of Bartnik’s gravitational capacity by computing a positive lower bound for the mass purely in terms of local geometry. 0. Introduction. In this paper we develop the theory of weak solutions for the inverse mean curvature flow of hypersurfaces in a Riemannian manifold, and apply it to prove the Riemannian Penrose Inequality for a connected horizon, to wit: the total mass of an asymptotically flat 3-manifold of nonnegative scalar curvature is bounded below in terms of the area of each smooth, compact, connected, “outermost” minimal surface in the 3-manifold. A minimal surface is called outermost if it is not separated from infinity by any other compact minimal surface. The result was announced in [51].
There's something about MRAI: Timing diversity can exponentially worsen BGP convergence
To better support interactive applications, individual network operators are decreasing the timers that affect BGP convergence, leading to greater diversity in the timer settings across the Internet. While decreasing timers is intended to improve routing convergence, we show that, ironically, the resulting timer heterogeneity can make routing convergence substantially worse. We examine the widely-used Min Route Advertisement Interval (MRAI) timer that rate-limits update messages to reduce router overhead. We show that, while routing systems with homogeneous MRAI timers have linear convergence time, diverse MRAIs can cause exponential increases in both the number of BGP messages and the convergence time (as measured in “activations”). We prove tight upper bounds on these metrics in terms of MRAI timer diversity in general dispute-wheel-free networks and economically sensible (Gao-Rexford) settings. We also demonstrate significant impacts on the data plane: blackholes sometimes last throughout the route-convergence process, and forwarding changes, at best, are only polynomially less frequent than routing changes. We show that these problems vanish in contiguous regions of the Internet with homogeneous MRAIs or with next-hop-based routing policies, suggesting practical strategies for mitigating the problem, especially when all routers are administered by one institution.
Significant inverse associations of serum n-6 fatty acids with plasma plasminogen activator inhibitor-1.
Epidemiological studies suggested that n-6 fatty acids, especially linoleic acid (LA), have beneficial effects on CHD, whereas some in vitro studies have suggested that n-6 fatty acids, specifically arachidonic acid (AA), may have harmful effects. We examined the association of serum n-6 fatty acids with plasminogen activator inhibitor-1 (PAI-1). A population-based cross-sectional study recruited 926 randomly selected men aged 40-49 years without CVD during 2002-2006 (310 Caucasian, 313 Japanese and 303 Japanese-American men). Plasma PAI-1 was analysed in free form, both active and latent. Serum fatty acids were measured with gas-capillary liquid chromatography. To examine the association between total n-6 fatty acids (including LA and AA) and PAI-1, multivariate regression models were used. After adjusting for confounders, total n-6 fatty acids, LA and AA, were inversely and significantly associated with PAI-1 levels. These associations were consistent across three populations. Among 915 middle-aged men, serum n-6 fatty acids had significant inverse associations with PAI-1.
A Framework for the Analysis of Unevenly Spaced Time Series Data
This paper presents methods for analyzing and manipulating unevenly spaced time series without a transformation to equally spaced data. Processing and analyzing such data in its unaltered form avoids the biases and information loss caused by resampling. Care is taken to develop a framework consistent with a traditional analysis of equally spaced data, as in Brockwell and Davis (1991), Hamilton (1994) and Box, Jenkins, and Reinsel (2004).
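As a small example of operating directly on unevenly spaced data, the sketch below computes an exponential moving average whose decay depends on the actual gap between observations, avoiding any resampling to a regular grid; it is one plausible operator in such a framework, not code from the paper.

```python
# Minimal sketch: exponential moving average on unevenly spaced observations.
import numpy as np

def ema_unevenly_spaced(times, values, tau):
    """times: strictly increasing sample times; tau: decay time constant (same units)."""
    out = np.empty(len(values))
    out[0] = values[0]
    for i in range(1, len(values)):
        w = np.exp(-(times[i] - times[i - 1]) / tau)   # weight shrinks with the gap
        out[i] = w * out[i - 1] + (1.0 - w) * values[i]
    return out
```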
Analysis of a bounding box heuristic for object intersection
Bounding boxes are commonly used in computer graphics and other fields to improve the performance of algorithms that should process only the intersecting objects. A bounding-box-based heuristic avoids unnecessary intersection processing by eliminating the pairs whose bounding boxes are disjoint. Empirical evidence suggests that the heuristic works well in many practical applications, although its worst-case performance can be bad for certain pathological inputs. What constitutes a pathological input, however, is not well understood, and consequently there is no guarantee that the heuristic will always work well in a specific application. In this paper, we analyze the performance of the bounding box heuristic in terms of two natural shape parameters, aspect ratio and scale factor. These parameters can be used to realistically measure the degree to which objects are pathologically shaped. We derive tight worst-case bounds on the performance of the bounding box heuristic. One of the significant contributions of our paper is that we only require that objects be well shaped on average. Somewhat surprisingly, the bounds are significantly different from the case when all objects are well shaped.
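A minimal sketch of the heuristic being analyzed: cull object pairs whose axis-aligned bounding boxes are disjoint and run the exact intersection test only on the remaining pairs. The quadratic pair loop and the exact_test placeholder are illustrative simplifications.

```python
# Minimal sketch of the bounding box heuristic (2D axis-aligned boxes).
def boxes_overlap(a, b):
    # a, b: (xmin, ymin, xmax, ymax)
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def intersecting_pairs(objects, bbox_of, exact_test):
    boxes = [bbox_of(o) for o in objects]
    pairs = []
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            # cheap box test first; expensive exact test only on survivors
            if boxes_overlap(boxes[i], boxes[j]) and exact_test(objects[i], objects[j]):
                pairs.append((i, j))
    return pairs
```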
A Comparative Study of Stemming Algorithms
Stemming is a pre-processing step in text mining applications as well as a very common requirement of natural language processing functions. In fact, it is very important in most information retrieval systems. The main purpose of stemming is to reduce the different grammatical forms/word forms of a word, such as its noun, adjective, verb, and adverb forms, to its root form. We can say that the goal of stemming is to reduce inflectional forms and sometimes derivationally related forms of a word to a common base form. In this paper we discuss different methods of stemming and compare them in terms of usage, advantages, and limitations. The basic difference between stemming and lemmatization is also discussed. Keywords: stemming; text mining; NLP; IR; suffix
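A deliberately tiny rule-based (suffix-stripping) stemmer is sketched below to make the idea concrete; real stemmers such as Porter's apply ordered rule sets with additional conditions on the remaining stem, so this is an illustration rather than any of the algorithms compared in the paper.

```python
# Toy suffix-stripping stemmer: strip the first matching suffix if a stem
# of at least three characters remains. Output need not be a dictionary word.
def simple_stem(word):
    for suffix in ("ational", "ization", "fulness", "ingly", "edly",
                   "ing", "ed", "ly", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

print(simple_stem("connections"))  # -> "connection"
print(simple_stem("running"))      # -> "runn" (stems need not be real words)
```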
Being Accurate Is Not Enough: New Metrics for Disk Failure Prediction
Traditionally, disk failure prediction accuracy is used to evaluate disk failure prediction models. However, accuracy may not reflect their practical usage (protecting against failures, rather than only predicting failures) in cloud storage systems. In this paper, we propose two new metrics for disk failure prediction models: migration rate, which measures how much at-risk data is protected as a result of correct failure predictions, and mismigration rate, which measures how much data is migrated needlessly as a result of false failure predictions. To demonstrate their effectiveness, we compare disk failure prediction methods: (a) a classification tree (CT) model vs. a state-of-the-art recurrent neural network (RNN) model, and (b) a proposed residual life prediction model based on gradient boosted regression trees (GBRTs) vs. RNN. While prediction accuracy experiments favor the RNN model, migration rate experiments can favor the CT and GBRT models (depending on transfer rates). We conclude that prediction accuracy can be a misleading metric. Moreover, the proposed GBRT model offers a practical improvement in disk failure prediction in real-world data centers.
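A hedged sketch of how the two proposed metrics could be computed, under simplifying assumptions (uniform disk capacity, migration at a fixed transfer rate during the lead time before failure, and a full needless migration for every false positive); the paper's exact definitions may differ in detail.

```python
# Hedged sketch of migration rate / mismigration rate under stated assumptions.
def migration_metrics(true_pos_lead_times, n_false_pos, n_failures,
                      disk_capacity, transfer_rate):
    # data actually saved: bounded by what can be copied before each failure
    migrated = sum(min(disk_capacity, lead * transfer_rate)
                   for lead in true_pos_lead_times)
    at_risk = n_failures * disk_capacity
    migration_rate = migrated / at_risk if at_risk else 0.0

    # needless migrations triggered by false-positive predictions
    mismigrated = n_false_pos * disk_capacity
    total_moved = migrated + mismigrated
    mismigration_rate = mismigrated / total_moved if total_moved else 0.0
    return migration_rate, mismigration_rate
```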
Bag-of-Audio-Words Approach for Multimedia Event Classification
With the popularity of online multimedia videos, there has been much interest in recent years in acoustic event detection and classification for the improvement of online video search. The audio component of a video has the potential to contribute significantly to multimedia event classification. Recent research in audio document classification has drawn parallels to text and image document retrieval by employing what is referred to as the bag-of-audio-words (BoAW) method. In contrast to supervised approaches, in which audio concept detectors are trained using annotated data and the extracted labels are used as low-level features for multimedia event classification, the BoAW approach extracts audio concepts in an unsupervised fashion. Hence this method has the advantage that it can be employed easily for a new set of audio concepts in multimedia videos without going through a laborious annotation effort. In this paper, we explore variations of the BoAW method and present results on the NIST 2011 multimedia event detection (MED) dataset.
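A minimal sketch of the BoAW pipeline described above: cluster low-level audio frames (e.g., precomputed MFCCs) into an unsupervised codebook, then represent each clip as a normalized histogram of codeword assignments for a downstream event classifier. The codebook size and the use of k-means are illustrative choices, not necessarily the paper's configuration.

```python
# Minimal sketch: bag-of-audio-words codebook and clip-level histograms.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(training_frames, n_words=1000, seed=0):
    # training_frames: (num_frames, feature_dim) array pooled over many clips
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(training_frames)

def boaw_histogram(clip_frames, codebook):
    # assign each frame of one clip to its nearest codeword, then normalize counts
    words = codebook.predict(clip_frames)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```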
Electrophysiological and structural assessment of the central retina following intravitreal injection of bevacizumab for treatment of macular edema
Purpose To evaluate, with electrophysiological responses and Optical Coherence Tomography (OCT), the short-term functional and structural effects at the macula following intravitreal injection of bevacizumab for macular edema. Methods Prospective, non-randomized, interventional case study. In total, 17 eyes of 17 patients with macular edema due to vein occlusions and diabetic retinopathy received intravitreal bevacizumab. All patients underwent complete ophthalmic examination, including Snellen visual acuity testing, Multifocal Electroretinography (mfERG), Full Field Electroretinography (FERG), and OCT scanning at baseline, at 1 week, and at 2 months after intravitreal bevacizumab. Results FERG did not show any change in waveform parameters following intravitreal injection of bevacizumab. Average mfERG macular responses within the central 20° showed increased P1 amplitude (P < 0.05) at 2 months after treatment as compared to the baseline recordings in all subjects. No changes were seen in the implicit time. There was a 22% improvement in central retinal thickness (CRT) at 2 months compared to the baseline (P < .001). Conclusion Intravitreal injection of bevacizumab resulted in a reduction in central retinal thickness and mild to moderate improvement in the mfERG amplitudes in this short-term study. The visual acuity changes did not directly correlate with the reduced central retinal thickness or improvement in mfERG. The short-term results showed no serious ocular adverse effects. Therefore, on short-term follow-up, the off-label drug showed improvement of macular edema secondary to vein occlusion and diabetic retinopathy with no demonstrable toxic effects.
Embedded Devices Security and Firmware Reverse Engineering BH 13 US Workshop
Embedded devices have become the usual presence in the network of (m)any household(s), SOHO, enterprise or critical infrastructure. The preached Internet of Things promises to gazillionuple their number and heterogeneity in the next few years. However, embedded devices are becoming lately the usual suspects in security breaches and security advisories and thus become the Achilles’ heel of one’s overall infrastructure security. An important aspect is that embedded devices run on what’s commonly known as firmwares. To understand how to secure embedded devices, one needs to understand their firmware and how it works. This workshop aims at presenting a quick-start at how to inspect firmwares and a hands-on presentation with exercises on real firmwares from a security analysis standpoint.
CloseGraph: mining closed frequent graph patterns
Recent research on pattern discovery has progressed from mining frequent itemsets and sequences to mining structured patterns including trees, lattices, and graphs. As a general data structure, graphs can model complicated relations among data, with wide applications in bioinformatics, Web exploration, etc. However, mining large graph patterns is challenging due to the presence of an exponential number of frequent subgraphs. Instead of mining all the subgraphs, we propose to mine closed frequent graph patterns. A graph g is closed in a database if there exists no proper supergraph of g that has the same support as g. A closed graph pattern mining algorithm, CloseGraph, is developed by exploring several interesting pruning methods. Our performance study shows that CloseGraph not only dramatically reduces unnecessary subgraphs to be generated but also substantially increases the efficiency of mining, especially in the presence of large graph patterns.
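The closedness condition itself is easy to state in code; the sketch below filters an already-mined set of frequent patterns, keeping only those with no equal-support proper supergraph. It is not the CloseGraph mining algorithm, and the supergraph test is left as a placeholder.

```python
# Minimal sketch of the closedness check (not the CloseGraph mining algorithm).
def closed_patterns(frequent, is_proper_supergraph):
    # frequent: list of (pattern, support) pairs already mined
    closed = []
    for g, sup_g in frequent:
        dominated = any(sup_h == sup_g and is_proper_supergraph(h, g)
                        for h, sup_h in frequent if h is not g)
        if not dominated:          # no proper supergraph shares g's support
            closed.append((g, sup_g))
    return closed
```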
Efficient occupancy grid computation on the GPU with lidar and radar for road boundary detection
Accurate maps of the static environment are essential for many advanced driver-assistance systems. A new method for the fast computation of occupancy grid maps with laser range-finders and radar sensors is proposed. The approach utilizes the Graphics Processing Unit to overcome the limitations of classical occupancy grid computation in automotive environments. It is possible to generate highly accurate grid maps in just a few milliseconds without the loss of sensor precision. Moreover, in the case of a lower resolution radar sensor it is shown that it is suitable to apply super-resolution algorithms to achieve the accuracy of a higher resolution laser-scanner. Finally, a novel histogram based approach for road boundary detection with lidar and radar sensors is presented.
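For reference, a plain NumPy sketch of the per-cell log-odds update that an occupancy grid mapper performs; the paper's contribution is performing these updates massively in parallel on the GPU, and the inverse-sensor-model constants below are illustrative assumptions.

```python
# Hedged sketch: log-odds occupancy update for a single grid cell.
import numpy as np

L_OCC, L_FREE = 0.85, -0.4      # log-odds increments (illustrative constants)

def update_cell(log_odds, hit):
    """Update one cell's log-odds given whether the beam endpoint hit it."""
    return log_odds + (L_OCC if hit else L_FREE)

def occupancy_probability(log_odds):
    # convert accumulated log-odds back to an occupancy probability
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))
```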
Design of Cooperative Non-Orthogonal Multicast Cognitive Multiple Access for 5G Systems: User Scheduling and Performance Analysis
Non-orthogonal multiple access (NOMA) is emerging as a promising, yet challenging, multiple access technology to improve spectrum utilization for the fifth generation (5G) wireless networks. In this paper, the application of NOMA to multicast cognitive radio networks (termed as MCR-NOMA) is investigated. A dynamic cooperative MCR-NOMA scheme is proposed, where the multicast secondary users serve as relays to improve the performance of both primary and secondary networks. Based on the available channel state information (CSI), three different secondary user scheduling strategies for the cooperative MCR-NOMA scheme are presented. To evaluate the system performance, we derive the closed-form expressions of the outage probability and diversity order for both networks. Furthermore, we introduce a new metric, referred to as mutual outage probability to characterize the cooperation benefit compared to non-cooperative MCR-NOMA scheme. Simulation results demonstrate significant performance gains are obtained for both networks, thanks to the use of our proposed cooperative MCR-NOMA scheme. It is also demonstrated that higher spatial diversity order can be achieved by opportunistically utilizing the CSI available for the secondary user scheduling.
Cost-effectiveness analysis of aripiprazole vs standard-of-care in the management of community-treated patients with schizophrenia: STAR study.
BACKGROUND The Schizophrenia Trial of Aripiprazole (STAR) showed superior efficacy for aripiprazole compared with atypical antipsychotic standard-of-care (SOC) for the community treatment of schizophrenia, based on the Investigator Assessment Questionnaire total score. OBJECTIVE To determine the cost-effectiveness of aripiprazole compared with SOC medications from a health and social care system perspective. METHODS Information on health and social care service use was collected using the Client Socio-demographic and Service Receipt Inventory (CSSRI). Unit costs attached to each service were used to calculate patients' healthcare and other costs. The primary outcome measure was the Investigator's Assessment Questionnaire (IAQ) score; secondary measures included the Clinical Global Impression (CGI)-Improvement response and Quality of Life Scale (QLS). Incremental cost-effectiveness was measured over 26 weeks as the ratio of the difference in mean costs between aripiprazole and SOC (olanzapine, quetiapine and risperidone) to the difference in mean outcomes. Net benefit was used to plot the cost-effectiveness acceptability curve. RESULTS The analysis sample (all randomised subjects who met the study inclusion criteria) included 282 individuals randomised to aripiprazole and 266 to SOC (olanzapine, n = 75; quetiapine, n = 110 and risperidone, n = 81). The additional mean cost of achieving a clinically significant difference on the IAQ was £3896, where a clinically significant difference was taken to be an 8-point improvement. The cost-effectiveness acceptability curve for the IAQ indicated that aripiprazole has a relatively high probability of being viewed as cost-effective for a range of plausible values attached to the incremental outcome difference. Additional costs of a clinically significant improvement on the CGI-Improvement and QLS were £575 and £835, respectively. These measures therefore support the view that aripiprazole is more cost-effective than SOC from a health and social care perspective for people with schizophrenia treated in the community. CONCLUSION In the STAR study, use of aripiprazole in the management of patients with schizophrenia was cost-effective.
Neutrophil β2-adrenergic receptor coupling efficiency to Gs protein in subjects with post-traumatic stress disorder and normal controls
The symptomatology of post-traumatic stress disorder (PTSD) involves sympathetic hyperarousal. Several of these sympathetic symptoms are mediated through end-organ beta2-adrenergic receptors (β2AR). Increased sympathetic activity in PTSD could therefore be due to increased βAR function. This study investigated βAR function in 30 healthy controls and 20 drug-free PTSD patients. βAR binding studies were conducted using antagonist-saturation and agonist-displacement experiments. Measures of β2AR coupling to Gs protein were derived from agonist-displacement experiments. PTSD patients had significantly higher β2AR density – particularly in the high-conformational state – and higher β2AR coupling than controls, as reflected in a higher percentage of receptors in the high conformational state and a higher ratio of the agonist dissociation constant from the receptor in the low-/high-conformational state. Increased βAR function in PTSD is consistent with the symptomatology of this disorder. Increased βAR density and coupling may be consistent with downregulation of βAR density and uncoupling by antidepressants and may underlie their partial efficacy in PTSD. Dysregulation of Gs protein function is postulated, and agonist-mediated regulation of βAR expression and/or βAR kinase activity in PTSD should be investigated in future studies.
Single-institute comparative analysis of unrelated bone marrow transplantation and cord blood transplantation for adult patients with hematologic malignancies.
Unrelated cord blood transplantation (CBT) has now become more common, but as yet there have been only a few reports on its outcome compared with bone marrow transplantation (BMT), especially for adults. We studied the clinical outcomes of 113 adult patients with hematologic malignancies who received unrelated BM transplants (n = 45) or unrelated CB transplants (n = 68). We analyzed the hematopoietic recovery, rates of graft-versus-host disease (GVHD), risks of transplantation-related mortality (TRM) and relapse, and disease-free survival (DFS) using Cox proportional hazards models. The time from donor search to transplantation was significantly shorter among CB transplant recipients (median, 2 months) than BM transplant recipients (median, 11 months; P < .01). Multivariate analysis demonstrated slow neutrophil (P < .01) and platelet (P < .01) recoveries in CBT patients compared with BMT patients. Despite rapid tapering of immunosuppressants after transplantation and infrequent use of steroids to treat severe acute GVHD, there were no GVHD-related deaths among CB transplant recipients compared with 10 deaths of 24 among BM transplant recipients. Unrelated CBT showed better TRM and DFS results compared with BMT (P = .02 and P < .01, respectively), despite the higher human leukocyte antigen mismatching rate and lower number of infused cells. These data strongly suggest that CBT could be safely and effectively used for adult patients with hematologic malignancies.
Span-Based Constituency Parsing with a Structure-Label System and Provably Optimal Dynamic Oracles
Parsing accuracy using efficient greedy transition systems has improved dramatically in recent years thanks to neural networks. Despite striking results in dependency parsing, however, neural models have not surpassed state-of-the-art approaches in constituency parsing. To remedy this, we introduce a new shift-reduce system whose stack contains merely sentence spans, represented by a bare minimum of LSTM features. We also design the first provably optimal dynamic oracle for constituency parsing, which runs in amortized O(1) time, compared to O(n) oracles for standard dependency parsing. Training with this oracle, we achieve the best F1 scores on both English and French of any parser that does not use reranking or external data.
Rehabilitation of hand function after spinal cord injury using a novel handgrip device: a pilot study
Activity-based therapy (ABT) for patients with spinal cord injury (SCI), which consists of repetitive use of muscles above and below the spinal lesion, improves locomotion and arm strength. Less data has been published regarding its effects on hand function. We sought to evaluate the effects of a weekly hand-focused therapy program using a novel handgrip device on grip strength and hand function in a SCI cohort. Patients with SCI were enrolled in a weekly program that involved activities with the MediSens (Los Angeles, CA) handgrip. These included maximum voluntary contraction (MVC) and a tracking task that required each subject to adjust his/her grip strength according to a pattern displayed on a computer screen. For the latter, performance was measured as mean absolute accuracy (MAA). The Spinal Cord Independence Measure (SCIM) was used to measure each subject’s independence prior to and after therapy. Seventeen patients completed the program with average participation duration of 21.3 weeks. The cohort included patients with American Spinal Injury Association (ASIA) Impairment Scale (AIS) A (n = 12), AIS B (n = 1), AIS C (n = 2), and AIS D (n = 2) injuries. The average MVC for the cohort increased from 4.1 N to 21.2 N over 20 weeks, but did not reach statistical significance. The average MAA for the cohort increased from 9.01 to 21.7% at the end of the study (p = .02). The cohort’s average SCIM at the end of the study was unchanged compared to baseline. A weekly handgrip-based ABT program is feasible and efficacious at increasing hand task performance in subjects with SCI.
Rugby-Specific Small-Sided Games Training Is an Effective Alternative to Stationary Cycling at Reducing Clinical Risk Factors Associated with the Development of Type 2 Diabetes: A Randomized, Controlled Trial
INTRODUCTION The present study investigated whether rugby small-sided games (SSG) could be an effective alternative to continuous stationary cycling (CYC) training at reducing clinical risk factors associated with the development of type 2 diabetes mellitus (T2DM). METHODS Thirty-three middle-aged (48.6 ± 6.6 y), inactive men were randomized into a CYC (n=11), SSG (n=11), or control (CON, n=11) group. Participants trained 3 d·wk^-1 for 8 weeks, while control participants maintained normal activity and dietary patterns. Exercise duration was matched between groups, which involved CYC or SSG (four quarters, interspersed with 2-min passive recovery). Both training programs were designed to induce similar internal loads of maximal heart rate (~80-85% HRmax) and rating of perceived exertion. Pre- and post-intervention testing included dual-energy x-ray absorptiometry scan, graded exercise test, fasting 2 h oral glucose tolerance test and resting muscle biopsy. Western blotting was used to assess the content of skeletal muscle proteins associated with mitochondrial biogenesis and glucose regulation. RESULTS Both CYC and SSG increased VO2 at 80% HRmax, and reduced glycated haemoglobin, glucose area under the curve (AUC; SSG, -2.3±2.4; CYC -2.2±1.6 mmol·L^-1·(120 min)^-1; p<0.05), and total body fat mass (SSG -2.6±0.9%; CYC -2.9±1.1%), compared to no change in CON (p<0.05). SSG reduced insulin AUC (-30.4±40.7 µIU·mL^-1·(120 min)^-1; p<0.05) and increased total body fat-free mass (1.1±1.2 kg; p<0.05), with no change in CYC or CON (p>0.05). There were no differences within or between conditions for protein content of peroxisome proliferator-activated receptor gamma coactivator-1α, sirtuin-1, p53, glucose transporter-4, protein kinase AKT/PKB, myocyte enhancer factor 2A, mitochondrial transcription factor, nuclear respiratory factor (NRF)-1, NRF-2 or mitochondrial complexes I-V (p>0.05). CONCLUSION Rugby small-sided games training is an effective alternative to continuous cycling for improving metabolic risk factors associated with the development of T2DM. Despite such positive adaptations in clinical risk factors, there were no changes in the content of skeletal muscle proteins associated with glucose regulation and mitochondrial biogenesis. TRIAL REGISTRATION Australian New Zealand Clinical Trial Registry ACTRN12613000874718.
Energy-efficient packet transmission over a wireless link
The paper considers the problem of minimizing the energy used to transmit packets over a wireless link via lazy schedules that judiciously vary packet transmission times. The problem is motivated by the following observation. With many channel coding schemes, the energy required to transmit a packet can be significantly reduced by lowering transmission power and code rate, and therefore transmitting the packet over a longer period of time. However, information is often time-critical or delay-sensitive and transmission times cannot be made arbitrarily long. We therefore consider packet transmission schedules that minimize energy subject to a deadline or a delay constraint. Specifically, we obtain an optimal offline schedule for a node operating under a deadline constraint. An inspection of the form of this schedule naturally leads us to an online schedule which is shown, through simulations, to perform closely to the optimal offline schedule. Taking the deadline to infinity, we provide an exact probabilistic analysis of our offline scheduling algorithm. The results of this analysis enable us to devise a lazy online algorithm that varies transmission times according to backlog. We show that this lazy schedule is significantly more energy-efficient compared to a deterministic (fixed transmission time) schedule that guarantees queue stability for the same range of arrival rates.
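The energy-delay trade-off motivating these lazy schedules can be illustrated with a small numerical sketch. The Shannon-style energy model below is an assumption chosen for illustration (the paper works with general convex, monotone energy functions), and the equal-split comparison only mirrors the special case of simultaneous arrivals with a common deadline.

```python
import numpy as np

# Illustrative only: a Shannon-style energy model (not the paper's exact
# formulation) showing why "lazy" schedules save energy. Transmitting B bits
# over duration tau on a channel of bandwidth W and noise power N0*W requires
# power P = N0*W*(2**(B/(W*tau)) - 1), so energy E = P*tau, which is
# decreasing and convex in tau.
def tx_energy(bits, tau, W=1e6, N0=1e-9):
    rate = bits / tau                       # required bit rate
    power = N0 * W * (2 ** (rate / W) - 1)  # power needed to sustain that rate
    return power * tau

B = 1e6  # one megabit
for tau in [0.5, 1.0, 2.0, 4.0]:
    print(f"tau = {tau:>4} s  ->  energy = {tx_energy(B, tau):.3e} J")

# With M equal-sized packets all available at t=0 and a common deadline T,
# convexity of tx_energy in tau implies (by Jensen's inequality) that equal
# per-packet durations T/M minimize total energy, which is the intuition behind
# the optimal offline schedule; the paper handles general arrival times.
M, T = 10, 10.0
equal = M * tx_energy(B, T / M)
skewed = sum(tx_energy(B, t) for t in [0.1] * 5 + [1.9] * 5)
print(f"equal split: {equal:.3e} J   skewed split: {skewed:.3e} J")
```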
Real-Time Billboard Substitution in a Video Stream
We present a system that accepts as input a continuous stream of TV broadcast images from sports events. The system automatically detects a predetermined billboard in the scene, and replaces it with a user defined pattern, with no cooperation from the billboards or camera operators. The replacement is performed seamlessly so that a viewer should not be able to detect the substitution. This allows the targeting of advertising to the appropriate audience, which is especially useful for international events. This requires several modules using state of the art computer graphics, image processing and computer vision technology. The system relies on modular design, and on a pipeline architecture, in which the search and track modules propagate their results in symbolic form throughout the pipeline buffer, and the replacement is performed at the exit of the pipe only, therefore relying on accumulated information. Also, images are processed only once. This allows the system to make replacement decisions based on complete sequences, thus avoiding mid-sequence on screen billboard changes. We present the algorithms and the overall system architecture, and discuss further applications of the technology.
Impact of Grassland Disasters on the Restructuring of Western Ecosystem
By integrating different theories concerning grassland science, ecological science, earth science, resources and environment science, social science and disaster prevention science, this paper deals with the necessity of preventing grassland disasters in western regions, followed by an analysis of the major types of disasters, especially those of the past few years that have caused great losses. In the latter part, it presents a study of the natural and human factors that may contribute to disasters, and proposes ways to revive the grassland and restructure the ecosystem in the western region.
Nonreciprocal Horn Antennas Using Angular Momentum-Biased Metamaterial Inclusions
In this work, we apply angular momentum-biased metamaterials to break the intrinsic reciprocity of antenna systems, with potential applications to satellite communications. We focus our attention on a conical horn antenna exhibiting nonreciprocal response for left-handed and right-handed circularly polarized fields. A metallic screen with an annular aperture is placed between the feeding waveguide and the horn. This inclusion is loaded with modulated capacitors to achieve a nonreciprocal transmittance for circularly polarized fields impinging from opposite sides. We present the analysis of the nonreciprocal annular inclusion and explain its nonreciprocal response, obtained through proper spatiotemporal modulation, with the help of an equivalent transmission-line representation. A possible practical implementation of the modulation circuit based on band-pass and band-stop filters is also discussed. The electrical and radiating performance of the conical horn is numerically evaluated, showing the ability of the proposed antenna to filter signals with same polarization in two separate bands, e.g., uplink and downlink bands of a satellite link, depending on the propagation direction of the signal. These results may pave the way to several interesting applications of nonreciprocal radiators in satellite and radar systems.
Fast, Automated, Scalable Generation of Textured 3D Models of Indoor Environments
3D modeling of building architecture from mobile scanning is a rapidly advancing field. These models are used in virtual reality, gaming, navigation, and simulation applications. State-of-the-art scanning produces accurate point-clouds of building interiors containing hundreds of millions of points. This paper presents several scalable surface reconstruction techniques to generate watertight meshes that preserve sharp features in the geometry common to buildings. Our techniques can automatically produce high-resolution meshes that preserve the fine detail of the environment by performing a ray-carving volumetric approach to surface reconstruction. We present methods to automatically generate 2D floor plans of scanned building environments by detecting walls and room separations. These floor plans can be used to generate simplified 3D meshes that remove furniture and other temporary objects. We propose a method to texture-map these models from captured camera imagery to produce photo-realistic models. We apply these techniques to several data sets of building interiors, including multi-story datasets.
Development of a biomechanical energy harvester
Biomechanical energy harvesting–generating electricity from people during daily activities–is a promising alternative to batteries for powering increasingly sophisticated portable devices. We recently developed a wearable knee-mounted energy harvesting device that generated electricity during human walking. In this methods-focused paper, we explain the physiological principles that guided our design process and present a detailed description of our device design with an emphasis on new analyses. Effectively harvesting energy from walking requires a small lightweight device that efficiently converts intermittent, bi-directional, low speed and high torque mechanical power to electricity, and selectively engages power generation to assist muscles in performing negative mechanical work. To achieve this, our device used a one-way clutch to transmit only knee extension motions, a spur gear transmission to amplify the angular speed, a brushless DC rotary magnetic generator to convert the mechanical power into electrical power, a control system to determine when to open and close the power generation circuit based on measurements of knee angle, and a customized orthopaedic knee brace to distribute the device reaction torque over a large leg surface area. The device selectively engaged power generation towards the end of swing extension, assisting knee flexor muscles by producing substantial flexion torque (6.4 Nm), and efficiently converted the input mechanical power into electricity (54.6%). Consequently, six subjects walking at 1.5 m/s generated 4.8 ± 0.8 W of electrical power with only a 5.0 ± 2.1 W increase in metabolic cost. Biomechanical energy harvesting is capable of generating substantial amounts of electrical power from walking with little additional user effort, making future versions of this technology particularly promising for charging portable medical devices.
User Oriented Resource Management With Virtualization: A Hierarchical Game Approach
The explosive advancements in mobile Internet and the Internet of Things challenge network capacity and architecture. The ossification of wireless networks hinders the further evolution toward the fifth generation of mobile communication systems. Ultra-dense small cell networks are considered a feasible way to meet high-capacity demands. Meanwhile, ultra-dense small cell network virtualization also offers an insightful perspective on this evolution because of its advantages, such as diversity, flexibility, low cost, and scalability. In this paper, we specify the necessity of resource management in virtualized ultra-dense small cell networks through a mapping and management architecture and consider the problem of user-oriented virtual resource management. Then, we model the virtual resource management problem as a hierarchical game and obtain closed-form solutions for spectrum, power, and price, respectively. Furthermore, we propose a customer-first (CF) algorithm that characterizes the user-oriented service of virtualization, and analyze its convergence. Simulation results demonstrate the effectiveness of the proposed CF algorithm.
DOEF: A Dynamic Object Evaluation Framework
In object-oriented or object-relational databases such as multimedia databases or most XML databases, access patterns are not static, i.e., applications do not always access the same objects in the same order repeatedly. However, this has been the way these databases and associated optimisation techniques like clustering have been evaluated up to now. This paper opens up research regarding this issue by proposing a dynamic object evaluation framework (DOEF) that accomplishes access pattern change by defining configurable styles of change. This preliminary prototype has been designed to be open and fully extensible. To illustrate the capabilities of DOEF, we used it to compare the performances of four state of the art dynamic clustering algorithms. The results show that DOEF is indeed effective at determining the adaptability of each dynamic clustering algorithm to changes in access pattern.
Transportation Modes Classification Using Sensors on Smartphones
This paper investigates transportation and vehicular mode classification using big data from smartphone sensors. The three types of sensors used in this paper are the accelerometer, magnetometer, and gyroscope. This study proposes improved features and uses three machine learning algorithms, including decision trees, K-nearest neighbor, and support vector machines, to classify the user's transportation and vehicular modes. In the experiments, we discuss and compare the performance from different perspectives, including the accuracy for both modes, the execution time, and the model size. Results show that the proposed features enhance the accuracy, with the support vector machine providing the best classification accuracy while consuming the largest prediction time. This paper also investigates the vehicular classification mode and compares the results with those of the transportation modes.
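A minimal sketch of the pipeline described above, assuming simple statistical features over fixed-length accelerometer windows (the paper's improved feature set and its data are not reproduced here); the three classifier families compared are the ones named in the abstract.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Illustrative sketch (not the paper's exact feature set): turn raw tri-axial
# accelerometer windows into simple statistical features and compare the three
# classifier families the paper evaluates.
def window_features(acc_xyz):
    """acc_xyz: (n_samples, 3) accelerometer window."""
    mag = np.linalg.norm(acc_xyz, axis=1)            # acceleration magnitude
    return np.hstack([acc_xyz.mean(0), acc_xyz.std(0),
                      [mag.mean(), mag.std(), mag.max() - mag.min()]])

rng = np.random.default_rng(0)
# Synthetic stand-in data: 200 windows of 128 samples, 4 hypothetical modes.
X = np.array([window_features(rng.normal(size=(128, 3)) * (1 + m))
              for m in range(4) for _ in range(50)])
y = np.repeat(np.arange(4), 50)

for name, clf in [("decision tree", DecisionTreeClassifier()),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", gamma="scale"))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:>13}: {acc:.2f} accuracy on synthetic data")
```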
Seabed characterization on the New Jersey middle and outer shelf: correlatability and spatial variability of seafloor sediment properties
Nearly 100 collocated grab samples and in situ 65 kHz acoustic measurements were collected on the New Jersey middle and outer shelf within an area that had previously been mapped with multibeam backscatter and bathymetry data, and more recently with chirp seismic reflection profiling. Eighteen short cores were also collected and probed for resistivity-based porosity measurements. The combined data set provides a basis for empirically exploring the relationship among the remotely sensed data, such as backscatter and reflection coefficients, and directly measured seabed properties such as grain size distribution, velocity, attenuation and porosity. We also investigate the spatial variability of these properties through semi-variogram analysis to facilitate acoustic modeling of natural environmental variability. Grain size distributions on the New Jersey shelf are commonly multi-modal, leading us to separately characterize coarse % (>4 mm), fine % (<63 μm) and mean sand grain size to quantify the distribution. We find that the backscatter is dominated by the coarse component (expressed as weight %), typically shell hash and occasionally terrigenous gravel. In sediment types where coarse material is not significant, backscatter correlates with velocity and fine weight %. Mean sand grain size and fine % are partially correlated with each other, and combined represent the primary control on velocity. The fine %, rather than mean grain size as a whole, appears to be the primary control on attenuation, although coarse % may increase attenuation marginally through scattering. Vertical-incidence seismic reflection coefficients, carefully culled of unreliable values, exhibit a strong correlation with the in situ velocity measurements, suggesting that such data may prove more reliable than backscatter at deriving sediment physical properties from remote sensing data. The velocity and mean sand grain size semi-variograms can be fitted with a von Kármán statistical model with horizontal scale ~12.6 km, which provides a basis for generating synthetic realizations. The backscatter and coarse % semi-variograms exhibit two horizontal scales: one ~8 km and the other too small to quantify with available data.
An MRI and proton spectroscopy study of the thalamus in children with autism
Thalamic alterations have been reported in autism, but the relationships between these abnormalities and clinical symptoms, specifically sensory features, have not been elucidated. The goal of this investigation is to combine two neuroimaging methods to examine further the pathophysiology of thalamic anomalies in autism and to identify any association with sensory deficits. Structural MRI and multi-voxel, short echo-time proton magnetic resonance spectroscopy ((1)H MRS) measurements were collected from 18 male children with autism and 16 healthy children. Anatomical measurements of thalamic nuclei and absolute concentration levels of key (1)H MRS metabolites were obtained. Sensory abnormalities were assessed using a sensory profile questionnaire. Lower levels of N-acetylaspartate (NAA), phosphocreatine and creatine, and choline-containing metabolites were observed on the left side in the autism group compared with controls. No differences in thalamic volumes were observed between the two groups. Relationships, although limited, were observed between measures of sensory abnormalities and (1)H MRS metabolites. Findings from this study support the role of the thalamus in the pathophysiology of autism and more specifically in the sensory abnormalities observed in this disorder. Further investigations of this structure are warranted, since it plays an important role in information processing as part of the cortico-thalamo-cortical pathways.
Impact evaluation of Swiss Medical Board reports on routine care in Switzerland: a case study of PSA screening and treatment for rupture of anterior cruciate ligament.
QUESTIONS UNDER STUDY Evidence-based recommendations play an important role in medical decision-making, but barriers to adherence are common. In Switzerland, the Swiss Medical Board (SMB) publishes evidence reports that conclude with recommendations. We assessed the impact of two SMB reports on service provision (2009: Recommendation of conservative treatment as first option for rupture of the anterior cruciate ligament (ACL) of the knee; 2011: Recommendation against PSA screening for prostate cancer). METHODS We performed an observational study and assessed quantitative data over time via interrupted time series analyses. The primary outcome was the quarterly number of performed prostate-specific antigen (PSA) tests and the annual rates of surgical ACL repair in patients with ACL rupture. Data were adjusted for time trends and relevant confounders. RESULTS We analysed PSA tests in 662,874 outpatients from 2005-2013 and treatment data in 101,737 patients with knee injury from 1990-2011. For the number of PSA tests, the secular trend before the intervention showed a continuous but diminishing increase over time. A statistically significant reduction in tests was estimated immediately after the intervention, but a later return to the trend before the intervention cannot be ruled out. The rate of surgical ACL repair had already declined after the late 1990s to about 55% in 2009. No relevant additional change emerged in this secular trend after the intervention. CONCLUSIONS Despite some evidence of a possible change, we did not find a sustained and significant impact of SMB recommendations in our case study. Further monitoring is needed to confirm or refute these findings.
Round Robin Classification
In this paper, we discuss round robin classification (aka pairwise classification), a technique for handling multi-class problems with binary classifiers by learning one classifier for each pair of classes. We present an empirical evaluation of the method, implemented as a wrapper around the Ripper rule learning algorithm, on 20 multi-class datasets from the UCI database repository. Our results show that the technique is very likely to improve Ripper’s classification accuracy without having a high risk of decreasing it. More importantly, we give a general theoretical analysis of the complexity of the approach and show that its run-time complexity is below that of the commonly used one-against-all technique. These theoretical results are not restricted to rule learning but are also of interest to other communities where pairwise classification has recently received some attention. Furthermore, we investigate its properties as a general ensemble technique and show that round robin classification with C5.0 may improve C5.0’s performance on multi-class problems. However, this improvement does not reach the performance increase of boosting, and a combination of boosting and round robin classification does not produce any gain over conventional boosting. Finally, we show that the performance of round robin classification can be further improved by a straight-forward integration with bagging.
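A minimal sketch of round robin (pairwise) classification follows. Ripper has no standard Python implementation, so a decision tree stands in as the base binary learner; the wrapper logic (one classifier per class pair, prediction by voting) is the technique described above.

```python
from itertools import combinations
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_iris

# Minimal round robin (pairwise) classification sketch. The paper wraps the
# Ripper rule learner; here a decision tree stands in as the base learner.
class RoundRobin:
    def __init__(self, base=DecisionTreeClassifier):
        self.base = base

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.models_ = {}
        for a, b in combinations(self.classes_, 2):    # one learner per pair
            mask = np.isin(y, [a, b])
            self.models_[(a, b)] = self.base().fit(X[mask], y[mask])
        return self

    def predict(self, X):
        votes = np.zeros((len(X), len(self.classes_)), dtype=int)
        index = {c: i for i, c in enumerate(self.classes_)}
        for (a, b), m in self.models_.items():         # each pair casts a vote
            for row, pred in enumerate(m.predict(X)):
                votes[row, index[pred]] += 1
        return self.classes_[votes.argmax(axis=1)]     # majority wins

X, y = load_iris(return_X_y=True)
print((RoundRobin().fit(X, y).predict(X) == y).mean())
```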
Clinical Neurology and Epidemiology of the Major Neurodegenerative Diseases.
Neurodegenerative diseases are a common cause of morbidity and cognitive impairment in older adults. Most clinicians who care for the elderly are not trained to diagnose these conditions, perhaps other than typical Alzheimer's disease (AD). Each of these disorders has varied epidemiology, clinical symptomatology, laboratory and neuroimaging features, neuropathology, and management. Thus, it is important that clinicians be able to differentiate and diagnose these conditions accurately. This review summarizes and highlights clinical aspects of several of the most commonly encountered neurodegenerative diseases, including AD, frontotemporal dementia (FTD) and its variants, progressive supranuclear palsy (PSP), corticobasal degeneration (CBD), Parkinson's disease (PD), dementia with Lewy bodies (DLB), multiple system atrophy (MSA), and Huntington's disease (HD). For each condition, we provide a brief overview of the epidemiology, defining clinical symptoms and diagnostic criteria, relevant imaging and laboratory features, genetics, pathology, treatments, and differential diagnosis.
Learning without Memorizing
Incremental learning (IL) is an important task aimed to increase the capability of a trained model, in terms of the number of classes recognizable by the model. The key problem in this task is the requirement of storing data (e.g. images) associated with existing classes, while training the classifier to learn new classes. However, this is impractical as it increases the memory requirement at every incremental step, which makes it impossible to implement IL algorithms on the edge devices with limited memory. Hence, we propose a novel approach, called ‘Learning without Memorizing (LwM)’, to preserve the information with respect to existing (base) classes, without storing any of their data, while making the classifier progressively learn the new classes. In LwM, we present an information preserving penalty: Attention Distillation Loss (LAD), and demonstrate that penalizing the changes in classifiers’ attention maps helps to retain information of the base classes, as new classes are added. We show that adding LAD to the distillation loss which is an existing information preserving loss consistently outperforms the state-of-the-art performance in the iILSVRC-small and iCIFAR-100 datasets in terms of the overall accuracy of base and incrementally learned classes.
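A hedged sketch of what an attention distillation penalty of this kind can look like: an L1 gap between normalized attention maps of the frozen base-class teacher and the incremental student. The function name and the exact normalization are assumptions for illustration; producing the attention maps themselves (the paper uses Grad-CAM-style maps) is outside this snippet.

```python
import torch

# Hedged sketch of an attention distillation penalty in the spirit of LwM: an
# L1 distance between length-normalized, vectorized attention maps of the
# frozen teacher (trained on base classes) and the current student for the
# same inputs. How the maps are generated is assumed to happen elsewhere.
def attention_distillation_loss(att_teacher, att_student, eps=1e-8):
    """att_*: (batch, H, W) attention maps for the same inputs."""
    t = att_teacher.flatten(1)
    s = att_student.flatten(1)
    t = t / (t.norm(dim=1, keepdim=True) + eps)    # unit L2 norm per sample
    s = s / (s.norm(dim=1, keepdim=True) + eps)
    return (t - s).abs().sum(dim=1).mean()         # L1 gap, averaged over batch

# Assumed overall objective during an incremental step (weights are tuning knobs):
# loss = cls_loss + lam_d * distillation_loss + lam_ad * attention_distillation_loss(...)
```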
Centralities in Large Networks: Algorithms and Observations
Node centrality measures are important in a large number of graph applications, from search and ranking to social and biological network analysis. In this paper we study node centrality for very large graphs, up to billions of nodes and edges. Various definitions for centrality have been proposed, ranging from very simple (e.g., node degree) to more elaborate. However, measuring centrality in billion-scale graphs poses several challenges. Many of the “traditional” definitions such as closeness and betweenness were not designed with scalability in mind. Therefore, it is very difficult, if not impossible, to compute them both accurately and efficiently. In this paper, we propose centrality measures suitable for very large graphs, as well as scalable methods to effectively compute them. More specifically, we propose effective closeness and LINERANK which are designed for billion-scale graphs. We also develop algorithms to compute the proposed centrality measures in MAPREDUCE, a modern paradigm for large-scale, distributed data processing. We present extensive experimental results on both synthetic and real datasets, which demonstrate the scalability of our approach to very large graphs, as well as interesting findings and anomalies.
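For contrast with the scalable measures proposed in the paper, the baseline they approximate, plain closeness centrality, can be computed exactly by breadth-first search as below; the paper replaces this exhaustive computation with probabilistic counting on MapReduce, which is not reproduced here.

```python
from collections import deque

# Baseline sketch: exact closeness centrality via BFS. This is the quantity
# that "effective closeness" approximates at billion-node scale, since the
# exact computation below visits the whole graph once per node.
def closeness(adj):
    scores = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:                          # plain BFS from s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total = sum(dist.values())
        scores[s] = (len(dist) - 1) / total if total else 0.0
    return scores

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(closeness(graph))
```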
Sociolinguistic Foundations to African Centered Pedagogy: A Literature Review
PERIOD I: INDIGENOUS VOICES PERIOD II: COLONIZATION OF THE LITERATURE OF AFRICAN AMERICAN LANGUAGE BEHAVIORS PERIOD III: POST COLONIAL STUDIES: TOWARD AN ETHNOGRAPHY OF AFRICAN AMERICAN LANGUAGE BEHAVIORS PREFACE To the contemporary investigator of African centered pedagogies, merely looking within the broad background of educational research would miss a substantial portion of the entire ethnographic range of investigations. An integral body of literature reposes in two other significant areas: first, within the indigenous voices of African American writers, pioneer ethnographers and scholars; and secondly, within the ethnographic investigations of the field of sociolinguistics. This literature review will focus upon the sociolinguistic foundations of the ethnographies which have formed a substantial part, both in conceptual and applied contexts, of the discussion and development of African centered pedagogies in more recent years. The core issue in the relationship between sociolinguistics and African centered pedagogies is the study of the culture-specific behavior of the African American speech community; the pedagogical corollary of that relationship is the applied strategies of using those patterned and valued ways of speaking for teaching and learning effectiveness in formal classroom settings. The point of departure in my own readings is with the writings of Carol Lee. This author touches upon the concept of "signifying" as a valued form of oral discourse in the African American speech community. Signifying can be characterized as the verbal art of using dual meanings, innuendo and the play upon the sound and meanings of words. As a form of discourse, signifying has been the focus of much investigation. Carol Lee briefly discusses some of the background literature and touches lightly upon many other researchers' work within the field of sociolinguistics. Her article is replete with references on the topic. Since this is such a rich and important focus of research, it is important to delve more deeply into this valuable body of literature. The scope of the review of this body of literature might be delineated in the following historically based outline: Period I: Indigenous Voices To ignore the indigenous voices in the critical history of African American education, or to bypass those perspectives in the earliest of ethnographies, is to deny African Americans that recognition of self-knowledge that would be factually and grossly incorrect. The earliest recorded voices have commented on the processes of education (or schooling, might be more accurate) in relation to the cognitive abilities and language behaviors of African Americans. Maya Angelou, in her poignant autobiography entitled I Know Why The Caged Bird Sings (1969) has described eighth grade graduation day in her native Stamps, Arkansas. The graduation speaker was unequivocal about the visiting artists and the microscopes going to the white schools, while how proud the town was that the athletes (Black students graduating) would be going on to the agricultural and mechanical college. Angelou states: "The white kids were going to have a chance to become Galileos and Madame Curies and Edisons and Gauguins, and our boys (the girls weren't even in on it) would try to be Jesse Owenses and Joe Louises" (p. 151). 
Although Angelou's memoir was not published until 1969, the period of recollection for this dubious kind of graduation ceremony is set in the 1930's and 40's--at the time when African American cultural and artistic contributions were making a mark in the rest of the country (and world). The indigenous voices, contemporary to this period, were indeed recorded, although not published widely at the time. Ironically, while Maya Angelou was listening disappointedly at a biased graduation drama in Arkansas, across the country in Washington, DC, the Black intellectual and philosopher, Carter Woodson was graphically portraying her and many others' plight in the pivotal The MisEducation of the Negro (1933). …
Mathematics Learning through Computational Thinking Activities: A Systematic Literature Review
Computational Thinking is a term that embraces the complex set of reasoning processes involved in stating and solving problems through a computational tool. The ability to systematize problems and solve them by these means is currently considered a skill to be developed by all students, together with Language, Mathematics and Sciences. Considering that Computer Science has many of its roots in Mathematics, it is reasonable to ask whether and how Mathematics learning can be influenced by offering students activities related to Computational Thinking. In this sense, this article presents a Systematic Literature Review of reported evidence of Mathematics learning in activities aimed at developing Computational Thinking skills. Forty-two articles published from 2006 to 2017 that presented didactic activities together with an experimental design to evaluate learning outcomes were analyzed. The majority of the identified activities used a software tool or hardware device for their development. In these papers, a wide variety of mathematical topics has been addressed, with some emphasis on Planar Geometry and Algebra. Conversion of models and solutions between different semiotic representations is the high-level cognitive skill most frequently associated with educational outcomes. This review indicated that more recent articles present a higher level of rigor in their methodological procedures to assess learning effects. However, joint analysis of evidence from more than one data source is still not frequently used as a validation procedure.
Randomised Double-Blind Comparison of Placebo and Active Drugs for Effects on Risks Associated with Blood Pressure Variability in the Systolic Hypertension in Europe Trial
BACKGROUND In the Systolic Hypertension in Europe trial (NCT02088450), we investigated whether systolic blood pressure variability determines prognosis over and beyond level. METHODS Using a computerised random function and a double-blind design, we randomly allocated 4695 patients (≥60 years) with isolated systolic hypertension (160-219/<95 mm Hg) to active treatment or matching placebo. Active treatment consisted of nitrendipine (10-40 mg/day) with possible addition of enalapril (5-20 mg/day) and/or hydrochlorothiazide (12.5-25.0 mg/day). We assessed whether on-treatment systolic blood pressure level (SBP), visit-to-visit variability independent of the mean (VIM) or within-visit variability (WVV) predicted total (n = 286) or cardiovascular (n = 150) mortality or cardiovascular (n = 347), cerebrovascular (n = 133) or cardiac (n = 217) endpoints. FINDINGS At 2 years, mean between-group differences were 10.5 mm Hg (p<0.0001) for SBP, 0.29 units (p = 0.20) for VIM, and 0.07 mm Hg (p = 0.47) for WVV. Active treatment reduced (p≤0.048) cardiovascular (-28%), cerebrovascular (-40%) and cardiac (-24%) endpoints. In analyses dichotomised by the median, patients with low vs. high VIM had similar event rates (p≥0.14). Low vs. high WVV was not associated with event rates (p≥0.095), except for total and cardiovascular mortality on active treatment, which were higher with low WVV (p≤0.0003). In multivariable-adjusted Cox models, SBP predicted all endpoints (p≤0.0043), whereas VIM did not predict any (p≥0.058). Except for an inverse association with total mortality (p = 0.042), WVV was not predictive (p≥0.15). Sensitivity analyses, from which we excluded blood pressure readings within 6 months after randomisation, 6 months prior to an event or both were confirmatory. CONCLUSIONS The double-blind placebo-controlled Syst-Eur trial demonstrated that blood-pressure lowering treatment reduces cardiovascular complications by decreasing level but not variability of SBP. Higher blood pressure level, but not higher variability, predicted risk. TRIAL REGISTRATION ClinicalTrials.gov NCT02088450.
System Software for Ubiquitous Computing
Ubiquitous computing, or ubicomp, systems designers embed devices in various physical objects and places. Frequently mobile, these devices, such as those we carry with us and embed in cars, are typically wirelessly networked. Some 30 years of research have gone into creating distributed computing systems, and we've invested nearly 20 years of experience in mobile computing. With this background, and with today's developments in miniaturization and wireless operation, our community seems poised to realize the ubicomp vision. However, we aren't there yet. Ubicomp software must deliver functionality in our everyday world. It must do so on failure-prone hardware with limited resources. Additionally, ubicomp software must operate in conditions of radical change. Varying physical circumstances cause components routinely to make and break associations with peers of a new degree of functional heterogeneity. Mobile and distributed computing research has already addressed parts of these requirements, but a qualitative difference remains between the requirements and the achievements. In this article, we examine today's ubiquitous systems, focusing on software infrastructure, and discuss the road that lies ahead. We base our analysis on physical integration and spontaneous interoperation, two main characteristics of ubicomp systems, because much of the ubicomp vision, as expounded by Mark Weiser and others [1], either deals directly with or is predicated on them. Physical integration A ubicomp system involves some integration between computing nodes and the physical world. For example, a smart coffee cup, such as a MediaCup [2], serves as a coffee cup in the usual way but also contains sensing, processing, and networking elements that let it communicate its state (full or empty, held or put down). So, the cup can give colleagues a hint about the state of the cup's owner. Or consider a smart meeting room that senses the presence of users in meetings, records their actions [3], and provides services as they sit at a table or talk at a whiteboard [4]. The room contains digital furniture such as chairs with sensors, whiteboards that record what's written on them, and projectors that you can activate from anywhere in the room using a PDA (personal digital assistant). Human administrative, territorial, and cultural considerations mean that ubicomp takes place in more or less discrete environments based, for example, on homes, rooms, or airport lounges. In other words, the world consists of ubiquitous systems rather than "the ubiquitous system." So, from physical integration, we draw our …
AppIntent: analyzing sensitive data transmission in android for privacy leakage detection
Android phones often carry personal information, attracting malicious developers to embed code in Android applications to steal sensitive data. With known techniques in the literature, one may easily determine if sensitive data is being transmitted out of an Android phone. However, transmission of sensitive data in itself does not necessarily indicate privacy leakage; a better indicator may be whether the transmission is by user intention or not. When transmission is not intended by the user, it is more likely a privacy leakage. The problem is how to determine if transmission is user intended. As a first solution in this space, we present a new analysis framework called AppIntent. For each data transmission, AppIntent can efficiently provide a sequence of GUI manipulations corresponding to the sequence of events that lead to the data transmission, thus helping an analyst to determine if the data transmission is user intended or not. The basic idea is to use symbolic execution to generate the aforementioned event sequence, but straightforward symbolic execution proves to be too time-consuming to be practical. A major innovation in AppIntent is to leverage the unique Android execution model to reduce the search space without sacrificing code coverage. We also present an evaluation of AppIntent with a set of 750 malicious apps, as well as 1,000 top free apps from Google Play. The results show that AppIntent can effectively help separate the apps that truly leak user privacy from those that do not.
A Systematic Review of Information Security Frameworks in the Internet of Things (IoT)
The number of connected devices is expected to grow exponentially, reaching an estimated 50 billion by 2020. The Internet of Things has gained extensive attention, and the deployment of sensors and actuators is increasing at a rapid pace around the world. There is tremendous scope for more streamlined living through an increase of smart services, but this coincides with an increase in security and privacy concerns. There is therefore a need to perform a systematic review of information security governance frameworks for the Internet of Things (IoT). Objective – The aim of this paper is to present a systematic review of information security management frameworks related to the Internet of Things (IoT). It also discusses different information security frameworks that cover IoT models and deployments across different verticals. These frameworks are classified according to their area of coverage; the security executives and senior management of any enterprise that plans to start using smart services need to define a clear governance strategy concerning the security of their assets, and this systematic review will help them make better-informed decisions when investing in secure IoT deployments. Method – A set of standard criteria has been established to analyze which security framework is the best fit among these classified security structures, particularly for the Internet of Things (IoT). The first step in evaluating a security framework with the standard-criteria methodology is to identify resources; the security frameworks for IoT to be assessed are selected according to CCS. The second step is to develop a set of Security Targets (ST), the criteria applied to the target of evaluation (TOE). The third step is data extraction, the fourth step is data synthesis, and the final step is to write up the study as a report. Conclusion – After reviewing four information security risk frameworks, this study makes some suggestions related to information security risk governance in the Internet of Things (IoT). Organizations that have decided to move to smart devices have to define the benefits, risks and deployment processes needed to manage security risk. Information security risk policies should comply with an organization's IT policies and standards to protect the confidentiality, integrity and availability of information. The study identifies some of the main processes that are needed to manage security risks, and also offers suggestions that may assist companies working with information security frameworks in the Internet of Things (IoT).
5G-5 Dual-Annular-Ring CMUT Array for Forward-Looking IVUS Imaging
We investigate a dual-annular-ring CMUT array configuration for forward-looking intravascular ultrasound (FL-IVUS) imaging. The array consists of separate, concentric transmit and receive ring arrays built on the same silicon substrate. This configuration has the potential for independent optimization of each array and uses the silicon area more effectively without any particular drawback. We designed and fabricated a 1 mm diameter test array which consists of 24 transmit and 32 receive elements. We investigated synthetic phased array beamforming with a non-redundant subset of transmit-receive element pairs of the dual-annular-ring array. For imaging experiments, we designed and constructed a programmable FPGA-based data acquisition and phased array beamforming system. Pulse-echo measurements along with imaging simulations suggest that the dual-annular-ring array should provide performance suitable for real-time FL-IVUS applications.
Rapid tuning shifts in human auditory cortex enhance speech intelligibility
Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well-understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed a STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed 'perceptual enhancement' in understanding speech.
Electric polarizability of the neutron in dynamical quark ensembles
The background field method for measuring the electric polarizability of the neutron is adapted to the dynamical quark case, resulting in the calculation of (certain space-time integrals over) three- and four-point functions. Particular care is taken to disentangle polarizability effects from the effects of subjecting the neutron to a constant background gauge field; such a field is not a pure gauge on a finite lattice and engenders a mass shift of its own. At a pion mass of m_pi = 759 MeV, a small, slightly negative electric polarizability is found for the neutron.
Agent-Human Interactions in the Continuous Double Auction
The Continuous Double Auction (CDA) is the dominant market institution for real-world trading of equities, commodities, derivatives, etc. We describe a series of laboratory experiments that, for the first time, allow human subjects to interact with software bidding agents in a CDA. Our bidding agents use strategies based on extensions of the Gjerstad-Dickhaut and Zero-Intelligence-Plus algorithms. We find that agents consistently obtain significantly larger gains from trade than their human counterparts. This was unexpected because both humans and agents have approached theoretically perfect efficiency in prior all-human or all-agent CDA experiments. Another unexpected finding is persistent far-from-equilibrium trading, in sharp contrast to the robust convergence observed in previous all-human or all-agent experiments. We consider possible explanations for our empirical findings, and speculate on the implications for future agent-human interactions in electronic markets.
Features of Similarity
The metric and dimensional assumptions that underlie the geometric representation of similarity are questioned on both theoretical and empirical grounds. A new set-theoretical approach to similarity is developed in which objects are represented as collections of features, and similarity is described as a featurematching process. Specifically, a set of qualitative assumptions is shown to imply the contrast model, which expresses the similarity between objects as a linear combination of the measures of their common and distinctive features. Several predictions of the contrast model are tested in studies of similarity with both semantic and perceptual stimuli. The model is used to uncover, analyze, and explain a variety of empirical phenomena such as the role of common and distinctive features, the relations between judgments of similarity and difference, the presence of asymmetric similarities, and the effects of context on judgments of similarity. The contrast model generalizes standard representations of similarity data in terms of clusters and trees. It is also used to analyze the relations of prototypicality and family resemblance.
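The linear combination referred to above is the contrast model; in the paper's notation it reads:

```latex
S(a,b) \;=\; \theta\, f(A \cap B) \;-\; \alpha\, f(A - B) \;-\; \beta\, f(B - A),
\qquad \theta, \alpha, \beta \ge 0,
```

where A and B are the feature sets of objects a and b, and f is a non-negative (typically additive) measure over feature sets; allowing α ≠ β is what produces the asymmetric similarities discussed in the abstract.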
Unsupervised adaptive transfer learning for Steady-State Visual Evoked Potential brain-computer interfaces
Recent advances in signal processing for the detection of Steady-State Visual Evoked Potentials (SSVEPs) have moved away from traditional calibrationless methods, such as canonical correlation analysis, and towards algorithms that require substantial training data. In general, this has improved detection rates, but SSVEP-based brain-computer interfaces (BCIs) now suffer from the requirement of costly calibration sessions. Here, we address this issue by applying transfer learning techniques to SSVEP detection. Our novel Adaptive-C3A method incorporates an unsupervised adaptation algorithm that requires no calibration data. Our approach learns SSVEP templates for the target user and provides robust class separation in feature space, leading to increased classification accuracy. Our method achieves significant improvements in performance over a standard CCA method as well as a transfer variant of the state-of-the-art Combined-CCA method for calibrationless SSVEP detection.
An Image-based Deep Spectrum Feature Representation for the Recognition of Emotional Speech
The outputs of the higher layers of deep pre-trained convolutional neural networks (CNNs) have consistently been shown to provide a rich representation of an image for use in recognition tasks. This study explores the suitability of such an approach for speech-based emotion recognition tasks. First, we detail a new acoustic feature representation, denoted as deep spectrum features, derived from feeding spectrograms through a very deep image classification CNN and forming a feature vector from the activations of the last fully connected layer. We then compare the performance of our novel features with standardised brute-force and bag-of-audio-words (BoAW) acoustic feature representations for 2- and 5-class speech-based emotion recognition in clean, noisy and denoised conditions. The presented results show that image-based approaches are a promising avenue of research for speech-based recognition tasks. Key results indicate that deep spectrum features are comparable in performance with the other tested acoustic feature representations in noise-type-matched train-test conditions; however, the BoAW paradigm is better suited to cross-noise-type train-test conditions.
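A hedged sketch of the deep spectrum extraction step, assuming a log-mel spectrogram, a grayscale-to-RGB mapping, and an ImageNet-pretrained VGG16 from torchvision; the paper's exact CNN, colour mapping, and spectrogram settings may differ from these assumptions.

```python
import numpy as np
import torch
import torchvision.models as models
import librosa

# Hedged sketch of "deep spectrum" feature extraction: render a (log-mel)
# spectrogram as an image, push it through an ImageNet-pretrained CNN, and keep
# the activations of the last hidden fully connected layer as the feature vector.
def deep_spectrum_features(wav_path, sr=16000, n_mels=224):
    y, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    img = librosa.power_to_db(mel)
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)   # scale to [0, 1]
    img = np.stack([img] * 3)                                   # fake RGB channels
    img = torch.tensor(img, dtype=torch.float32).unsqueeze(0)
    img = torch.nn.functional.interpolate(img, size=(224, 224))

    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
    with torch.no_grad():
        feats = vgg.avgpool(vgg.features(img)).flatten(1)
        # classifier[:-1] stops before the final 1000-way layer, leaving the
        # activations of the second 4096-d fully connected layer.
        return vgg.classifier[:-1](feats).squeeze(0).numpy()    # 4096-d vector
```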
From the Semantic Web to social machines: A research challenge for AI on the World Wide Web
The advent of social computing on the Web has led to a new generation of Web applications that are powerful and world-changing. However, we argue that we are just at the beginning of this age of "social machines" and that their continued evolution and growth requires the cooperation of Web and AI researchers. In this paper, we show how the growing Semantic Web provides necessary support for these technologies, outline the challenges we see in bringing the technology to the next level, and propose some starting places for the research.
How monkeys see the eyes: cotton-top tamarins’ reaction to changes in visual attention and action
Among social species, the capacity to detect where another individual is looking is adaptive because gaze direction often predicts what an individual is attending to, and thus what its future actions are likely to be. We used an expectancy violation procedure to determine whether cotton-top tamarins (Saguinus oedipus oedipus) use the direction of another individual’s gaze to predict future actions. Subjects were familiarized with a sequence in which a human actor turned her attention toward one of two objects sitting on a table and then reached for that object. Following familiarization, subjects saw two test events. In one test event, the actor gazed at the new object and then reached for that object. From a human perspective, this event is considered consistent with the causal relationship between visual attention and subsequent action, that is, grabbing the object attended to. In the second test event, the actor gazed at the old object, but reached for the new object. This event is considered a violation of expectation. When the actor oriented with both her head-and-eyes, subjects looked significantly longer at the second test event in which the actor reached for the object to which she had not previously oriented. However, there was no difference in looking time between test events when the actor used only her eyes to orient. These findings suggest that tamarins are able to use some combination of head orientation and gaze direction, but not gaze direction alone, to predict the actions of a human agent.
SOI CMOS tunable capacitors for RF antenna aperture tuning
This paper provides a detailed analysis of a SOI CMOS tunable capacitor for antenna tuning. Design expressions for a switched capacitor network are given and quality factor of the whole network is expressed as a function of design parameters. Application to antenna aperture tuning is described by combining a 130 nm SOI CMOS tunable capacitor with a printed notch antenna. The proposed tunable multiband antenna can be tuned from 420 MHz to 790 MHz, with an associated radiation efficiency in the 33-73% range.
International taxation and multinational firm location decisions
Using a large international firm-level data set, we estimate separate effects of host and parent country taxation on the location decisions of multinational firms. Both types of taxation are estimated to have a negative impact on the location of new foreign subsidiaries. In fact, the impact of parent country taxation is estimated to be relatively large, possibly reflecting its international discriminatory nature. For the cross-section of multinational firms, we find that parent firms tend to be located in countries with a relatively low taxation of foreign-source income. Overall, our results show that parent-country taxation – despite the general possibility of deferral of taxation until income repatriation – is instrumental in shaping the structure of multinational enterprise. JEL Code: F23, G32, H25, R38.
Axiomatizing Kolmogorov Complexity
We revisit the axiomatization of Kolmogorov complexity given by Alexander Shen, currently available only in Russian language. We derive an axiomatization for conditional plain Kolmogorov complexity. Next we show that the axiomatic system given by Shen cannot be weakened (at least in any natural way). In addition we prove that the analogue of Shen’s axiomatic system fails to characterize prefix-free Kolmogorov complexity.
McLaren's Improved Snub Cube and Other New Spherical Designs in Three Dimensions
Evidence is presented to suggest that, in three dimensions, spherical 6-designs with N points exist for N = 24, 26, ≥ 28; 7-designs for N = 24, 30, 32, 34, ≥ 36; 8-designs for N = 36, 40, 42, ≥ 44; 9-designs for N = 48, 50, 52, ≥ 54; 10-designs for N = 60, 62, ≥ 64; 11-designs for N = 70, 72, ≥ 74; and 12-designs for N = 84, ≥ 86. The existence of some of these designs is established analytically, while others are given by very accurate numerical coordinates. The 24-point 7-design was first found by McLaren in 1963, and — although not identified as such by McLaren — consists of the vertices of an “improved” snub cube, obtained from Archimedes’ regular snub cube (which is only a 3-design) by slightly shrinking each square face and expanding each triangular face. 5-designs with 23 and 25 points are presented which, taken together with earlier work of Reznick, show that 5-designs exist for N = 12, 16, 18, 20, ≥ 22. It is conjectured, albeit with decreasing confidence for t ≥ 9, that these lists of t-designs are complete and that no others exist. One of the constructions gives a sequence of putative spherical t-designs with N = 12m points (m ≥ 2) where N = (1/2) t^2 (1 + o(1)) as t → ∞.
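For reference, the property being counted above: a finite set X of N points on the unit sphere S² is a spherical t-design if averaging over X reproduces the exact spherical average of every polynomial of degree at most t,

```latex
\frac{1}{N}\sum_{x \in X} p(x) \;=\; \frac{1}{4\pi}\int_{S^{2}} p(\omega)\, d\omega
\qquad \text{for all polynomials } p \text{ with } \deg p \le t .
```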
Clustering by passing messages between data points.
Clustering data by identifying a subset of representative examples is important for processing sensory signals and detecting patterns in data. Such "exemplars" can be found by randomly choosing an initial subset of data points and then iteratively refining it, but this works well only if that initial choice is close to a good solution. We devised a method called "affinity propagation," which takes as input measures of similarity between pairs of data points. Real-valued messages are exchanged between data points until a high-quality set of exemplars and corresponding clusters gradually emerges. We used affinity propagation to cluster images of faces, detect genes in microarray data, identify representative sentences in this manuscript, and identify cities that are efficiently accessed by airline travel. Affinity propagation found clusters with much lower error than other methods, and it did so in less than one-hundredth the amount of time.
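The message-passing updates summarized above have a compact standard form; the sketch below implements the usual responsibility and availability updates with damping. It is a didactic re-implementation, not the authors' code; scikit-learn's AffinityPropagation is the practical route.

```python
import numpy as np

# Compact sketch of affinity propagation's two message updates (with damping).
def affinity_propagation(S, iters=200, damping=0.5):
    n = S.shape[0]
    R = np.zeros((n, n))   # responsibilities r(i,k)
    A = np.zeros((n, n))   # availabilities  a(i,k)
    for _ in range(iters):
        # r(i,k) <- s(i,k) - max_{k'!=k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first_max = AS[np.arange(n), idx].copy()
        AS[np.arange(n), idx] = -np.inf
        second_max = AS.max(axis=1)
        Rnew = S - first_max[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second_max
        R = damping * R + (1 - damping) * Rnew

        # a(i,k) <- min(0, r(k,k) + sum over i' not in {i,k} of max(0, r(i',k)))
        # a(k,k) <- sum over i' != k of max(0, r(i',k))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0)[None, :] - Rp
        dA = Anew.diagonal().copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, dA)
        A = damping * A + (1 - damping) * Anew

    return np.argmax(A + R, axis=1)   # each point's chosen exemplar

# Toy usage: negative squared distances as similarities, medians as preferences.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1]])
S = -((X - X.T) ** 2)
np.fill_diagonal(S, np.median(S))
print(affinity_propagation(S))
```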
Reduction of Conjunctival Fibrosis After Trabeculectomy Using Topical α-Lipoic Acid in Rabbit Eyes
PURPOSE To evaluate the efficacy of α-lipoic acid (ALA) in reducing scarring after trabeculectomy. MATERIALS AND METHODS Eighteen adult New Zealand white rabbits underwent trabeculectomy. During trabeculectomy, thin sponges were placed between the sclera and Tenon's capsule for 3 minutes; saline solution, mitomycin-C (MMC) and ALA were applied to the control group (CG) (n=6 eyes), MMC group (MMCG) (n=6 eyes), and ALA group (ALAG) (n=6 eyes), respectively. After surgery, topical saline and ALA were applied for 28 days to the control and ALAGs, respectively. Filtering bleb patency was evaluated by using 0.1% trypan blue. Hematoxylin and eosin and Masson trichrome staining were used to assess toxicity, total cellularity, and collagen organization; α-smooth muscle actin immunohistochemistry staining was performed for myofibroblast phenotype identification. RESULTS Clinical evaluation showed that all 6 blebs (100%) of the CG had failed, whereas there were only 2 failures (33%) in the ALAG and no failures in the MMCG on day 28. Histologic evaluation showed significantly lower inflammatory cell infiltration in the ALAGs and CGs than in the MMCG. Toxicity change was more significant in the MMCG than in the control and ALAGs. Collagen was better organized in the ALAG than in the control and MMCGs. In the immunohistochemistry evaluation, ALA significantly reduced the population of cells expressing α-smooth muscle actin. CONCLUSIONS ALA prevents and/or reduces fibrosis by inhibition of inflammation pathways, revascularization, and accumulation of extracellular matrix. It can be used as an agent for delaying tissue regeneration and for providing a more functional, permanent fistula.
The Neural Basis of Somatosensory Remapping Develops in Human Infancy
When we sense a touch, our brains take account of our current limb position to determine the location of that touch in external space [1, 2]. Here we show that changes in the way the brain processes somatosensory information in the first year of life underlie the origins of this ability [3]. In three experiments we recorded somatosensory evoked potentials (SEPs) from 6.5-, 8-, and 10-month-old infants while presenting vibrotactile stimuli to their hands across uncrossed- and crossed-hands postures. At all ages we observed SEPs over central regions contralateral to the stimulated hand. Somatosensory processing was influenced by arm posture from 8 months onward. At 8 months, posture influenced mid-latency SEP components, but by 10 months effects were observed at early components associated with feed-forward stages of somatosensory processing. Furthermore, sight of the hands was a necessary pre-requisite for somatosensory remapping at 10 months. Thus, the cortical networks [4] underlying the ability to dynamically update the location of a perceived touch across limb movements become functional during the first year of life. Up until at least 6.5 months of age, it seems that human infants' perceptions of tactile stimuli in the external environment are heavily dependent upon limb position.
A Simple Digital Power-Factor Correction Rectifier Controller
This paper introduces a single-phase digital power-factor correction (PFC) control approach that requires no input voltage sensing or explicit current-loop compensation, yet results in low-harmonic operation over a universal input voltage range and loads ranging from high-power operation in continuous conduction mode down to near-zero load. The controller is based on low-resolution A/D converters and a digital pulse-width modulator, requires no microcontroller or DSP programming, and is well suited for a simple, low-cost integrated-circuit realization, or as a hardware description language core suitable for integration with other power control and power management functions. Experimental verification results are shown for a 300-W boost PFC rectifier.
Improving Language Understanding by Generative Pre-Training
Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).
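The task-aware input transformations mentioned above amount to flattening structured inputs into a single ordered token sequence bracketed by special tokens, so the pre-trained model needs no architectural changes. The sketch below is illustrative; the token names and the handling of the similarity task are assumptions, not the paper's exact symbols.

```python
# Hedged sketch of task-aware input transformations for fine-tuning.
# START, DELIM, and EXTRACT are hypothetical special-token names.
START, DELIM, EXTRACT = "<s>", "<$>", "<e>"

def entailment_input(premise_tokens, hypothesis_tokens):
    """Concatenate premise and hypothesis into one sequence for the classifier head."""
    return [START] + premise_tokens + [DELIM] + hypothesis_tokens + [EXTRACT]

def similarity_inputs(a_tokens, b_tokens):
    """Similarity has no inherent ordering, so both orderings are encoded;
    their final representations would be combined before the linear output layer."""
    return (entailment_input(a_tokens, b_tokens),
            entailment_input(b_tokens, a_tokens))
```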
Simulation of e-commerce diffusion model based on Netlogo
E-commerce is a paradigm shift and a disruptive innovation that is radically changing the traditional way of doing business. Although the benefits of e-commerce diffusion among adopters have been published, little research on the adoption and diffusion algorithms of e-commerce applications has been reported. This paper first reviews the related literature on e-commerce diffusion. An e-commerce diffusion model and algorithm are then developed. Finally, the dynamic process of the e-commerce diffusion algorithm is simulated in NetLogo. The simulation results can help e-commerce adopters improve their level of e-commerce adoption.
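As a rough illustration of this kind of agent-based diffusion dynamic, the sketch below implements a simple adoption rule in Python rather than NetLogo. The ring-lattice neighbourhood, the adoption probability p + q × (fraction of adopting neighbours), and all parameter values are assumptions for illustration only, not the model from the paper.

```python
# Minimal agent-based diffusion sketch (Bass-like local influence rule).
import random

def simulate_diffusion(n=200, neighbours=6, p=0.01, q=0.3, steps=50, seed=1):
    rng = random.Random(seed)
    adopted = [False] * n          # adoption state of each agent on a ring lattice
    history = []
    for _ in range(steps):
        current = adopted[:]       # update synchronously from last step's state
        for i in range(n):
            if current[i]:
                continue
            nbrs = [(i + d) % n for d in range(-neighbours // 2, neighbours // 2 + 1) if d]
            frac = sum(current[j] for j in nbrs) / len(nbrs)
            if rng.random() < p + q * frac:   # innovation + imitation pressure
                adopted[i] = True
        history.append(sum(adopted))          # cumulative adopters per step
    return history
```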
Comprehensive Review of Neural Network-Based Prediction Intervals and New Advances
This paper evaluates the four leading techniques proposed in the literature for the construction of prediction intervals (PIs) for neural network point forecasts. The delta, Bayesian, bootstrap, and mean-variance estimation (MVE) methods are reviewed, and their performance in generating high-quality PIs is compared. PI-based measures are proposed and applied for the objective and quantitative assessment of each method's performance. A selection of 12 synthetic and real-world case studies is used to examine each method's performance for PI construction. The comparison is performed on the basis of the quality of the generated PIs, the repeatability of the results, the computational requirements, and the variability of PI widths with respect to data uncertainty. The results indicate that: 1) the delta and Bayesian methods are the best in terms of quality and repeatability, and 2) the MVE and bootstrap methods are the best in terms of low computational load and low width variability of PIs. This paper also introduces the concept of combinations of PIs and proposes a new method for generating combined PIs from the traditional PIs. A genetic algorithm is applied to adjust the combiner parameters through minimization of a PI-based cost function subject to two sets of restrictions. It is shown that the quality of PIs produced by the combiners is dramatically better than the quality of PIs obtained from each individual method.
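A PI-based cost of the general kind minimized by such a combiner can be sketched as a trade-off between coverage and width. The penalty form and the constants eta and mu below are illustrative assumptions, not the paper's exact measure.

```python
# Hedged sketch of a coverage/width cost for prediction intervals.
import numpy as np

def pi_cost(y, lower, upper, eta=50.0, mu=0.95):
    y, lower, upper = map(np.asarray, (y, lower, upper))
    covered = (y >= lower) & (y <= upper)
    picp = covered.mean()                                   # PI coverage probability
    nmpiw = (upper - lower).mean() / (y.max() - y.min())    # normalized mean PI width
    # Penalize intervals whose coverage falls below the nominal level mu.
    penalty = np.exp(-eta * (picp - mu)) if picp < mu else 1.0
    return nmpiw * penalty                                  # lower is better
```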
Group Sparse Coding
Bag-of-words document representations are often used in text, image and video processing. While it is relatively easy to determine a suitable word dictionary for text documents, there is no simple mapping from raw images or videos to dictionary terms. The classical approach builds a dictionary using vector quantization over a large set of useful visual descriptors extracted from a training set, and uses a nearest-neighbor algorithm to count the number of occurrences of each dictionary word in documents to be encoded. More robust approaches have been proposed recently that represent each visual descriptor as a sparse weighted combination of dictionary words. While favoring a sparse representation at the level of visual descriptors, those methods however do not ensure that images have sparse representations. In this work, we use mixed-norm regularization to achieve sparsity at the image level as well as a small overall dictionary. This approach can also be used to encourage using the same dictionary words for all the images in a class, providing a discriminative signal in the construction of image representations. Experimental results on a benchmark image classification dataset show that when compact image or dictionary representations are needed for computational efficiency, the proposed approach yields better mean average precision in classification.
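A minimal sketch of mixed-norm (l1/l2) regularized coding of a single descriptor over a fixed dictionary is given below; whole groups of coefficients are switched on or off by the group-lasso penalty. The proximal-gradient solver, step size, and regularization weight are illustrative choices rather than the authors' algorithm.

```python
# Hedged sketch: encode descriptor x over dictionary D (d x k) with a group-lasso penalty
#   minimize 0.5 * ||x - D a||^2 + lam * sum_g ||a_g||_2
import numpy as np

def group_soft_threshold(a, groups, t):
    """Proximal operator of the l1/l2 penalty: shrink each group's norm by t."""
    out = np.zeros_like(a)
    for g in groups:
        norm = np.linalg.norm(a[g])
        if norm > t:
            out[g] = (1 - t / norm) * a[g]
    return out

def group_sparse_code(x, D, groups, lam=0.1, iters=200):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth term
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x)            # gradient of 0.5 * ||x - D a||^2
        a = group_soft_threshold(a - grad / L, groups, lam / L)
    return a
```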
Classical grasp quality evaluation: New algorithms and theory
This paper investigates theoretical properties of a well-known L1 grasp quality measure Q whose approximation Q^-_l is commonly used for the evaluation of grasps, and where the precision of Q^-_l depends on an approximation of a cone by a convex polyhedral cone with l edges. We prove the Lipschitz continuity of Q and provide an explicit Lipschitz bound that can be used to infer the stability of grasps lying in a neighbourhood of a known grasp. We think of Q^-_l as a lower bound estimate to Q and describe an algorithm for computing an upper bound Q^+. We provide worst-case error bounds relating Q and Q^-_l. Furthermore, we develop a novel grasp hypothesis rejection algorithm which can exclude unstable grasps much faster than current implementations. Our algorithm is based on a formulation of the grasp quality evaluation problem as an optimization problem, and we show how our algorithm can be used to improve the efficiency of sampling-based grasp hypothesis generation methods.
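For context, the classical lower-bound estimate Q^-_l can be computed by linearizing each friction cone with l edges and finding the radius of the largest origin-centred ball inside the convex hull of the contact wrenches. The sketch below assumes point contacts given as (position, inward unit normal) pairs and unit torque scaling; it is an illustrative computation, not the paper's implementation.

```python
# Hedged sketch of the L1 grasp quality lower bound Q^-_l (Ferrari-Canny style).
import numpy as np
from scipy.spatial import ConvexHull

def q_lower(contacts, mu=0.5, l=8):
    wrenches = []
    for p, n in contacts:                       # contact point p, inward unit normal n
        p, n = np.asarray(p, float), np.asarray(n, float)
        t1 = np.cross(n, [1.0, 0.0, 0.0])
        if np.linalg.norm(t1) < 1e-8:
            t1 = np.cross(n, [0.0, 1.0, 0.0])
        t1 /= np.linalg.norm(t1)
        t2 = np.cross(n, t1)
        for k in range(l):                      # l edges of the linearized friction cone
            a = 2 * np.pi * k / l
            f = n + mu * (np.cos(a) * t1 + np.sin(a) * t2)
            f /= np.linalg.norm(f)
            wrenches.append(np.hstack([f, np.cross(p, f)]))
    hull = ConvexHull(np.array(wrenches))
    # Each facet equation is (unit normal, offset); -offset is the distance from the
    # origin to that facet. A negative result means the origin lies outside the hull,
    # i.e. the grasp is not force-closure.
    return float(-hull.equations[:, -1].max())
```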
Experimental Investigation on Outdoor Insulation for DC Transmission Line at High Altitudes
This paper presents the results of a study on the contamination characteristics of porcelain, glass and composite insulators in natural high-altitude areas (1970 m). The tests used the solid layer method, in which a contaminant solution consisting of sodium chloride, kaolin powder and water was used to coat the insulator surfaces. The insulators were wetted by steam fog in a fog chamber. The 50% withstand voltage (U50%) was determined by the up-and-down method. Experimental results indicate that insulator profiles have a significant effect on the flashover voltages. The flashover voltages for the bell-type porcelain insulator are higher than for the tri-shed porcelain insulator. The glass insulator with larger spacing has a higher flashover voltage than that with smaller spacing. The flashover characteristics of composite insulators can be improved by about 20% by optimizing the shed parameters. In addition, the suspension patterns (I-, V- and Y-string) also have a significant influence on the porcelain insulator flashover voltages. The Y-string configuration leads to much lower flashover voltages than the I- and V-string configurations. The flashover voltages for V-string insulators are influenced by the angle between the two insulator strings; the flashover voltage for a 120° V-string is the highest, exceeding that of the I-string by 7.5%.
Mantle Convection Modeling with Viscoelastic/Brittle Lithosphere: Numerical Methodology and Plate Tectonic Modeling
The earth’s tectonic plates are strong, viscoelastic shells which make up the outermost part of a thermally convecting, predominantly viscous layer. Brittle failure of the lithosphere occurs when stresses are high. In order to build a realistic simulation of the planet’s evolution, the complete viscoelastic/brittle convection system needs to be considered. A particle-in-cell finite element method is demonstrated which can simulate very large deformation viscoelasticity with a strain-dependent yield stress. This is applied to a plate-deformation problem. Numerical accuracy is demonstrated relative to analytic benchmarks, and the characteristics of the method are discussed.
Privado: Practical and Secure DNN Inference
Recently, cloud providers have extended support for trusted hardware primitives such as Intel SGX. Simultaneously, the field of deep learning is seeing enormous innovation and increase in adoption. In this paper, we therefore ask the question: “Can third-party cloud services use SGX to provide practical, yet secure DNN inference-as-a-service?” Our work addresses the three main challenges that SGX-based DNN inferencing faces, namely, security, ease-of-use, and performance. We first demonstrate that side-channel based attacks on DNN models are indeed possible. We show that, by observing access patterns, we can recover inputs to the DNN model. This motivates the need for PRIVADO, a system we have designed for secure inference-as-a-service. PRIVADO is input-oblivious: it transforms any deep learning framework written in C/C++ to be free of input-dependent access patterns. PRIVADO is fully automated and has a low TCB: with zero developer effort, given an ONNX description, it generates compact C code for the model which can run within SGX enclaves. PRIVADO has low performance overhead: we have used PRIVADO with Torch, and have shown its overhead to be 20.77% on average on 10 contemporary networks.
Churn Prediction
The rapid growth of the market in every sector is leading to a bigger subscriber base for service providers. More competitors, new and innovative business models and better services are increasing the cost of customer acquisition. In this environment service providers have realized the importance of retaining existing customers. Therefore, providers are forced to put more effort into the prediction and prevention of churn. This paper aims to present commonly used data mining techniques for the identification of churn. Based on historical data, these methods try to find patterns which can point out possible churners. Well-known techniques used for this are regression analysis, decision trees, neural networks and rule-based learning. Section 1 gives a short introduction describing the current state of the market; Section 2 defines customer churn, its types and the importance of identifying churners. Section 3 reviews the different techniques used, pointing out their advantages and disadvantages. Finally, the current state of research and new emerging algorithms are presented.
The Influence of the Avatar on Online Perceptions of Anthropomorphism, Androgyny, Credibility, Homophily, and Attraction
It has become increasingly common for websites and computer media to provide computer generated visual images, called avatars, to represent users and bots during online interactions. In this study, participants (N=255) evaluated a series of avatars in a static context in terms of their androgyny, anthropomorphism, credibility, homophily, attraction, and the likelihood they would choose them during an interaction. The responses to the images were consistent with what would be predicted by uncertainty reduction theory. The results show that the masculinity or femininity (lack of androgyny) of an avatar, as well as anthropomorphism, significantly influence perceptions of avatars. Further, more anthropomorphic avatars were perceived to be more attractive and credible, and people were more likely to choose to be represented by them. Participants reported masculine avatars as less attractive than feminine avatars, and most people reported a preference for human avatars that matched their gender. Practical and theoretical implications of these results for users, designers, and researchers of avatars are discussed.
Measuring User Influence in Twitter: The Million Follower Fallacy
Directed links in social media could represent anything from intimate friendships to common interests, or even a passion for breaking news or celebrity gossip. Such directed links determine the flow of information and hence indicate a user’s influence on others—a concept that is crucial in sociology and viral marketing. In this paper, using a large amount of data collected from Twitter, we present an in-depth comparison of three measures of influence: indegree, retweets, and mentions. Based on these measures, we investigate the dynamics of user influence across topics and time. We make several interesting observations. First, popular users who have high indegree are not necessarily influential in terms of spawning retweets or mentions. Second, most influential users can hold significant influence over a variety of topics. Third, influence is not gained spontaneously or accidentally, but through concerted effort such as limiting tweets to a single topic. We believe that these findings provide new insights for viral marketing and suggest that topological measures such as indegree alone reveal very little about the influence of a user.
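The comparison of the three influence measures can be sketched as a rank-correlation computation over per-user counts. The data layout below (an edge list of follows, and tweet dictionaries with retweet_of and mentions fields) is a hypothetical format chosen for illustration, not the dataset schema used in the paper.

```python
# Hedged sketch: rank users by indegree, retweets, and mentions, then compare rankings.
from collections import Counter
from scipy.stats import spearmanr

def influence_rank_agreement(follows, tweets):
    indegree = Counter(dst for _, dst in follows)                     # follower counts
    retweets = Counter(t["retweet_of"] for t in tweets if t.get("retweet_of"))
    mentions = Counter(u for t in tweets for u in t.get("mentions", []))
    users = sorted(set(indegree) | set(retweets) | set(mentions))
    ind = [indegree[u] for u in users]
    rt = [retweets[u] for u in users]
    mn = [mentions[u] for u in users]
    return {
        "indegree_vs_retweets": spearmanr(ind, rt)[0],
        "indegree_vs_mentions": spearmanr(ind, mn)[0],
        "retweets_vs_mentions": spearmanr(rt, mn)[0],
    }
```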
Japanese POEMS syndrome with Thalidomide (J-POST) Trial: study protocol for a phase II/III multicentre, randomised, double-blind, placebo-controlled trial
INTRODUCTION Polyneuropathy, organomegaly, endocrinopathy, M-protein and skin changes (POEMS) syndrome is a fatal systemic disorder associated with plasma cell dyscrasia and the overproduction of the vascular endothelial growth factor (VEGF). Recently, the prognosis of POEMS was substantially improved by introduction of therapeutic intervention for myeloma. However, no randomised clinical trial has been performed because of the rarity and severity of the disease. METHODS AND ANALYSIS The Japanese POEMS syndrome with Thalidomide (J-POST) Trial is a phase II/III multicentre, double-blinded, randomised, controlled trial that aims to evaluate the efficacy and safety of a 24-week treatment with thalidomide in POEMS syndrome, with an additional 48-week open-label safety study. Adults with POEMS syndrome who have no indication for transplantation are assessed for eligibility at 12 tertiary neurology centres in Japan. Patients who satisfy the eligibility criteria are randomised (1:1) to receive thalidomide (100-300 mg daily) plus dexamethasone (12 mg/m(2) on days 1-4 of a 28-day cycle) or placebo plus dexamethasone. Both treatments were administered for 24 weeks (six cycles; randomised comparative study period). Patients who complete the randomised study period or show subacute deterioration during the randomised period participate in the subsequent 48-week open-label safety study (long-term safety period). The primary end point of the study is the reduction rate of serum VEGF levels at 24 weeks. ETHICS AND DISSEMINATION The protocol was approved by the Institutional Review Board of each hospital. The trial was notified and registered at the Pharmaceutical and Medical Devices Agency, Japan (No. 22-1716). The J-POST Trial is currently ongoing and is due to finish in August 2015. The findings of this trial will be disseminated through peer-reviewed publications and conference presentations and will also be disseminated to participants. TRIAL REGISTRATION NUMBER UMIN000004179 and JMA-IIA00046.
Spring Framework : A Companion to JavaEE
This paper presents the ideas of the Spring framework, which is widely used in building enterprise applications. Considering the present situation where applications are developed using the traditional EJB model, the Spring framework insists that ordinary Java beans can be used with slight modifications. This framework can be used with J2EE to make it easier to develop applications. This paper presents an architectural overview of Spring along with the features that have made the framework useful. The integration of various frameworks for an e-commerce system is also discussed, as is the Spring MVC framework. This paper also proposes an architecture for a website based on the Spring, Hibernate and Struts frameworks. Keywords: Spring, IoC, AOP, E-commerce, MVC
Efficacy and safety of 10-mg azilsartan compared with 8-mg candesartan cilexetil in Japanese patients with hypertension: a randomized crossover non-inferiority trial
We investigated whether 10 mg per day of azilsartan, one-half of the normal dosage, would be non-inferior to 8 mg per day of candesartan cilexetil for controlling blood pressure in Japanese patients with hypertension. In this open-label, randomized, crossover trial, 309 hypertensive Japanese adults treated with 8-mg candesartan cilexetil were randomized into two arms and received either 10-mg azilsartan or 8-mg candesartan cilexetil in a crossover manner. The primary efficacy outcome was systolic blood pressure, and the margin of non-inferiority was set at 2.5 mm Hg. The participants were 67±11 years old, and 180 (58%) were male. The baseline systolic and diastolic blood pressure levels were 127.1±13.2 and 69.7±11.2 mm Hg, respectively. During the study period, the difference in systolic blood pressure between the treatments with 10-mg azilsartan and 8-mg candesartan cilexetil was −1.7 mm Hg, with the two-sided 95% confidence interval (CI) ranging from −3.2 to −0.2 mm Hg. The upper boundary of the 95% CI was below the margin of 2.5 mm Hg, confirming the non-inferiority of 10-mg azilsartan to 8-mg candesartan cilexetil. The difference also reached significance (P=0.037). The corresponding difference in diastolic blood pressure was −1.4 (95% CI: −2.4 to −0.4) mm Hg (P=0.006). Treatment with 10-mg azilsartan was similar to 8-mg candesartan cilexetil in its association with rare adverse events. In conclusion, 10-mg azilsartan was non-inferior to 8-mg candesartan cilexetil for controlling systolic blood pressure in Japanese hypertensive patients already being treated with 8-mg candesartan cilexetil.
The Neural Network Pushdown Automaton: Model, Stack and Learning Simulations
In order for neural networks to learn complex languages or grammars, they must have sufficient computational power or resources to recognize or generate such languages. Though many approaches have been discussed, one obvious approach to enhancing the processing power of a recurrent neural network is to couple it with an external stack memory, in effect creating a neural network pushdown automaton (NNPDA). This paper discusses this NNPDA in detail: its construction, how it can be trained, and how useful symbolic information can be extracted from the trained network. In order to couple the external stack to the neural network, an optimization method is developed which uses an error function that connects the learning of the state automaton of the neural network to the learning of the operation of the external stack. To minimize the error function using gradient descent learning, an analog stack is designed such that the action and storage of information in the stack are continuous. One interpretation of a continuous stack is the probabilistic storage of and action on data. After training on sample strings of an unknown source grammar, a quantization procedure extracts from the analog stack and neural network a discrete pushdown automaton (PDA). Simulations show that in learning deterministic context-free grammars (the balanced parenthesis language, 1^n 0^n, and the deterministic palindrome language) the extracted PDA is correct in the sense that it can correctly recognize unseen strings of arbitrary length. In addition, the extracted PDAs can be shown to be identical or equivalent to the PDAs of the source grammars which were used to generate the training strings.
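The continuous (analog) stack can be illustrated with a small data structure: a scalar action in [-1, 1] emitted by the network pushes a symbol with fractional "thickness" when positive and pops that much total thickness when negative, and reading returns the depth-weighted mixture of the topmost unit length. This is a sketch of the general idea, not the paper's exact formulation.

```python
# Hedged sketch of a continuous stack for an NNPDA-style controller.
import numpy as np

class ContinuousStack:
    def __init__(self, dim):
        self.dim = dim
        self.items = []                      # list of (symbol vector, thickness) pairs

    def update(self, action, symbol):
        if action > 0:                       # push `symbol` with thickness `action`
            self.items.append((np.asarray(symbol, float), float(action)))
        elif action < 0:                     # pop a total thickness of |action|
            remaining = -float(action)
            while remaining > 0 and self.items:
                sym, thick = self.items[-1]
                if thick <= remaining:
                    self.items.pop()
                    remaining -= thick
                else:
                    self.items[-1] = (sym, thick - remaining)
                    remaining = 0.0

    def read(self):
        # Depth-weighted average of the topmost unit thickness of the stack.
        out, depth = np.zeros(self.dim), 0.0
        for sym, thick in reversed(self.items):
            take = min(thick, 1.0 - depth)
            out += take * sym
            depth += take
            if depth >= 1.0:
                break
        return out
```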
Comparison of HEVC coding schemes for tile-based viewport-adaptive streaming of omnidirectional video
Virtual reality applications make use of 360-degree panoramic or omnidirectional video with high resolution and high frame rate in order to create an immersive experience for the user. The user views only a portion of the captured 360-degree scene at each time instant, hence streaming the whole omnidirectional video at the highest quality is not efficient. In order to alleviate the problem of bandwidth wastage, viewport-adaptive encoding and streaming schemes have been proposed. In these schemes, the part of the captured scene that is within the viewer's field of view is delivered at the highest quality while the rest of the scene is delivered at a lower quality. In this work, three tile-based viewport-adaptive methods using motion-constrained tile sets (MCTS), region-of-interest scalability and a simulcast approach have been studied for streaming omnidirectional content. In the performed experiments with various tiling arrangements, the MCTS-based scheme required the highest bitrate compared to the other methods. The scalable coding scheme provided the highest performance in terms of streaming bitrate savings, on average up to 53% and 35% compared to streaming the whole omnidirectional video and the MCTS-based method, respectively.
Seborrheic Keratoses as the First Sign of Bladder Carcinoma: Case Report of Leser-Trélat Sign in a Rare Association with Urinary Tract Cancer
Introduction. Skin disorders can be the first manifestation of occult diseases. The recognition of typical paraneoplastic dermatoses may anticipate the cancer diagnosis and improve its prognosis. Although rarely observed, the sudden appearance and/or rapid increase in number and size of seborrheic keratoses can be associated with malignant neoplasms, known as the sign of Leser-Trélat. The aim of this report is to describe a case of a patient whose recently erupted seborrheic keratoses led to investigation and consequent diagnosis of bladder cancer. Case Presentation. A 67-year-old man was admitted to the intensive care unit due to an exacerbation of chronic obstructive pulmonary disease (COPD). On physical examination, multiple seborrheic keratoses on the back of the hands, elbows, and trunk were observed; the patient had a 4-month history of these lesions yet was asymptomatic. The possibility of Leser-Trélat syndrome justified the investigation for neoplasia, and a bladder carcinoma was detected by CT scan. The patient denied previous hematuria or any other related symptoms. Many of the lesions regressed during oncologic treatment. Conclusion. Despite criticism of the validity of the sign of Leser-Trélat, our patient fulfills the description of the disease, though urinary malignancy is a rare association. This corroborates the need for further investigation when there is a possibility of a paraneoplastic manifestation.