title | abstract
---|---|
Rhinoplasty for Middle Eastern noses. | BACKGROUND
Rhinoplasty remains one of the most challenging operations, as exemplified in the Middle Eastern patient. The ill-defined, droopy tip, wide and high dorsum, and thick skin envelope mandate meticulous attention to preoperative evaluation and efficacious yet safe surgical maneuvers. The authors provide a systematic approach to evaluation and improvement of surgical outcomes in this patient population.
METHODS
A retrospective, 3-year review identified patients of Middle Eastern heritage who underwent primary rhinoplasty and those who did not but had nasal photographs. Photographs and operative records (when applicable) were reviewed. Specific nasal characteristics, component-directed surgical techniques, and aesthetic outcomes were delineated.
RESULTS
The Middle Eastern nose has a combination of specific nasal traits, with some variability, including thick/sebaceous skin (excess fibrofatty tissue), high/wide dorsum with cartilaginous and bony humps, ill-defined nasal tip, weak/thin lateral crura relative to the skin envelope, nostril-tip imbalance, acute nasolabial and columellar-labial angles, and a droopy/hyperdynamic nasal tip. An aggressive yet nondestructive surgical approach to address the nasal imbalance often requires soft-tissue debulking, significant cartilaginous framework modification (with augmentation/strengthening), tip refinement/rotation/projection, low osteotomies, and depressor septi nasi muscle treatment. The most common postoperative defects were related to soft-tissue scarring, thickened skin envelope, dorsum irregularities, and prolonged edema in the supratip/tip region.
CONCLUSIONS
It is critical to improve the strength of the cartilaginous framework with respect to the thick, noncontractile skin/soft-tissue envelope, particularly when moderate to large dorsal reduction is required. A multitude of surgical maneuvers are often necessary to address all the salient characteristics of the Middle Eastern nose and to produce the desired aesthetic result. |
Text extraction from texture images using masked signal decomposition | Text extraction is an important problem in image processing, with applications from optical character recognition to autonomous driving. Most traditional text segmentation algorithms consider separating text from a simple background whose color usually differs from that of the text. In this work we consider separating text from a textured background that has a color similar to the text. We look at this problem from a signal decomposition perspective and consider a more realistic scenario in which signal components are overlaid on top of each other (instead of added together). When the signals are overlaid, separating the components requires finding a binary mask that shows the support of each component. Because solving directly for the binary mask is intractable, we relax the problem to an approximate continuous one and solve it with an alternating optimization method. We show that the proposed algorithm achieves significantly better results than other recent works on several challenging images. |
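The alternating scheme sketched in this abstract can be illustrated numerically. Below is a minimal sketch, assuming the model y = w·t + (1−w)·b with a text layer t, a smooth background b, and a relaxed mask w in [0,1]; the particular regularizers (a Laplacian smoothness term on the background, a w(1−w) term pushing the mask toward binary) and all step sizes are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def masked_decompose(y, iters=200, lam_smooth=1.0, lam_mask=0.1, lr=0.1):
    """Toy alternating optimization for y = w*t + (1-w)*b, where b is a
    smooth background, t the text layer, and w in [0,1] a relaxed
    binary support mask."""
    y = y.astype(float)
    b = y.copy()                 # background init
    t = np.zeros_like(y)         # text layer init
    w = np.full_like(y, 0.5)     # relaxed mask init
    for _ in range(iters):
        r = w * t + (1 - w) * b - y          # reconstruction residual
        # gradient step on the text layer
        t -= lr * (w * r)
        # gradient step on the background, with a Laplacian smoothness penalty
        lap = (np.roll(b, 1, 0) + np.roll(b, -1, 0)
               + np.roll(b, 1, 1) + np.roll(b, -1, 1) - 4 * b)
        b -= lr * ((1 - w) * r - lam_smooth * lap)
        # gradient step on the mask; the lam_mask term (gradient of w(1-w))
        # pushes entries toward {0,1}, then project back to [0,1]
        w -= lr * (r * (t - b) + lam_mask * (1 - 2 * w))
        w = np.clip(w, 0.0, 1.0)
    return b, t, w
```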
A New Local Distance-Based Outlier Detection Approach for Scattered Real-World Data | Detecting outliers which are grossly different from or inconsistent with the remaining dataset is a major challenge in real-world KDD applications. Existing outlier detection methods are ineffective on scattered real-world datasets due to implicit data patterns and parameter setting issues. We define a novel Local Distance-based Outlier Factor (LDOF) that addresses these issues and measures the outlier-ness of objects in scattered datasets. LDOF uses the relative location of an object to its neighbours to determine the degree to which the object deviates from its neighbourhood. Properties of LDOF are theoretically analysed, including LDOF’s lower bound and its false-detection probability, as well as parameter settings. In order to facilitate parameter setting in real-world applications, we employ a top-n technique in our outlier detection approach, where only the objects with the highest LDOF values are regarded as outliers. Compared to conventional approaches (such as top-n KNN and top-n LOF), our method top-n LDOF is more effective at detecting outliers in scattered data. It is also easier to set parameters, since its performance is relatively stable over a large range of parameter values, as illustrated by experimental results on both real-world and synthetic datasets. |
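A brute-force sketch of LDOF as described above (the ratio of an object's mean k-NN distance to the mean pairwise distance among those neighbours), assuming Euclidean distance, which the abstract does not itself specify:

```python
import numpy as np

def ldof(X, k=10):
    """Local Distance-based Outlier Factor: ratio of an object's mean
    k-NN distance to the mean pairwise (inner) distance among its k
    nearest neighbours. Top-n highest scores are flagged as outliers."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    n = len(X)
    scores = np.empty(n)
    for i in range(n):
        nn = np.argsort(D[i])[1:k + 1]          # skip the point itself
        d_knn = D[i, nn].mean()                  # mean distance to the k-NN
        inner = D[np.ix_(nn, nn)]
        d_inner = inner.sum() / (k * (k - 1))    # mean pairwise distance among the k-NN
        scores[i] = d_knn / d_inner
    return scores
```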
Aspect-Based Question Generation | Asking questions is an important ability for a chatbot. Although there is existing work on question generation from a piece of descriptive text, it remains a very challenging problem. In this paper, we consider a new question generation problem which also requires the input of a target aspect in addition to a piece of descriptive text. The key reason for this new problem is that practical applications have shown that useful questions need to be targeted toward some relevant aspects; one almost never asks a random question in a conversation. Given a descriptive text, it is often possible to ask many types of questions, so generating a question without knowing what it is about is of limited use. To solve this problem, we propose a novel neural network which is able to generate aspect-based questions. One major advantage of this model is that it can be trained directly using a question-answering corpus, without requiring any additional annotations such as annotated aspects in the questions or answers. Experimental results show that our proposed model outperforms the state-of-the-art question generation methods. |
PHP and SQL made simple | The book Build Your Own Database Driven Website Using PHP & MySQL by Kevin Yank provides a hands-on look at what's involved in building a database-driven Web site. The author does a good job of patiently teaching the reader how to install and configure PHP 5 and MySQL to organize dynamic Web pages and put together a viable content management system. At just over 350 pages, the book is rather small compared to a lot of others on the topic, but it contains all the essentials. The author employs excellent teaching techniques to set up the foundation stone by stone and then grouts everything solidly together later in the book. This book aims at intermediate and advanced Web designers looking to make the leap to server-side programming. The author assumes his readers are comfortable with simple HTML. He provides an excellent introduction to PHP and MySQL (including installation) and explains how to make them work together. The amount of material he covers guarantees that almost any reader will benefit. |
Computationally efficient link prediction in a variety of social networks | Online social networking sites have become increasingly popular over the last few years. As a result, new interdisciplinary research directions have emerged in which social network analysis methods are applied to networks containing hundreds of millions of users. Unfortunately, links between individuals may be missing either because of an imperfect data-acquisition process or because they are not yet reflected in the online network (i.e., friends in the real world did not form a virtual connection). The primary bottleneck in link prediction techniques is extracting the structural features required for classifying links. In this article, we propose a set of simple, easy-to-compute structural features that can be analyzed to identify missing links. We show that by using simple structural features, a machine learning classifier can successfully identify missing links, even in the hard setting of classifying links between individuals who have at least one common friend. We also present a method for calculating the amount of data needed in order to build more accurate classifiers. The new Friends measure and Same community features we developed are shown to be good predictors for missing links. An evaluation experiment was performed on ten large social network datasets: Academia.edu, DBLP, Facebook, Flickr, Flixster, Google+, Gowalla, TheMarker, Twitter, and YouTube. Our methods can provide social network site operators with the capability of helping users to find known, offline contacts and to discover new friends online. They may also be used for exposing hidden links in online social networks. |
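A sketch of the kind of cheap structural features the article describes, using networkx. The exact definition of the paper's Friends measure is an assumption here, reconstructed from the stated idea of counting connections between the two endpoints' neighbourhoods:

```python
import networkx as nx

def structural_features(G, u, v):
    """Easy-to-compute topological features for a candidate link (u, v).
    'friends_measure' is a hypothetical reconstruction: it counts pairs
    (a, b) from the two neighbourhoods that coincide or are linked."""
    Nu, Nv = set(G[u]), set(G[v])
    common = Nu & Nv
    return {
        "common_neighbors": len(common),
        "jaccard": len(common) / len(Nu | Nv) if Nu | Nv else 0.0,
        "preferential_attachment": len(Nu) * len(Nv),
        "friends_measure": sum(1 for a in Nu for b in Nv
                               if a == b or G.has_edge(a, b)),
    }

# Example: feed these features for labeled node pairs to any classifier.
G = nx.karate_club_graph()
print(structural_features(G, 0, 33))
```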
ThingSeek: A Crawler and Search Engine for the Internet of Things | The rapidly growing paradigm of the Internet of Things (IoT) requires new search engines, which can crawl heterogeneous data sources and search in highly dynamic contexts. Existing search engines cannot meet these requirements, as they are designed for the traditional Web and human users only. This is contrary to the fact that things are emerging as major producers and consumers of information. Currently, there is very little work on searching the IoT, and a number of works claim that public IoT data is unavailable. This overlooks the fact that a majority of real-time web-based maps share data generated directly by things. To shed light on this line of research, in this paper we first create a set of tools to capture IoT data from a set of given data sources. We then create two types of interfaces to provide real-time search services on dynamic IoT data for both human and machine users. |
What Did You Mean? - Facing the Challenges of User-generated Software Requirements | Existing approaches to service composition demand that customers express their requirements in terms of service templates, service query profiles, or partial process models. However, the non-expert customers addressed may be unable to fill in the slots of service templates as requested or to describe, for example, pre- and postconditions, or may even have difficulties formalizing their requirements. Thus, our idea is to provide non-experts with suggestions on how to complete or clarify their requirement descriptions written in natural language. Two main issues have to be tackled: (1) the partial or full inability of non-experts to specify their requirements correctly in formal and precise ways, and (2) problems in text analysis due to fuzziness in natural language. We present ideas on how to face these challenges by means of requirement disambiguation and completion. To this end, we conduct ontology-based requirement extraction and similarity retrieval based on requirement descriptions gathered from app marketplaces. The innovative aspect of our work is that we support users without expert knowledge in writing their requirements while simultaneously resolving ambiguity, vagueness, and underspecification in natural language. |
Early predictors of adolescent aggression and adult violence. | The Cambridge Study in Delinquent Development is a prospective longitudinal survey of 411 London males from ages 8 years old to 32 years old. This article investigates the prediction of adolescent aggression (ages 12-14 years old), teenage violence (ages 16-18 years old), adult violence (age 32 years old), and convictions for violence. Generally, the best predictors were measures of economic deprivation, family criminality, poor child-rearing, school failure, hyperactivity-impulsivity-attention deficit, and antisocial child behavior. Similar predictors applied to all four measures of aggression and violence. It is concluded that aggression and violence are elements of a more general antisocial tendency, and that the predictors of aggression and violence are similar to the predictors of antisocial and criminal behavior in general. |
Potential of Estimating Soil Moisture Under Vegetation Cover by Means of PolSAR | In this paper, the potential of using polarimetric SAR (PolSAR) acquisitions for the estimation of volumetric soil moisture under agricultural vegetation is investigated. Soil-moisture estimation by means of SAR has been intensively investigated but is not yet solved satisfactorily. The key problem is the presence of vegetation cover, which biases soil-moisture estimates. In this paper, we discuss the problem of soil-moisture estimation in the presence of agricultural vegetation by means of L-band PolSAR images. SAR polarimetry allows the decomposition of the scattering signature into canonical scattering components and their quantification. We discuss simple canonical models for surface, dihedral, and vegetation scattering and use them to model and interpret scattering processes. The performance and modifications of the individual scattering components are discussed. The obtained surface and dihedral components are then used to retrieve surface soil moisture. The investigations cover, for the first time, the whole vegetation-growing period for three crop types, using SAR data and ground measurements acquired in the frame of the AgriSAR campaign. |
Set-Membership-Based Fault Detection and Isolation for Robotic Assembly of Electrical Connectors | This paper addresses the fault detection and isolation (FDI) problem for robotic assembly of electrical connectors in the framework of set-membership. Both the fault-free and faulty cases of assembly are modeled by different switched linear models with known switching sequences, bounded parameters, and external disturbances. The locations of the switching points of each model are assumed to lie inside certain areas, but their exact positions are unknown. Given current input/output data, the feasible parameter set of the fault-free switched linear model is obtained by sequentially calculating an optimal ellipsoid. If a pair of data is not consistent with any possible submodel, a fault is detected. Fault isolation is realized by checking the consistency between the data sequence and each possible fault model one by one. The robustness of the proposed FDI algorithms is proved, and their effectiveness is verified by robotic assembly experiments of mating electrical connectors. Note to Practitioners—In modern robotic assembly tasks, industrial robots often need to manipulate tiny objects with complex structure. Electrical connectors are a typical kind of such objects and are widely used in many industrial fields. To avoid damaging the fragile connectors and to accelerate the assembly process, it is required to promptly detect and isolate assembly faults in real time so that the robot can immediately implement an error recovery procedure according to the identified fault. The proposed set-membership-based fault detection and isolation (FDI) methodology satisfies both the timing and fault-isolation requirements for this kind of robotic assembly task. In terms of set-membership theory, no false alarm will occur if there are sufficient training data for the proposed method. In addition, plentiful experiments show that the proposed method can signal an alarm faster than a conventional residual-based FDI method. Although only the robotic assembly of electrical connectors is investigated, our FDI method can also be applied to the assembly of other small and complex parts. This is especially useful for increasing productivity and promoting the automation level of the electronics industry. |
3D Traffic Scene Understanding From Movable Platforms | In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments. |
High-dose intravenous immunoglobulin treatment in cryptogenic West and Lennox-Gastaut syndrome; an add-on study | In an add-on pilot study, a group of 15 children with cryptogenic and intractable West syndrome (n = 3) and Lennox-Gastaut syndrome (n = 12) received intravenous immunoglobulin (IVIg, 0.4 g/kg body weight per day for 5 consecutive days, followed by the same dose once every 2 weeks for 3 months). Five patients had been treated previously with ACTH without success. The reduction in clinical seizures averaged 70%. Electroencephalographic (EEG) recordings revealed a mean reduction in epileptic discharges of 40%. In all 15 patients, acceleration of EEG background activity occurred, and psychomotor development improved. Prior to IVIg administration, CSF examinations were normal. After IVIg administration, the serum total IgG concentration increased by an average of 76%, and the CSF IgG concentration by 44%. According to our data, IVIg crosses the blood-CSF barrier, and might be effective in the treatment of West syndrome and Lennox-Gastaut syndrome. We suggest it should be considered when other treatments, such as ACTH, have failed. |
SEAME: a Mandarin-English code-switching speech corpus in south-east asia | In Singapore and Malaysia, people often speak a mixture of Mandarin and English within a single sentence. We call such sentences intra-sentential code-switch sentences. In this paper, we report on the development of SEAME, a Mandarin-English code-switching spontaneous speech corpus. The corpus is developed as part of a multilingual speech recognition project and will be used to examine how Mandarin-English code-switching speech occurs in the spoken language in South-East Asia. Additionally, it can provide insights into the development of large vocabulary continuous speech recognition (LVCSR) for code-switching speech. The collected corpus consists of intra-sentential code-switching utterances recorded in both interview and conversational settings. This paper describes the corpus design and the analysis of the collected corpus. |
EigenTrust++: Attack resilient trust management | This paper argues that trust and reputation models should take into account not only direct experiences (local trust) and experiences from the circle of “friends”, but should also be attack resilient by design in the presence of dishonest feedbacks and sparse network connectivity. We first revisit EigenTrust, one of the most popular reputation systems to date, and identify the inherent vulnerabilities of EigenTrust in terms of its local trust vector, its global aggregation of local trust values, and its eigenvector-based reputation propagation model. Then we present EigenTrust++, an attack resilient trust management scheme. EigenTrust++ extends the eigenvector-based reputation propagation model, the core of EigenTrust, and counters each of the identified vulnerabilities with alternative methods that are by design more resilient to dishonest feedbacks and sparse network connectivity under four known attack models. We conduct extensive experimental evaluation of EigenTrust++, and show that it significantly outperforms EigenTrust in both performance and attack resilience in the presence of dishonest feedbacks and sparse network connectivity against four representative attack models. |
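The eigenvector-based propagation model at the core of EigenTrust, which EigenTrust++ extends, is a damped power iteration over the matrix of normalized local trust values. A minimal sketch, where the damping factor a and the pre-trusted distribution p are assumed inputs in the spirit of the original EigenTrust formulation, not of EigenTrust++'s hardened variants:

```python
import numpy as np

def eigentrust(C, p, a=0.15, iters=1000, tol=1e-10):
    """Global trust via damped power iteration: t <- (1-a) C^T t + a p.
    C[i, j] is peer i's normalized local trust in peer j (rows sum to 1);
    p is the distribution over pre-trusted peers."""
    t = p.copy()
    for _ in range(iters):
        t_next = (1 - a) * C.T @ t + a * p
        if np.abs(t_next - t).sum() < tol:
            break
        t = t_next
    return t_next
```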
RASA: A New Task Scheduling Algorithm in Grid Environment | In this paper, a new task scheduling algorithm called RASA, which considers the distribution and scalability characteristics of grid resources, is proposed. The algorithm is built on a comprehensive study and analysis of two well-known task scheduling algorithms, Min-min and Max-min. RASA combines the advantages of both algorithms while avoiding their disadvantages. To achieve this, RASA first estimates the completion time of the tasks on each of the available grid resources and then applies the Max-min and Min-min algorithms alternately. In this respect, RASA uses the Min-min strategy to execute small tasks before large ones, and applies the Max-min strategy to avoid delays in the execution of large tasks and to support concurrency in the execution of large and small tasks. Our experimental results of applying RASA to scheduling independent tasks within grid environments demonstrate its applicability in achieving schedules with comparatively lower makespan. |
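A sketch of the alternation RASA describes: estimate each task's completion time on every resource, then assign tasks by alternating between the Min-min and Max-min selection rules. Which rule goes first (the paper ties this to the number of available resources) is simplified here to plain alternation starting with Min-min:

```python
def rasa(tasks, resources, ready):
    """tasks: dict task -> {resource: execution-time estimate};
    ready: dict resource -> time at which the resource becomes free.
    Returns a task -> resource assignment."""
    schedule, unassigned, use_min = {}, set(tasks), True
    while unassigned:
        # best (earliest) completion time of each remaining task
        best = {t: min((ready[r] + tasks[t][r], r) for r in resources)
                for t in unassigned}
        # Min-min picks the task with the smallest best completion time,
        # Max-min the task with the largest; alternate between the two
        pick = (min if use_min else max)(best, key=lambda t: best[t][0])
        finish, res = best[pick]
        schedule[pick] = res
        ready[res] = finish          # resource is busy until this task finishes
        unassigned.remove(pick)
        use_min = not use_min
    return schedule
```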
Query-biased learning to rank for real-time twitter search | By incorporating diverse sources of relevance evidence, learning to rank has been widely applied to real-time Twitter search, where users are interested in fresh relevant messages. Such approaches usually rely on a set of training queries to learn a general ranking model; we believe the benefits brought by learning to rank have not been fully exploited, as the characteristics and aspects unique to the given target queries are ignored. In this paper, we propose to further improve the retrieval performance of learning to rank for real-time Twitter search by taking the differences between queries into consideration. In particular, we learn a query-biased ranking model with a semi-supervised transductive learning algorithm so that query-specific features, e.g. the unique expansion terms, are utilized to capture the characteristics of the target query. This query-biased ranking model is combined with the general ranking model to produce the final ranked list of tweets in response to the given target query. Extensive experiments on the standard TREC Tweets11 collection show that our proposed query-biased learning to rank approach outperforms a strong baseline, namely the conventional application of state-of-the-art learning to rank algorithms. |
Treatment of Lower Lip Mucocele with Diode Laser - A Novel Approach | Mucoceles are benign, mucus-containing cystic lesions of the minor salivary glands. Most dental literature reports a higher incidence of mucocele in young patients, with trauma being a leading cause. The purpose of this report is to describe the clinical case of a 22-year-old female with a 6 mm mucocele on the lower lip treated with a high-intensity diode laser. Diode laser surgery provided satisfactory results; it was rapid, bloodless, and well accepted by the patient. Postoperative problems, discomfort, and scarring were minimal. The histopathological report confirmed the presurgical diagnosis. No relapse was observed up to one year after surgery. |
TOWARD A METABOLIC THEORY OF ECOLOGY | Metabolism provides a basis for using first principles of physics, chemistry, and biology to link the biology of individual organisms to the ecology of populations, communities, and ecosystems. Metabolic rate, the rate at which organisms take up, transform, and expend energy and materials, is the most fundamental biological rate. We have developed a quantitative theory for how metabolic rate varies with body size and temperature. Metabolic theory predicts how metabolic rate, by setting the rates of resource uptake from the environment and resource allocation to survival, growth, and reproduction, controls ecological processes at all levels of organization from individuals to the biosphere. Examples include: (1) life history attributes, including development rate, mortality rate, age at maturity, life span, and population growth rate; (2) population interactions, including carrying capacity, rates of competition and predation, and patterns of species diversity; and (3) ecosystem processes, including rates of biomass production and respiration and patterns of trophic dynamics. Data compiled from the ecological literature strongly support the theoretical predictions. Eventually, metabolic theory may provide a conceptual foundation for much of ecology, just as genetic theory provides a foundation for much of evolutionary biology. |
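For reference, the quantitative core of the theory is usually stated as a single scaling relation; the form below is the one commonly given in the MTE literature (reproduced from general knowledge of that literature, not quoted from this abstract), where B is whole-organism metabolic rate, b0 a normalization constant, M body mass, E an activation energy of roughly 0.6 to 0.7 eV, k Boltzmann's constant, and T absolute temperature:

```latex
% Central MTE scaling relation: a 3/4-power mass term
% combined with a Boltzmann temperature term.
B \;=\; b_0 \, M^{3/4} \, e^{-E/kT}
```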
A novel control for active interphase transformer used in a 24-pulse converter | This paper proposes a new active interphase transformer for a 24-pulse diode rectifier. The proposed scheme injects a compensation current into the secondary winding of either of the two first-stage interphase transformers. Because only one of the first-stage interphase transformers is active, the inverter that conducts the injected current has a lower kVA rating [1.26% pu (Po)] compared to conventional active interphase transformers. Moreover, the proposed scheme draws near-sinusoidal input currents; the simulated and experimental total harmonic distortion of the overall line currents are only 1.88% and 2.27%, respectively. When the inverter malfunctions, the input line current still retains the conventional 24-pulse behavior. A digital-signal-processor (DSP) based digital controller is employed to calculate the desired compensation current and to generate the trigger signals needed for the inverter. A 6 kW prototype was built for testing. Both simulation and experimental results demonstrate the validity of the proposed scheme. |
Integrating Text Plans for Conciseness and Coherence | Our experience with a critiquing system shows that when the system detects problems with the user's performance, multiple critiques are often produced. Analysis of a corpus of actual critiques revealed that even though each individual critique is concise and coherent, the set of critiques as a whole may exhibit several problems that detract from conciseness and coherence, and consequently assimilation. Thus a text planner was needed that could integrate the text plans for individual communicative goals to produce an overall text plan representing a concise, coherent message.
This paper presents our general rule-based system for accomplishing this task. The system takes as input a *set* of individual text plans represented as RST-style trees, and produces a smaller set of more complex trees representing integrated messages that still achieve the multiple communicative goals of the individual text plans. Domain-independent rules are used to capture strategies across domains, while the facility for addition of domain-dependent rules enables the system to be tuned to the requirements of a particular domain. The system has been tested on a corpus of critiques in the domain of trauma care. |
Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data | YES is a simplified stroke-based method for sorting Chinese characters. It is free from stroke counting and grouping, and thus much faster and more accurate than the traditional method. This paper presents a collation element table built in YES for a large joint Chinese character set covering (a) all 20,902 characters of Unicode CJK Unified Ideographs, (b) all 11,408 characters in the Complete List of Chinese Characters Used by the Media in 2013, and (c) all 13,000-plus characters in the latest versions of the Xinhua Dictionary (v11) and the Contemporary Chinese Dictionary (v6). Of the 20,902 Chinese characters in Unicode, 97.23% have a one-to-one relationship with their stroke order codes in YES, compared with 90.69% for the traditional method. Enhanced with the secondary and tertiary sorting levels of stroke layout and Unicode value, a one-to-one relationship between characters and collation elements is guaranteed. The collation element table has been successfully applied to sorting CC-CEDICT, a Chinese-English dictionary of over 112,000 word entries. |
Modeling Sustainable Food Systems | The processes underlying environmental, economic, and social unsustainability derive in part from the food system. Building sustainable food systems has become a predominating endeavor aiming to redirect our food systems and policies towards better-adjusted goals and improved societal welfare. Food systems are complex social-ecological systems involving multiple interactions between human and natural components. Policy needs to encourage public perception of humanity and nature as interdependent and interacting. The systemic nature of these interdependencies and interactions calls for systems approaches and integrated assessment tools. Identifying and modeling the intrinsic properties of the food system that will ensure its essential outcomes are maintained or enhanced over time and across generations, will help organizations and governmental institutions to track progress towards sustainability, and set policies that encourage positive transformations. This paper proposes a conceptual model that articulates crucial vulnerability and resilience factors to global environmental and socio-economic changes, postulating specific food and nutrition security issues as priority outcomes of food systems. By acknowledging the systemic nature of sustainability, this approach allows consideration of causal factor dynamics. In a stepwise approach, a logical application is schematized for three Mediterranean countries, namely Spain, France, and Italy. |
An introduction to understanding elevation-based topography: how elevation data are displayed - a review. | The increased frequency of refractive surgery and the shift towards the correction of higher-order aberrations necessitates a more detailed understanding of corneal shape. Early topography systems were based on Placido technology, as this was initially more intuitive for the general refractive surgeon. Newer computerized corneal modelling has increased our knowledge beyond what was previously possible. Elevation-based systems utilize a direct triangulation technique to measure the corneal surface. Elevation-based Scheimpflug imaging has advantages in that it allows for the measurement of both the anterior and posterior corneal surfaces. Posterior measurements are often the first indicators of future ectatic disease, in spite of completely normal anterior curvature. Examination of the posterior corneal surface can often reveal pathology that would otherwise be missed if one was relying on anterior analysis alone. Although there is little disagreement in diagnosing clinically evident keratoconus, agreement on what constitutes 'form fruste' or preclinical keratoconus remains elusive. The ability of elevation-based topography to analyse both anterior and posterior corneal surfaces adds significantly to our ability to identify eyes believed to be 'at risk'. As more knowledge is gained, it is appreciated that a full understanding of the workings of the human eye requires knowledge obtained from more than just one surface. |
Assessing EFL learners' interlanguage pragmatic knowledge: Implications for testers and teachers | Studies have shown that interlanguage pragmatic knowledge is teachable. The necessity and importance of teaching pragmatics have also been recognized, but foreign language teachers still hesitate to teach pragmatics in their classrooms. The hesitation can be partly attributed to the lack of valid methods for testing interlanguage pragmatic knowledge. This article explores ways to assess Chinese EFL learners' pragmatic competence and investigates whether learners at different EFL proficiency levels perform differently on pragmatics tests. Results showed that the test methods used in this study were reliable and valid in assessing Chinese EFL learners' interlanguage pragmatic knowledge. It is suggested that a combination of elicitation through both native speakers and non-native speakers is a better and more practical way to construct interlanguage pragmatic test items. The two proficiency groups in this study were shown to differ significantly in terms of their English proficiency, but not on two of the three pragmatics tests, which indicates that the Chinese EFL learners' interlanguage pragmatic knowledge did not increase substantially with their language proficiency. The findings reconfirm the importance of teaching pragmatic knowledge to Chinese EFL learners in classrooms. The pedagogical implications and applications for foreign language teachers and testers are also discussed. The paper concludes with the suggestion that EFL teachers should teach pragmatic knowledge in class and include interlanguage pragmatic knowledge in large-scale tests. |
An efficient brain tumor detection methodology using K-means clustering algorithm | Segmentation of images holds an important position in the area of image processing. It becomes even more important for medical images, where pre-surgery and post-surgery decisions are required to initiate and speed up the recovery process. Computer-aided detection of abnormal growth of tissues is primarily motivated by the necessity of achieving maximum possible accuracy. Manual segmentation of these abnormal tissues cannot compete with modern high-speed computing machines, which enable us to visually observe the volume and location of unwanted tissues. A well-known segmentation problem within MRI is the task of labeling tissue types, which include White Matter (WM), Grey Matter (GM), Cerebrospinal Fluid (CSF), and sometimes pathological tissues such as tumors. This paper describes an efficient method for automatic brain tumor segmentation that extracts tumor tissue from MR images. Segmentation is carried out using the K-means clustering algorithm for better performance: it enhances the tumor boundaries and is very fast compared to many other clustering algorithms. The proposed technique produces promising results. |
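A minimal sketch of the intensity-based K-means step at the heart of such a method, applied to a 2-D MR slice stored as a numpy array. Real pipelines add skull-stripping, denoising, and post-processing to pick out the tumor cluster; those stages are omitted here:

```python
import numpy as np

def kmeans_segment(img, k=4, iters=50, seed=0):
    """Cluster pixel intensities into k classes and return a label map.
    With k ~ 4 the clusters loosely correspond to CSF/GM/WM plus a
    bright abnormal class, depending on the sequence."""
    rng = np.random.default_rng(seed)
    x = img.reshape(-1, 1).astype(float)
    # initialize centers from randomly chosen pixel intensities
    centers = rng.choice(x.ravel(), size=k, replace=False).reshape(-1, 1)
    for _ in range(iters):
        labels = np.argmin(np.abs(x - centers.T), axis=1)  # assign step
        for j in range(k):                                  # update step
            pts = x[labels == j]
            if len(pts):
                centers[j] = pts.mean()
    return labels.reshape(img.shape)
```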
Kernelized Locality-Sensitive Hashing | Fast retrieval methods are critical for many large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sublinear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. We validate our technique on several data sets, and show that it enables accurate and fast performance for several vision problems, including example-based object classification, local feature matching, and content-based retrieval. |
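The construction below is a simplified illustration of hashing kernelized data, not the paper's exact algorithm: it builds a Nystrom-style whitened kernel embedding from a small sample and then applies ordinary random-hyperplane LSH in that space. KLSH as published forms its hyperplanes differently (via random subsets of the sample and K^{-1/2}), so treat this as a sketch of the idea:

```python
import numpy as np

def nystrom_klsh(kernel, sample, n_bits, seed=0):
    """Return a function mapping an item to n_bits hash bits, using only
    kernel evaluations against a small sample of database items."""
    p = len(sample)
    K = np.array([[kernel(a, b) for b in sample] for a in sample])
    # inverse square root of the sample kernel matrix (whitening);
    # a small jitter keeps the eigenvalues strictly positive
    vals, vecs = np.linalg.eigh(K + 1e-8 * np.eye(p))
    K_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((n_bits, p))     # random hyperplanes

    def hash_bits(x):
        z = K_inv_sqrt @ np.array([kernel(x, s) for s in sample])
        return (G @ z > 0).astype(int)       # sign random projections

    return hash_bits
```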
The vehicle routing problem: State of the art classification and review | Over the past decades, the Vehicle Routing Problem (VRP) and its variants have grown ever more popular in the academic literature. Yet, the problem characteristics and assumptions vary widely and few literature reviews have made an effort to classify the existing articles accordingly. In this article, we present a taxonomic review of the VRP literature published between 2009 and 2013. Based on an adapted version of the comprehensive taxonomy suggested by Eksioglu et al. (2009), we classify 144 articles and analyze the trends in the VRP literature. This classification is the first to categorize the articles to this level of detail. |
On the Unnecessity of Multiple Overlaps in Completion Theorem Proving | Completion theorem proving is based on the observation that the task of proving a formula can be transformed into the task of solving a system of equations over a boolean polynomial ring. The latter can be accomplished by means of the completion of a set of rewrite rules obtained from the equational system. The central operation in this completion process is the generation of critical pairs, comprising the computation of overlaps on products of atoms. This computation requires weak AC-unification, which is known to be NP-hard. Motivated by the THEOPOGLES system (Muller 1987), we show that Hsiang’s (1985) N-strategy can be strengthened further by precluding multiple overlaps. This results in a drastic reduction of the search space. Moreover, the resulting systems need only unification in a free theory, which can be performed in linear time. Since this special critical pair generation can be translated into a resolution step, provided the two rules correspond to clauses, our result also amounts to a generalization of Dietrich’s (1986) result on the translation of superposition steps into resolution steps. |
A novel planar metamaterial design for electromagnetically induced transparency and slow light. | A novel planar plasmonic metamaterial with electromagnetically induced transparency (EIT) and slow-light characteristics is presented in this paper. It consists of nanoring and nanorod compound structures. Two bright modes in the metamaterial are induced by the electric dipole resonances inside the nanoring and nanorod, respectively. The coupling between the two bright modes introduces a transparency window and a large group index. By adjusting the geometric parameters of the metamaterial structure, the transmittance of the EIT window at 385 THz is about 60%, and the corresponding group index and Q factor reach up to 1.2 × 10³ and 97, respectively, which has important applications in slow-light devices, active plasmonic switches, SERS, and optical sensing. |
Factors associated with toxicity, final dose, and efficacy of methotrexate in patients with rheumatoid arthritis. | OBJECTIVE
To study factors associated with toxicity, final dose, and efficacy of methotrexate (MTX) in patients with rheumatoid arthritis (RA).
METHODS
Data were used from a randomised 48-week clinical trial of 411 patients with RA, all treated with MTX, comparing folates and placebo. Logistic regression was used to study the relation between baseline variables and various dependent factors, including hepatotoxicity (alanine aminotransferase ≥3 × the upper limit of normal), MTX withdrawal, final MTX dose ≥15 mg/week, and MTX efficacy.
RESULTS
Addition of folates to MTX treatment was strongly related to the lack of hepatotoxicity. In addition, high body mass index was related to the occurrence of hepatotoxicity. Prior gastrointestinal (GI) events and younger age were related to the adverse event diarrhoea. Hepatotoxicity and GI adverse events were the main reasons for MTX withdrawal, which in turn was associated with the absence of folate supplementation, body mass index, prior GI events, and female sex. Renal function (creatinine clearance ≥50 ml/min) was not associated with toxicity. Reaching a final MTX dose of ≥15 mg/week was related to folate supplementation and the absence of prior GI events. Efficacy of MTX treatment was associated with low disease activity at baseline, male sex, use of non-steroidal anti-inflammatory drugs (NSAIDs), and lower creatinine clearance.
CONCLUSIONS
MTX toxicity, final dose, and efficacy are influenced by folate supplementation. Baseline characteristics predicting the outcome of MTX treatment are mainly prior GI events, body mass index, sex, use of NSAIDs, and creatinine clearance. |
A Large Subcategorization Lexicon for Natural Language Processing Applications | We introduce a large computational subcategorization lexicon which includes subcategorization frame (SCF) and frequency information for 6,397 English verbs. This extensive lexicon was acquired automatically from five corpora and the Web using the current version of the comprehensive subcategorization acquisition system of Briscoe and Carroll (1997). The lexicon is provided freely for research use, along with a script which can be used to filter and build sub-lexicons suited for different natural language processing (NLP) purposes. Documentation is also provided which explains each sub-lexicon option and evaluates its accuracy. |
Real Time Vehicle Security System through Face Recognition | In this modern age there is a rapid increase in the number of vehicles, and with it the number of car theft attempts, locally and internationally. With increasingly sophisticated theft techniques, owners fear having their vehicles stolen from a common parking lot or from outside their homes; protecting vehicles from theft has therefore become important in an insecure environment. A real-time vehicle security system based on computer vision provides a solution to this problem. The proposed system performs image-processing-based real-time user authentication using face detection and recognition techniques, together with a microprocessor-based control system fixed on board the vehicle. As a person enters the parked car, overcoming the existing security features, an infrared sensor attached to the driver’s seat activates a hidden camera fixed in an appropriate position inside the vehicle. As soon as an image is acquired from the activated camera, the person's face is detected using the Viola-Jones algorithm. The extracted face is recognized using an enhanced Linear Discriminant Analysis (LDA) algorithm, which discriminates among features rather than matching exact patterns, compares faces by Euclidean distance, and remains reliable with large data samples. Authorization involves setting a threshold on the Euclidean distance, above which the person is not authenticated. The face of a person classified as unknown is sent to the owner's mobile phone as an MMS through the operating GSM modem. Upon receiving this information, the owner commands the system, and the fuel supply is regulated using a relay in accordance with the owner's command. This approach is effective for authenticating people under different environments and provides an efficient means of vehicle security. |
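The authorization step described here reduces to a nearest-neighbour check with a rejection threshold. A minimal sketch, assuming face detection and LDA feature extraction happen upstream; the names probe_feat and gallery_feats are illustrative, not from the paper:

```python
import numpy as np

def authenticate(probe_feat, gallery_feats, threshold):
    """Compare a probe face (LDA feature vector) against enrolled users
    by Euclidean distance; reject if even the closest match exceeds
    the threshold."""
    dists = np.linalg.norm(gallery_feats - probe_feat, axis=1)
    best = int(np.argmin(dists))
    if dists[best] > threshold:
        return None          # unknown person -> e.g. trigger the MMS alert
    return best              # index of the matched, authorized user
```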
High-fidelity simulation for evaluating robotic vision performance | Robotic vision, unlike computer vision, typically involves processing a stream of images from a camera with time-varying pose, operating in an environment with time-varying lighting conditions and moving objects. Repeating robotic vision experiments under identical conditions is often impossible, making it difficult to compare different algorithms. For machine learning applications, a critical bottleneck is the limited amount of real-world image data that can be captured and labelled for both training and testing purposes. In this paper we investigate the use of a photo-realistic simulation tool to address these challenges, in three specific domains: robust place recognition, visual SLAM, and object recognition. For the first two problems we generate images from a complex 3D environment with systematically varying camera paths, camera viewpoints, and lighting conditions. For the first time we are able to systematically characterise the performance of these algorithms as paths and lighting conditions change. In particular, we are able to systematically generate varying camera viewpoint datasets that would be difficult or impossible to generate in the real world. We also compare algorithm results for a camera in a real environment and a simulated camera in a simulation model of that real environment. Finally, for the object recognition domain, we generate labelled image data and characterise the viewpoint dependency of a current convolutional neural network in performing object recognition. Together these results provide a multi-domain demonstration of the beneficial properties of using simulation to characterise and analyse a wide range of robotic vision algorithms. |
Insertion of FLT3 internal tandem duplication in the tyrosine kinase domain-1 is associated with resistance to chemotherapy and inferior outcome. | To evaluate internal tandem duplication (ITD) insertion sites and length as well as their clinical impact in younger adult patients with FLT3-ITD-positive acute myeloid leukemia (AML), sequencing after DNA-based amplification was performed in diagnostic samples from 241 FLT3-ITD-mutated patients. All patients were treated on 3 German-Austrian AML Study Group protocols. Thirty-four of the 241 patients had more than 1 ITD, leading to a total of 282 ITDs; the median ITD length was 48 nucleotides (range, 15-180 nucleotides). ITD integration sites were categorized according to functional regions of the FLT3 receptor: juxtamembrane domain (JMD), n = 148; JMD hinge region, n = 48; beta1-sheet of the tyrosine kinase domain-1 (TKD1), n = 73; remaining TKD1 region, n = 13. ITD length was strongly correlated with functional regions (P < .001). In multivariable analyses, ITD integration site in the beta1-sheet was identified as an unfavorable prognostic factor for achievement of a complete remission (odds ratio, 0.22; P = .01), relapse-free survival (hazard ratio, 1.86; P < .001), and overall survival (hazard ratio, 1.59; P = .008). ITD insertion site in the beta1-sheet appears to be an important unfavorable prognostic factor in young adult patients with FLT3-ITD-positive AML. The clinical trials described herein have been registered as follows: AML HD93 (already published in 2003), AML HD98A (NCT00146120; http://www.ClinicalTrials.gov), and AMLSG 07-04 (NCT00151242; http://www.ClinicalTrials.gov). |
Attention Correctness in Neural Image Captioning | |
Happiness, income satiation and turning points around the world | Income is known to be associated with happiness [1], but debates persist about the exact nature of this relationship [2,3]. Does happiness rise indefinitely with income, or is there a point at which higher incomes no longer lead to greater well-being? We examine this question using data from the Gallup World Poll, a representative sample of over 1.7 million individuals worldwide. Controlling for demographic factors, we use spline regression models to statistically identify points of ‘income satiation’. Globally, we find that satiation occurs at $95,000 for life evaluation and $60,000 to $75,000 for emotional well-being. However, there is substantial variation across world regions, with satiation occurring later in wealthier regions. We also find that in certain parts of the world, incomes beyond satiation are associated with lower life evaluations. These findings on income and happiness have practical and theoretical significance at the individual, institutional and national levels. They point to a degree of happiness adaptation [4,5] and suggest that money influences happiness through the fulfilment of both needs and increasing material desires [6]. |
Artifacts in wearable photoplethysmographs during daily life motions and their reduction with least mean square based active noise cancellation method | Signal distortion of photoplethysmographs (PPGs) due to motion artifacts has been a limitation for developing real-time, wearable health monitoring devices. The artifacts in PPG signals are analyzed by comparing the frequency of the PPG with a reference pulse and daily life motions, including typing, writing, tapping, gesturing, walking, and running. Periodical motions in the range of pulse frequency, such as walking and running, cause motion artifacts. To reduce these artifacts in real-time devices, a least mean square based active noise cancellation method is applied to the accelerometer data. Experiments show that the proposed method recovers pulse from PPGs efficiently. |
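A minimal sketch of LMS-based active noise cancellation as described in this abstract: the accelerometer stream serves as the noise reference, an adaptive FIR filter learns the motion component of the PPG, and the error signal is the cleaned pulse estimate. The tap count and step size mu are illustrative assumptions:

```python
import numpy as np

def lms_cancel(ppg, accel, n_taps=16, mu=0.01):
    """Adaptive noise cancellation: estimate the motion artifact from
    the accelerometer reference and subtract it from the PPG."""
    w = np.zeros(n_taps)
    out = np.zeros(len(ppg))
    for n in range(n_taps, len(ppg)):
        x = accel[n - n_taps:n][::-1]   # most recent reference samples
        y = w @ x                        # estimated motion artifact
        e = ppg[n] - y                   # cleaned PPG sample
        w += 2 * mu * e * x              # LMS weight update
        out[n] = e
    return out
```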
New technologies and concepts for rehabilitation in the acute phase of stroke: a collaborative matrix. | The process of developing a successful stroke rehabilitation methodology requires four key components: a good understanding of the pathophysiological mechanisms underlying this brain disease, clear neuroscientific hypotheses to guide therapy, adequate clinical assessments of its efficacy on multiple timescales, and a systematic approach to the application of modern technologies to assist in the everyday work of therapists. Achieving this goal requires collaboration between neuroscientists, technologists and clinicians to develop well-founded systems and clinical protocols that are able to provide quantitatively validated improvements in patient rehabilitation outcomes. In this article we present three new applications of complementary technologies developed in an interdisciplinary matrix for acute-phase upper limb stroke rehabilitation - functional electrical stimulation, arm robot-assisted therapy and virtual reality-based cognitive therapy. We also outline the neuroscientific basis of our approach, present our detailed clinical assessment protocol and provide preliminary results from patient testing of each of the three systems showing their viability for patient use. |
How to Time the Commodity Market | Over the past few years, commodity prices have experienced the biggest boom in half a century. In this paper we investigate whether it is possible by active asset management to take advantage of the unique risk-return characteristics of commodities, while avoiding their excessive volatility. We show that observing (and learning from) the actions of different groups of market participants enables an active asset manager to successfully 'time' the commodities market. We focus on the information contained in the Commitments of Traders report, published by the CFTC. This report summarizes the size and direction of the positions taken by different types of traders in different markets. Our findings indicate that there is indeed significant informational content in this report, which can be exploited by an active portfolio manager. Our dynamically managed strategies exhibit superior out-of-sample performance, achieving Sharpe ratios in excess of 1.0 and annualized alphas relative to the S&P 500 of around 15%. |
Algorithms and complexity concerning the preemptive scheduling of periodic, real-time tasks on one processor | We investigate the preemptive scheduling of periodic, real-time task systems on one processor. First, we show that when all parameters of the system are integers, we may assume without loss of generality that all preemptions occur at integer time values. We then assume, for the remainder of the paper, that all parameters are indeed integers. We then give, as our main lemma, necessary and sufficient conditions for a task system to be feasible on one processor. Although these conditions cannot, in general, be tested efficiently (unless P=NP), they do allow us to give efficient algorithms for deciding feasibility on one processor for certain types of periodic task systems. For example, we give a pseudo-polynomial-time algorithm for synchronous systems whose densities are bounded by a fixed constant less than 1. This algorithm represents an exponential improvement over the previous best algorithm. We also give a polynomial-time algorithm for systems having a fixed number of distinct types of tasks. Furthermore, we are able to use our main lemma to show that the feasibility problem for task systems on one processor is co-NP-complete in the strong sense. In order to show this last result, we first show the Simultaneous Congruences Problem to be NP-complete in the strong sense. Both of these last two results answer questions that have been open for ten years. We conclude by showing that for incomplete task systems, that is, task systems in which the start times are not specified, the feasibility problem is Σ₂ᵖ-complete. |
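Feasibility conditions of this kind are commonly phrased through the processor demand in an interval. Below is a brute-force sketch for synchronous systems with integer parameters (tasks given as (C, D, T) triples for execution time, relative deadline, and period): the system is feasible iff the demand bound function satisfies dbf(t) ≤ t at every absolute deadline, and for utilization at most 1 it suffices to check up to the hyperperiod. This is the unoptimized form of the test, not the paper's pseudo-polynomial-time algorithm:

```python
from math import gcd
from functools import reduce

def dbf(tasks, t):
    """Processor demand of synchronous periodic tasks in [0, t]:
    total execution of all jobs released and due by time t."""
    return sum(max(0, (t - D) // T + 1) * C for (C, D, T) in tasks)

def feasible(tasks):
    """Demand-bound feasibility test on one preemptive processor."""
    if sum(C / T for (C, D, T) in tasks) > 1:   # utilization > 1: infeasible
        return False
    hyper = reduce(lambda a, b: a * b // gcd(a, b),
                   (T for (_, _, T) in tasks))
    deadlines = sorted({D + k * T for (C, D, T) in tasks
                        for k in range(hyper // T + 1) if D + k * T <= hyper})
    return all(dbf(tasks, t) <= t for t in deadlines)

# Example: two tasks (C, D, T); feasible since dbf never exceeds t.
print(feasible([(1, 2, 4), (2, 6, 6)]))
```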
An Instrument for the Measure of Dabrowskian Overexcitabilities to Identify Gifted Elementary Students. | The ElemenOE is a Likert-scaled observation checklist developed in this study to measure 5 personality characteristics in elementary school children, with predictive validity for identifying giftedness. The characteristics, named “overexcitabilities,” are described within the context of Dabrowski’s Theory of Positive Disintegration. Five scholars of Dabrowski’s theory rated an initial 100 items for content validity. The 61 strongest items comprised the pilot instrument, which teachers used to describe 373 students. Exploratory factor analysis using varimax rotation found factors that related to the 5 OEs. Items with loadings of less than .5 were eliminated, thus creating the 30-item ElemenOE. Teachers used the ElemenOE to describe 171 gifted and nonidentified children. A discriminant analysis yielded one function that significantly discriminated between groups. The ElemenOE classified 76% of gifted students and 42% of nonidentified students as having similar OE profiles. These results indicate that, with revisions, the ElemenOE may be useful in identifying gifted students who are missed by traditional identification measures.

Putting the Research to Use

The use of the ElemenOE shows potential as an alternative assessment for identifying gifted students. It was not surprising that 76% of students a priori identified as gifted by traditional means shared personality characteristics of higher Intellectual overexcitability. The fact that 42.7% of the students who had not been identified as gifted also shared higher Intellectual overexcitability inspires questions about why those students were not participating in gifted programs. The 76% of gifted students with higher Intellectual overexcitability also shared the trait of lower Psychomotor overexcitability, a result that was surprising because prior research has found that gifted people tend to have higher overexcitability than nonidentified people. Is it possible that the more physically demonstrative or active students are overlooked for gifted programs? The research also points out the distinction between the innate characteristics of overexcitabilities that overlap and yet are distinct from the behaviors and achievements that tend to identify children as gifted. Further research needs to be done to refine the identification of Sensual, Imaginational, and Emotional overexcitabilities at the elementary school level. Until then, we can only theorize as to what extent those characteristics contribute—or detract from—identification as gifted.

[…] demonstrate academic achievement either on the achievement tests or on the ability test items that require academic achievement, then they may not be identified as gifted. Because it is generally accepted that an IQ test alone is not an adequate means for identifying individuals (Clark, 1997), most programs for educating the gifted rely on multiple instruments to screen and identify gifted students (Clark; Gallagher & Gallagher, 1994; Tuttle, Becker, & Sousa, 1988). The multiple instruments assess a combination of aptitude and achievement as evidenced by behaviors and products.
Recommendation forms and checklists scrutinize personality characteristics and student behavior to find evidence of abilities and accomplishments that standardized tests do not measure. This study describes the creation and testing of an instrument for the identification of gifted children that is not tied to academic achievement. Instead, the instrument is tied to the roots of giftedness, to behaviors that are indicative of “an advanced and accelerated development of functions within the brain [that] may express itself in high levels of cognitive, affective, physical sensing, and/or intuitive abilities” (Clark, 1997, p. 26). The abilities mentioned by Clark overlap completely with the Dabrowskian personality characteristics known as overexcitabilities. Although numerous recommendation forms and checklists exist and are in use today to aid in the identification of gifted students, none exist that are grounded in Dabrowski’s (1964) Theory of Positive Disintegration, which has proven helpful in illuminating the nature of, and needs associated with, giftedness (Gallagher, 1986; Lind & Daniels, 1998; Mendaglio & Pyryt, 1996; Piechowski, 1997; Piechowski & Colangelo, 1984; Piechowski & Cunningham, 1985; Silverman, 1993). Therefore, it is justifiable to include specific constructs of the theory among a constellation of characteristics to be used when identifying gifted students. The thrust of this research is to create such an instrument for the identification and measurement of the Dabrowskian constructs known as overexcitabilities (OEs) in order to enhance the identification of giftedness in elementary-aged students.

The Dabrowskian Perspective

The construct of giftedness examined in this study is based on the theory of the Polish psychiatrist and psychologist Kazimierz Dabrowski. Developed from extensive clinical and biographical studies of artists, writers, saints, and gifted students, Dabrowski’s Theory of Positive Disintegration offers a promising framework for examining the components and developmental dynamics of giftedness (Nelson, 1989; Piechowski & Cunningham, 1985; Silverman, 1993). The theory considers the genetic and biological roots of giftedness, as does Clark’s (1997) definition of giftedness used in this study: Giftedness is a biologically rooted concept that serves as a label for a high level of intelligence and indicates an advanced and accelerated development of functions within the brain. Such development may express itself in high levels of cognitive, affective, physical sensing, and/or intuitive abilities, such as academic aptitude, insight and innovation, creative behavior, leadership, personal and/or interpersonal skill, or visual and performing arts. (p. 26) The theory has strong implications for teaching and counseling because it puts personality characteristics into the perspective of the person’s lifespan. Other perspectives of giftedness tend to dwell on childhood and the education of bright children. Dabrowski, on the other hand, wanted to understand why some very bright and creative people attained higher levels of emotional development and self-actualization than others, and so he looked at the lifespans of gifted individuals. His theory explores the personal characteristics and events that are indicators of the potential for higher levels of development. One important element of Dabrowski’s theory that is especially relevant to the identification and assessment of giftedness from this new perspective is the construct of overexcitabilities.
The term, translated from the Polish nadpobudliwość, means to be superstimulated (Falk, Piechowski, & Lind, 1994). Overexcitability refers to an innate supersensitivity to stimuli in any of five different areas: Psychomotor, Sensual, Imaginational, Intellectual, and Emotional. The term overexcitabilities, unfortunately, is sometimes misconstrued to mean hyperactivity; it also carries with it the negative connotation of meaning “too much.” It denotes a strong psychic reaction that appears to exceed the stimuli or is stronger than is normally expected. An individual with strong overexcitabilities will experience life more richly and process it more complexly than others with less or no overexcitability who are exposed to the same life experiences. However, Dabrowskian theory explains that, although strong psychic reactions can be potentially negative, resulting in neuroses and existential crises, they are part of the necessary conditions that can lead to positive disintegration, which is the developmental process of moving from lower to higher levels of emotional and moral development. There are several good reasons for looking at giftedness from the perspective of overexcitabilities. Intelligence is one facet of a personality, while overexcitabilities include five innate characteristics that, to a large degree, describe the nature of the person’s gifts and talents. Giftedness, after all, is more than an unusually high score on a test of intellectual ability. Gifted individuals are also renowned for their highly sensitive and emotional nature, their imaginations, and their high energy levels (Piechowski, 1997; Torrance, 1965; Webb, Meckstroth, & Tolan, 1982). Because giftedness is composed of a constellation of characteristics, rather than a single factor of generalized intelligence (Guilford, 1979), a Dabrowskian perspective on identification and assessment can be helpful to a field that has been challenged to develop an instrument capable of capturing the multifaceted characteristics of the gifted individual (Silverman, 1993). Care must be taken, however, so that OEs are not equated with related abilities; that is, Intellectual OE must not be equated with intelligence, and Imaginational OE must not be equated with creativity. Overexcitabilities are not abilities; rather, they are modes of experiencing the world (Piechowski & Colangelo, 1984). Overexcitabilities have been likened to filters, in that a person with a strong OE captures more of the stimulation in that area than would a person who does not have that OE and, thus, is not exceptionally sensitive to that type of stimulation (Piechowski & Cunningham, 1985). As a less abstract example, two people who observe an identical stimulus, such as an encounter between two classmates, will experience the stimulus differently. The p |
A unified framework of density-based clustering for semi-supervised classification | Semi-supervised classification is drawing increasing attention in the era of big data, as the gap between the abundance of cheap, automatically collected unlabeled data and the scarcity of labeled data that are laborious and expensive to obtain is dramatically increasing. In this paper, we introduce a unified framework for semi-supervised classification based on building-blocks from density-based clustering. This framework is not only efficient and effective, but it is also statistically sound. Experimental results on a large collection of datasets show the advantages of the proposed framework. |
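To make the cluster-then-label idea behind such frameworks concrete, here is a deliberately naive Python sketch: it clusters all points with DBSCAN and spreads each cluster's majority known label to its unlabeled members. DBSCAN, the voting rule, and the parameter defaults are stand-ins chosen for illustration, not the framework actually proposed in the paper.

```python
# Naive cluster-then-label stand-in for density-based semi-supervised
# classification; the paper's framework is more refined than this.
import numpy as np
from collections import Counter
from sklearn.cluster import DBSCAN

def cluster_then_label(X, y, eps=0.5, min_samples=5):
    """X: (n, d) array; y: (n,) int array with -1 marking unlabeled points."""
    clusters = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    y_out = y.copy()
    for c in set(clusters) - {-1}:          # cluster id -1 is DBSCAN noise
        member = clusters == c
        known = y[member][y[member] != -1]  # labeled points in this cluster
        if len(known):
            majority = Counter(known.tolist()).most_common(1)[0][0]
            y_out[member & (y == -1)] = majority
    return y_out
```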
A chart review study of the inattentive and combined types of ADHD. | Studies of the clinical correlates of the subtypes of Attention-Deficit/Hyperactivity Disorder (ADHD) have identified differences in the representation of age, gender, prevalence, comorbidity, and treatment. We report retrospective chart review data detailing the clinical characteristics of the Inattentive (IA) and Combined (C) subtypes of ADHD in 143 cases of ADHD-IA and 133 cases of ADHD-C. The children with ADHD-IA were older, more likely to be female, and had more comorbid internalizing disorders and learning disabilities. Individuals in the ADHD-IA group were two to five times as likely to have a referral for speech and language problems. The children with ADHD-IA were rated as having less overall functional impairment, but did have difficulty with academic achievement. Children with ADHD-IA were less likely to be treated with stimulants. One eighth of the children with ADHD-IA still had significant symptoms of hyperactivity/impulsivity, but did not meet the DSM-IV threshold for diagnosis of ADHD-Combined Type. The ADHD-IA subtype includes children with no hyperactivity and children who still manifest clinically significant hyperactive symptomatology but do not meet DSM-IV criteria for Combined Type. ADHD-IA children are often seen as having speech and language problems, and are less likely to receive medication treatment, but respond to medical treatment with improvement both in attention and residual hyperactive/impulsive symptoms. |
Review of Unmanned Aircraft System (UAS) | Unmanned Aircraft Systems (UAS) are an emerging technology with a tremendous potential to revolutionize warfare and to enable new civilian applications. They are an integral part of future urban civil and military applications and are technologically mature enough to be integrated into civil society. The importance of UAS in scientific applications has been thoroughly demonstrated in recent years (DoD, 2010). Whatever missions are chosen for the UAS, their number and use will significantly increase in the future. UAS today play an increasing role in many public missions such as border surveillance, wildlife surveys, military training, weather monitoring, and local law enforcement. Challenges such as the lack of an on-board pilot to see and avoid other aircraft and the wide variation in unmanned aircraft missions and capabilities must be addressed in order to fully integrate UAS operations in the NAS in the NextGen time frame. UAVs are better suited for dull, dirty, or dangerous missions than manned aircraft. UAS are mainly used for intelligence, surveillance and reconnaissance (ISR), border security, counter-insurgency, attack and strike, target identification and designation, communications relay, electronic attack, law enforcement and security applications, environmental monitoring and agriculture, remote sensing, aerial mapping and meteorology. Meanwhile, armed forces around the world continue to invest strongly in researching and developing technologies with the potential to advance the capabilities of UAS.
SOME OPEN PROBLEMS IN THE THEORY OF INFINITE DIMENSIONAL ALGEBRAS | We will discuss some very old and some new open problems concerning infinite dimensional algebras. All these problems have been inspired by combinatorial group theory. I. The Burnside and Kurosh problems. In 1902 W. Burnside formulated his famous problems for torsion groups: (1) Let G be a finitely generated torsion group, that is, for an arbitrary element g ∈ G there exists n = n(g) > 1 such that g^n = 1. Does it imply that G is finite? (2) Let a group G be finitely generated and torsion of bounded degree, that is, there exists n > 1 such that g^n = 1 for an arbitrary element g ∈ G. Does it imply that G is finite? W. Burnside [7] and I. Schur [43] proved (1) for linear groups. The positive answer for (2) is known for n = 2, 3 (W. Burnside, [6]), n = 4 (I. N. Sanov, [42]) and n = 6 (M. Hall, [17]). In 1964 E. S. Golod and I. R. Shafarevich ([12], [13]) constructed a family of infinite finitely generated p-groups (for an arbitrary element g there exists n = n(g) > 1 such that g^(p^n) = 1) for an arbitrary prime p. This was a negative answer to question (1). Other finitely generated torsion groups were constructed by S. V. Alyoshin [1], R. I. Grigorchuk [14], N. Gupta – S. Sidki [16], V. I. Sushchansky [48]. In 1968 P. S. Novikov and S. I. Adian constructed infinite finitely generated groups of bounded odd degree n ≥ 4381. In 1994 S. Ivanov [19] extended this to n = 2^k, k > 32, so now we can say that question (2) has a negative solution for all sufficiently large n. Remark, though, that all the counterexamples above are not finitely presented. The following important problem still remains open. Problem 1. Do there exist infinite finitely presented torsion groups?
Bandwidth-aware divisible task scheduling for cloud computing | Task scheduling is a fundamental issue in achieving high efficiency in cloud computing. However, it is a big challenge for efficient scheduling algorithm design and implementation (as the general scheduling problem is NP-complete). Most existing task-scheduling methods of cloud computing only consider task resource requirements for CPU and memory, without considering bandwidth requirements. In order to obtain better performance, in this paper, we propose a bandwidth-aware algorithm for divisible task scheduling in cloud-computing environments. A nonlinear programming model for the divisible task-scheduling problem under the bounded multi-port model is presented. By solving this model, the optimized allocation scheme that determines the proper number of tasks assigned to each virtual resource node is obtained. On the basis of the optimized allocation scheme, a heuristic algorithm for divisible load scheduling, called the bandwidth-aware task-scheduling (BATS) algorithm, is proposed. The performance of the algorithm is evaluated using the CloudSim toolkit. Experimental results show that, compared with the fair-based task-scheduling algorithm, the bandwidth-only task-scheduling algorithm, and the computation-only task-scheduling algorithm, the proposed algorithm (BATS) has better performance. Copyright © 2012 John Wiley & Sons, Ltd.
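As a toy illustration of why bandwidth awareness matters (a closed-form split introduced here for illustration, not the paper's actual nonlinear program): under a bounded multi-port model, each node's effective service rate is capped by the smaller of its compute rate and its link bandwidth, and a divisible load can be split so that all nodes finish together.

```python
# Illustrative divisible-load split: equalize finish times across nodes whose
# effective rate is min(cpu, bandwidth). The rates (tasks/second) are made up.
def allocate(total_load, cpu_rates, bandwidths):
    effective = [min(c, b) for c, b in zip(cpu_rates, bandwidths)]
    total_rate = sum(effective)
    shares = [total_load * r / total_rate for r in effective]
    makespan = total_load / total_rate   # common finish time for all nodes
    return shares, makespan

shares, makespan = allocate(900, cpu_rates=[50, 30, 80], bandwidths=[40, 60, 20])
print(shares, makespan)  # [400.0, 300.0, 200.0] 10.0; links cap nodes 1 and 3
```

A computation-only scheduler would weight nodes by CPU alone (50:30:80) and leave the third node's 20 tasks/s link as the bottleneck, which is the effect BATS is designed to avoid.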
Daylight Sensing LED Lighting System | Adaptation of artificial light based on the available amount of daylight is known to be effective for energy savings. To achieve such daylight control, the state-of-the-art lighting systems use external photodetectors. The photodetector measures the combined contribution of artificial light and daylight, and closed-loop control schemes are used to determine the dimming levels of luminaires to produce the right amount of artificial light. In this paper, we propose LED luminaires that can perform the dual function of illumination and daylight sensing, obviating the need of additional photodetectors. We present a daylight sensing luminaire prototype and consider two driving protocols for sensing and illumination. An open-loop control scheme is then considered for daylight control. The proposed system is shown to be more robust to reflectance changes in comparison with a photodetector-based closed-loop lighting control system. |
Depth CNNs for RGB-D scene recognition: learning from scratch better than transferring from RGB-CNNs | Scene recognition with RGB images has been extensively studied and has reached very remarkable recognition levels, thanks to convolutional neural networks (CNN) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so it often leverages large RGB datasets by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching bottom layers, which is key to learning modality-specific features. In contrast, we focus on the bottom layers and propose an alternative strategy to learn depth features, combining local weakly supervised training from patches followed by global fine-tuning with images. This strategy is capable of learning very discriminative depth-specific features with limited depth images, without resorting to Places-CNN. In addition, we propose a modified CNN architecture to further match the complexity of the model and the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them into a common space and further learning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D in both depth only and combined RGB-D data.
The MATLAB ODE Suite | This paper describes mathematical and software developments for a suite of programs for solving ordinary differential equations in Matlab. |
Comparison of the response of doubly fed and fixed-speed induction generator wind turbines to changes in network frequency | Synchronous and fixed-speed induction generators release the kinetic energy of their rotating mass when the power system frequency is reduced. In the case of doubly fed induction generator (DFIG)-based wind turbines, their control system operates to apply a restraining torque to the rotor according to a predetermined curve with respect to the rotor speed. This control system is not based on the power system frequency and there is negligible contribution to the inertia of the power system. A DFIG control system was modified to introduce inertia response to the DFIG wind turbine. Simulations were used to show that with the proposed control system, the DFIG wind turbine can supply considerably greater kinetic energy than a fixed-speed wind turbine. |
The Role of Asymmetric Dimethylarginine (ADMA) in Endothelial Dysfunction and Cardiovascular Disease | Endothelium plays a crucial role in the maintenance of vascular tone and structure. Endothelial dysfunction is known to precede overt coronary artery disease. A number of cardiovascular risk factors, as well as metabolic diseases and systemic or local inflammation cause endothelial dysfunction. Nitric oxide (NO) is one of the major endothelium derived vaso-active substances whose role is of prime importance in maintaining endothelial homeostasis. Low levels of NO are associated with impaired endothelial function. Asymmetric dimethylarginine (ADMA), an analogue of L-arginine, is a naturally occurring product of metabolism found in human circulation. Elevated levels of ADMA inhibit NO synthesis and therefore impair endothelial function and thus promote atherosclerosis. ADMA levels are increased in people with hypercholesterolemia, atherosclerosis, hypertension, chronic heart failure, diabetes mellitus and chronic renal failure. A number of studies have reported ADMA as a novel risk marker of cardiovascular disease. Increased levels of ADMA have been shown to be the strongest risk predictor, beyond traditional risk factors, of cardiovascular events and all-cause and cardiovascular mortality in people with coronary artery disease. Interventions such as treatment with L-arginine have been shown to improve endothelium-mediated vasodilatation in people with high ADMA levels. However the clinical utility of modifying circulating ADMA levels remains uncertain. |
Lanicemine: a low-trapping NMDA channel blocker produces sustained antidepressant efficacy with minimal psychotomimetic adverse effects | Ketamine, an N-methyl-D-aspartate receptor (NMDAR) channel blocker, has been found to induce rapid and robust antidepressant-like effects in rodent models and in treatment-refractory depressed patients. However, the marked acute psychological side effects of ketamine complicate the interpretation of both preclinical and clinical data. Moreover, the lack of controlled data demonstrating the ability of ketamine to sustain the antidepressant response with repeated administration leaves the potential clinical utility of this class of drugs in question. Using quantitative electroencephalography (qEEG) to objectively align doses of a low-trapping NMDA channel blocker, AZD6765 (lanicemine), to that of ketamine, we demonstrate the potential for NMDA channel blockers to produce antidepressant efficacy without psychotomimetic and dissociative side effects. Furthermore, using placebo-controlled data, we show that the antidepressant response to NMDA channel blockers can be maintained with repeated and intermittent drug administration. Together, these data provide a path for the development of novel glutamatergic-based therapeutics for treatment-refractory mood disorders. |
Towards Understanding the Invertibility of Convolutional Neural Networks | Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible. To understand this approximate invertibility phenomenon and how to leverage it more effectively, we focus on a theoretical explanation and develop a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We give an exact connection between a particular model of model-based compressive sensing (and its recovery algorithms) and random-weight CNNs. We show empirically that several learned networks are consistent with our mathematical analysis and then demonstrate that with such a simple theoretical framework, we can obtain reasonable reconstruction results on real images. We also discuss gaps between our model assumptions and the CNN trained for classification in practical scenarios.
A Bottom-Up Approach to Sentence Ordering for Multi-Document Summarization | Ordering information is a difficult but important task for applications generating natural-language texts such as multi-document summarization, question answering, and concept-to-text generation. In multi-document summarization, information is selected from a set of source documents. However, improper ordering of information in a summary can confuse the reader and deteriorate the readability of the summary. Therefore, it is vital to properly order the information in multi-document summarization. We present a bottom-up approach to arrange sentences extracted for multi-document summarization. To capture the association and order of two textual segments (e.g., sentences), we define four criteria: chronology, topical-closeness, precedence, and succession. These criteria are integrated into a single criterion by a supervised learning approach. We repeatedly concatenate two textual segments into one segment based on the criterion, until we obtain the overall segment with all sentences arranged. We evaluate the sentence orderings produced by the proposed method and numerous baselines using subjective gradings as well as automatic evaluation measures. We introduce the average continuity, an automatic evaluation measure of sentence ordering in a summary, and investigate its appropriateness for this task.
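The bottom-up step can be sketched as a greedy agglomeration over segments; `score` below is a stand-in for the paper's learned combination of the four criteria, and the word-overlap toy criterion is an invented placeholder.

```python
# Greedy bottom-up ordering: repeatedly merge the ordered pair of segments
# with the highest association score until one fully ordered segment remains.
def order_sentences(sentences, score):
    segs = [[s] for s in sentences]
    while len(segs) > 1:
        a, b = max(((x, y) for x in segs for y in segs if x is not y),
                   key=lambda p: score(p[0], p[1]))
        segs.remove(a); segs.remove(b)
        segs.append(a + b)        # segment b is concatenated after segment a
    return segs[0]

# Toy criterion standing in for the learned one: overlap of boundary words.
def overlap_score(a, b):
    return len(set(a[-1].lower().split()) & set(b[0].lower().split()))
```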
Bitcoin and Beyond: Exclusively Informational Monies | The famous new money Bitcoin is classified as a technical informational money (TIM). Besides introducing the idea of a TIM, a more extreme notion of informational money will be developed: exclusively informational money (EXIM). The informational coins (INCOs) of an EXIM can be in control of an agent but are not owned by any agent. INCOs of an EXIM cannot be stolen, but they can be lost or thrown away. The difference between an EXIM and a TIM shows up when considering a user perspective on security matters. Security for an EXIM user is discussed in substantial detail, with the remarkable conclusion that computer security (security models, access control, user names, passwords, firewalls etc.) is not always essential for an EXIM, while the application of cryptography-based information security is unavoidable for the use of an EXIM. Bitcoin seems to meet the criteria of an EXIM, but the assertion that “Bitcoin is an EXIM” might also be considered problematic. As a thought experiment we will contemplate Bitguilder, a hypothetical copy of Bitcoin, cast as an EXIM, and its equally hypothetical extension BitguilderPlus. A business ethics assessment of Bitcoin is made which reveals a number of worries. By combining Bitguilder with a so-called technical informational near-money (TINM), a dual money system, having two units with a fluctuating rate, may be obtained. It seems that a dual money can remedy some, but not all, of the ethical worries that arise when contemplating Bitcoin after hypothetically having become a dominant form of money. The contributions that Bitcoin’s designers can potentially make to the evolution of EXIMs and TIMs are analyzed in terms of the update of the portfolio of money-related natural kinds that comes with Bitcoin.
The file drawer problem and tolerance for null results | For any given research area, one cannot tell how many studies have been conducted but never reported. The extreme view of the "file drawer problem" is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show nonsignificant results. Quantitative procedures for computing the tolerance for filed and future null results are reported and illustrated, and the implications are discussed. |
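The tolerance computation is usually cited as the fail-safe N: starting from the Stouffer combination z = Σz_i / √k, one asks how many filed studies averaging z = 0 would drag the combined one-tailed p back above .05. Solving Σz_i / √(k + X) = 1.645 for X gives the arithmetic below; the sample z-scores are invented for illustration.

```python
# Fail-safe N: number of unreported null results (mean z = 0) tolerable
# before k significant studies lose combined one-tailed significance at .05.
def fail_safe_n(z_scores, z_crit=1.645):
    k = len(z_scores)
    return (sum(z_scores) ** 2) / z_crit ** 2 - k

print(round(fail_safe_n([2.1, 1.8, 2.5, 1.7]), 1))  # ~20.2 filed nulls tolerated
```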
Tenofovir DF/emtricitabine and efavirenz combination therapy for HIV infection in patients treated for tuberculosis: the ANRS 129 BKVIR trial. | BACKGROUND
HIV-infected patients with TB need simplified, effective and well-tolerated antiretroviral regimens.
METHODS
The French ANRS 129 BKVIR open trial evaluated the once-daily tenofovir DF/emtricitabine and efavirenz combination, started within 12 weeks after TB treatment initiation, in antiretroviral-naive HIV-1-infected patients. Success was defined as an HIV-1 RNA <50 copies/mL and TB cure at 48 weeks.
RESULTS
TB was confirmed microbiologically (90%) or histologically (10%) in 69 patients (71% male; median age 43 years; 54% born in Africa). The median time between TB treatment initiation and antiretroviral therapy was 8 weeks (range 1-22 weeks). At baseline, median HIV-1 RNA was 5.4 log10 copies/mL and median CD4 cell count 74 cells/mm^3. In the ITT analysis, combined success at week 48 was achieved in 57/69 patients (83%, 95% CI 74-92). Twelve patients did not achieve virological success, and TB was not cured in one of them. Among the 47 patients who fully adhered to the strategy, the success rate was 96% (95% CI 90-100) and was not affected by low rifampicin and isoniazid serum concentrations. Forty-nine serious adverse events were reported in 31 patients (45%), and 11 led to antiretroviral drug interruption. All adverse events resolved. The immune reconstitution inflammatory syndrome occurred in 23 patients (33%, 95% CI 22-44), and was associated with a low baseline BMI (P = 0.03) and a low haemoglobin level (P = 0.02).
CONCLUSION
These results support the use of tenofovir DF/emtricitabine and efavirenz combination therapy for HIV infection in patients with TB. |
From Managerialism to Entrepreneurialism : The Transformation in Urban Governance in Late Capitalism | In recent years, urban governance has become increasingly preoccupied with the exploration of new ways in which to foster and encourage local development and employment growth. Such an entrepreneurial stance contrasts with the managerial practices of earlier decades which primarily focussed on the local provision of services, facilities and benefits to urban populations. This paper explores the context of this shift from managerialism to entrepreneurialism in urban governance and seeks to show how mechanisms of inter-urban competition shape outcomes and generate macroeconomic consequences. The relations between urban change and economic development are thereby brought into focus in a period characterised by considerable economic and political instability. A centerpiece of my academic concerns these last two decades has been to unravel the role of urbanisation in social change, in particular under conditions of capitalist social relations and accumulation (Harvey, 1973; 1982; 1985a; 1985b; 1989a). This project has necessitated deeper enquiry into the manner in which capitalism produces a distinctive historical geography. When the physical and social landscape of urbanisation is shaped according to distinctively capitalist criteria, constraints are put on the future paths of capitalist development. This implies that though urban processes under capitalism are shaped by the logic of capital circulation and accumulation, they in turn shape the conditions and circumstances of capital accumulation at later points in time and space. Put another way, capitalists, like everyone else, may struggle to make their own historical geography but, also like everyone else, they do not do so under historical and geographical circumstances of their own individual choosing even when they have played an important and even determinant collective role in shaping those circumstances. This two-way relation of reciprocity and domination (in which capitalists, like workers, find themselves dominated and constrained by their own creations) can best be captured theoretically in dialectical terms. It is from such a standpoint that I seek more powerful insights into that process of city making that is both product and condition of ongoing social processes of transformation in the most recent phase of capitalist development. Enquiry into the role of urbanisation in social dynamics is, of course, nothing new. From time to time the issue flourishes as a focus of major debates, though more often than not with regard to particular historical-geographical circumstances in which, for some reason or other, the role of urbanisation and of cities appears particularly salient. The part that city formation played in the rise of civilization has long been discussed, as has the role of the city in classical Greece and Rome. The significance of cities to the transition from feudalism to capitalism is an arena of continuing controversy, having sparked a remarkable and revealing literature over the years. A vast array of evidence can now likewise be brought to bear on the significance of urbanization to nineteenth century industrial, cultural and political development as well as to the subsequent spread of capitalist social relations to lesser developed countries (which now support some of the most dramatically growing cities in the world).
All too frequently, however, the study of urbanization becomes separated from that of social change and economic development, as if it can somehow be regarded either as a side-show or as a passive side-product to more important and fundamental social changes. The successive revolutions in technology, space relations, social relations, consumer habits, lifestyles, and the like that have so characterised capitalist history can, it is sometimes suggested, be understood without any deep enquiry into the roots and nature of urban processes. True, this judgement is by and large made tacitly, by virtue of sins of omission rather than commission. But the antiurban bias in studies |
Creatinine clearance following cimetidine for estimation of glomerular filtration rate | Simultaneous inulin (C_in) and creatinine clearance (C_Cr) studies were performed on 53 pediatric renal patients using a cimetidine protocol. Since cimetidine blocks the tubular secretion of creatinine, it was hypothesized that C_Cr measured following cimetidine would closely approximate the C_in. C_in was compared with C_Cr, with the latter calculated from: (1) a 24-h urine collection, (2) plasma creatinine, height, and a proportionality constant, (3) the same plasma and urine specimens used for calculating C_in, and (4) the plasma and urine specimens of the four 30-min clearance periods treated as a single 2-h clearance. The C_in was very closely approximated by the C_Cr calculated from the same specimens used for the C_in and by the 2-h clearance. The cimetidine protocol, with C_Cr derived from a 2-h urine collection obtained under supervision in the office or clinic, provides a convenient and inexpensive procedure for estimation of glomerular filtration rate in a clinical setting.
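The underlying arithmetic is the standard clearance formula C = (U × V) / P (urine concentration times urine flow over plasma concentration), usually normalized to 1.73 m² body surface area; the numbers below are hypothetical, not the study's data.

```python
# Clearance from a timed urine collection; units noted per argument.
def clearance_ml_min(u_mg_dl, urine_ml, minutes, p_mg_dl):
    urine_flow = urine_ml / minutes            # mL/min
    return u_mg_dl * urine_flow / p_mg_dl      # mL/min

# A supervised 2-h (120-min) collection, as in the cimetidine protocol:
print(clearance_ml_min(u_mg_dl=90.0, urine_ml=120.0, minutes=120, p_mg_dl=1.0))
# -> 90.0 mL/min
```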
Vaccine-Induced IgG Antibodies to V1V2 Regions of Multiple HIV-1 Subtypes Correlate with Decreased Risk of HIV-1 Infection | UNLABELLED
In the RV144 HIV-1 vaccine efficacy trial, IgG antibody (Ab) binding levels to variable regions 1 and 2 (V1V2) of the HIV-1 envelope glycoprotein gp120 were an inverse correlate of risk of HIV-1 infection. To determine if V1V2-specific Abs cross-react with V1V2 from different HIV-1 subtypes, if the nature of the V1V2 antigen used to asses cross-reactivity influenced infection risk, and to identify immune assays for upcoming HIV-1 vaccine efficacy trials, new V1V2-scaffold antigens were designed and tested. Protein scaffold antigens carrying the V1V2 regions from HIV-1 subtypes A, B, C, D or CRF01_AE were assayed in pilot studies, and six were selected to assess cross-reactive Abs in the plasma from the original RV144 case-control cohort (41 infected vaccinees, 205 frequency-matched uninfected vaccinees, and 40 placebo recipients) using ELISA and a binding Ab multiplex assay. IgG levels to these antigens were assessed as correlates of risk in vaccine recipients using weighted logistic regression models. Levels of Abs reactive with subtype A, B, C and CRF01_AE V1V2-scaffold antigens were all significant inverse correlates of risk (p-values of 0.0008-0.05; estimated odds ratios of 0.53-0.68 per 1 standard deviation increase). Thus, levels of vaccine-induced IgG Abs recognizing V1V2 regions from multiple HIV-1 subtypes, and presented on different scaffolds, constitute inverse correlates of risk for HIV-1 infection in the RV144 vaccine trial. The V1V2 antigens provide a link between RV144 and upcoming HIV-1 vaccine trials, and identify reagents and methods for evaluating V1V2 Abs as possible correlates of protection against HIV-1 infection.
TRIAL REGISTRATION
ClinicalTrials.gov NCT00223080. |
SPINS: Security Protocols for Sensor Networks | References:
• Adrian Perrig, Robert Szewczyk, Victor Wen, David Culler, J. D. Tygar. SPINS: Security Protocols for Sensor Networks. Mobicom 2001.
• Chris Karlof, David Wagner. Secure Routing in Sensor Networks: Attacks and Countermeasures.
• Sasha Slijepcevic, Miodrag Potkonjak, Vlasios Tsiatsis, Scott Zimbeck, Mani B. Srivastava. On Communication Security in Wireless Ad-Hoc Sensor Networks. 11th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises.
• Jiejun Kong, Petros Zerfos, Haiyun Luo, Songwu Lu, Lixia Zhang. Providing Robust and Ubiquitous Security Support for Mobile Ad-Hoc Networks. 9th International Conference on Network Protocols, Nov 2001.
• Y. W. Law, S. Dulman, S. Etalle, P. Havinga. Assessing Security-Critical Energy-Efficient Sensor Networks. Department of Computer Science, University of Twente, Technical Report TR-CTIT-02-18, Jul 2002.
• Anthony D. Wood, John A. Stankovic. Denial of Service in Sensor Networks.
Exploring Two Teacher Education Online Learning Designs: A Classroom of One or Many? | Online learning is rapidly becoming a permanent feature of higher education. In a preponderance of instances, online learning is designed using conventional educational practices: lecture, grades, group discussion, and the like. Concerns with traditional pedagogy instantiated by course management systems raise questions about the quality of learners' online experiences. There is a need to reconsider the design of learning opportunities in light of emerging online delivery modes. This study compared learner perceptions of two online courses—one using the more traditional approach capitalizing on the affordances of Blackboard and one using the COPLS one-on-one model (Norton, 2003). Results revealed that both environments were perceived as providing a high-quality learning experience. In addition, results point to the importance of self-regulation, the role of the instructor/facilitator/mentor, and the role of the group as factors influencing learners' perception of the quality of their learning experience, positive aspects of their learning experience, and challenges that influenced their learning experience.
Short-Range Optical Wireless Communications | It is commonly agreed that the next generation of wireless communication systems, usually referred to as 4G systems, will not be based on a single access technique but will encompass a number of different complementary access technologies. The ultimate goal is to provide ubiquitous connectivity, integrating operations seamlessly in most common scenarios, ranging from fixed and low-mobility indoor environments at one extreme to high-mobility cellular systems at the other. Perhaps surprisingly, however, the largest installed base of short-range wireless communication links is optical rather than RF. Indeed, ‘point and shoot’ links corresponding to the Infrared Data Association (IrDA) standard are installed in 100 million devices a year, mainly digital cameras and telephones. In this paper we argue that optical wireless communications (OW) has a part to play in the wider 4G vision. An introduction to OW is presented, together with scenarios where optical links can enhance the performance of wireless networks.
Position Sensorless Control Without Phase Shifter for High-Speed BLDC Motors With Low Inductance and Nonideal Back EMF | This paper presents a novel method for position sensorless control of high-speed brushless DC motors with low inductance and nonideal back electromotive force (EMF) in order to improve the reliability of the motor system of a magnetically suspended control moment gyro for space application. The commutation angle error of the traditional line-to-line voltage zero-crossing points detection method is analyzed. Based on the measured characteristics of the nonideal back EMF, a two-stage commutation error compensation method is proposed to achieve highly reliable and accurate commutation over the operating speed region of the proposed sensorless control process. The commutation angle error is compensated by the transformative line voltages, the hysteresis comparators, and the appropriate design of the low-pass filters in the low-speed and high-speed regions, respectively. High-precision commutation is achieved especially in the high-speed region to decrease the motor loss in steady state. The simulation and experimental results show that the proposed method can achieve an effective compensation effect over the whole operating speed region.
A randomized trial of testosterone therapy in males with rheumatoid arthritis. | Thirty-five male patients, aged 34-79 yr, with definite rheumatoid arthritis (RA) were recruited from out-patient clinics and randomized to receive monthly injections of testosterone enanthate 250 mg or placebo as an adjunct therapy for 9 months. Endpoints included disease activity parameters and bone mineral density (BMD). At baseline, there were negative correlations between the ESR and serum testosterone (r = -0.42, P < 0.01) and BMD (hip, r = -0.65, P < 0.01). A total of 29.6% of all patients had at least one vertebral fracture, most having multiple fractures. Back pain, however, was not more prevalent in fracture patients (55% vs 50%). Disease activity was significantly higher in the fracture group (joint score P < 0.05, rheumatoid factor P < 0.01). Thirty patients completed the trial, 15 receiving testosterone and 15 receiving placebo. There were significant rises in serum testosterone, dihydrotestosterone and oestradiol in the treatment group. There was no significant effect of treatment on disease activity overall; five patients receiving testosterone underwent a "flare". Differences in mean BMD following testosterone or placebo were non-significant (spine: +1.2% vs -1.1%; femur: -0.3% vs +0.3%). There was no suggestion of a positive effect of testosterone on disease activity in men with RA.
Inner Space Preserving Generative Pose Machine | Image-based generative methods, such as generative adversarial networks (GANs), have already been able to generate realistic images with much control over context, especially when they are conditioned. However, most successful frameworks share a common procedure which performs an image-to-image translation with the pose of figures in the image untouched. When the objective is reposing a figure in an image while preserving the rest of the image, the state of the art mainly assumes a single rigid body with a simple background and limited pose shift, which can hardly be extended to images under normal settings. In this paper, we introduce an image “inner space” preserving model that assigns an interpretable low-dimensional pose descriptor (LDPD) to an articulated figure in the image. Figure reposing is then generated by passing the LDPD and the original image through multi-stage augmented hourglass networks in a conditional GAN structure, called inner space preserving generative pose machine (ISP-GPM). We evaluated ISP-GPM on reposing human figures, which are highly articulated with versatile variations. Testing a state-of-the-art pose estimator on our reposed dataset gave an accuracy over 80% on the PCK0.5 metric. The results also elucidated that our ISP-GPM is able to preserve the background with high accuracy while reasonably recovering the area blocked by the figure to be reposed.
Data Mining : Concepts and Techniques | Association rule mining was first proposed by Agrawal, Imielinski, and Swami [AIS93]. The Apriori algorithm discussed in Section 5.2.1 for frequent itemset mining was presented in Agrawal and Srikant [AS94b]. A variation of the algorithm using a similar pruning heuristic was developed independently by Mannila, Tiovonen, and Verkamo [MTV94]. A joint publication combining these works later appeared in Agrawal, Mannila, Srikant, Toivonen, and Verkamo [AMS96]. A method for generating association rules from frequent itemsets is described in Agrawal and Srikant [AS94a]. |
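Since the passage centers on Apriori, a compact sketch of the level-wise join/prune idea may help; this is a generic rendering, not the book's pseudocode.

```python
# Level-wise Apriori: grow frequent k-itemsets into (k+1)-candidates, prune
# any candidate with an infrequent k-subset, then verify support in the data.
from itertools import combinations

def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    support = lambda s: sum(s <= t for t in transactions)
    k_sets = [frozenset([i]) for i in {i for t in transactions for i in t}
              if support(frozenset([i])) >= min_support]
    frequent = []
    while k_sets:
        frequent.extend(k_sets)
        prev = set(k_sets)
        candidates = {a | b for a in k_sets for b in k_sets
                      if len(a | b) == len(a) + 1}          # join step
        k_sets = [c for c in candidates
                  if all(frozenset(s) in prev               # prune step
                         for s in combinations(c, len(c) - 1))
                  and support(c) >= min_support]
    return frequent

print(apriori([{'a','b','c'}, {'a','b'}, {'a','c'}, {'b','c'}], min_support=2))
```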
Intuitive Interaction Applied to Interface Design | Intuitive interaction involves utilising knowledge gained through other products or experience(s). Therefore, products that people use intuitively are those with features they have encountered before. This position has been supported by experimental studies. The findings suggest that relevant past experience is transferable between products, and probably also between contexts, and performance is affected by a person's level of familiarity with similar technologies. Appearance (shape, size and labelling of features) seems to be the variable that most affects time on task and intuitive uses. Using familiar labels and icons, and possibly positions for buttons, helps people to use a product quickly and intuitively the first time they encounter it. Three principles have been developed to help designers develop interfaces which are intuitive to use. Principle one: use familiar symbols and/or words for well-known functions, put them in a familiar or expected position, and make the function comparable with similar functions users have seen before. Principle one involves utilising existing features, labels or icons that users have seen before in similar products that perform the same function. This is the simplest level of applying intuitive use. Principle two: make it obvious what less well-known functions will do by using familiar things as metaphors to demonstrate their function. Principle two requires the use of metaphor to make something completely new familiar by relating it to something already existing. Principle three: increase consistency so that function, location and appearance of features are consistent between different parts of the design and throughout each part. Principle three allows users to apply the same knowledge and metaphors across all parts of the interface. The implications and application of these principles are discussed in the context of the design of function, location and appearance of product and interface features. Applying these principles will allow designers to draw on users' past experience in order to develop products which facilitate intuitive interaction and ready acceptance of new technologies. Keywords: intuitive interaction, interface design, human factors
Exploring differences in referrals to a hospice at home service in two socio-economically distinct areas of Manchester, UK. | In order to provide equitable access to hospice at home palliative care services, it is important to identify the socio-economic factors associated with poorer access. In this population-based study we aimed to test the inverse care law by exploring how socio-economic status and other key demographic indicators were associated with referral rates in two distinct areas (Salford and Trafford) served by the same service. Secondary data from the UK National Census 2001, North West Cancer Intelligence Service (2004) and hospice at home service referral data (2004-06) was collated for both areas. Descriptive analysis profiled electoral ward characteristics whilst simple correlations and regression modelling estimated associations with referral rates. Referral rates were lower and cancer mortality higher in the most deprived areas (Salford). Referral rates were significantly associated with deprivation, particularly multiple deprivation, but not significantly associated with cancer mortality (service model and resources available were held constant). At the population level, the socio-economic characteristics of those referred to hospice at home rather than service provision strongly predicted referral rates. This has implications for the allocation and targeting of resources and contributes important findings to future work exploring equitable access at organizational and professional levels. |
School health guidelines to prevent unintentional injuries and violence. | Approximately two thirds of all deaths among children and adolescents aged 5-19 years result from injury-related causes: motor-vehicle crashes, all other unintentional injuries, homicide, and suicide. Schools have a responsibility to prevent injuries from occurring on school property and at school-sponsored events. In addition, schools can teach students the skills needed to promote safety and prevent unintentional injuries, violence, and suicide while at home, at work, at play, in the community, and throughout their lives. This report summarizes school health recommendations for preventing unintentional injury, violence, and suicide among young persons. These guidelines were developed by CDC in collaboration with specialists from universities and from national, federal, state, local, and voluntary agencies and organizations. They are based on an in-depth review of research, theory, and current practice in unintentional injury, violence, and suicide prevention; health education; and public health. Every recommendation is not appropriate or feasible for every school to implement. Schools should determine which recommendations have the highest priority based on the needs of the school and available resources. The guidelines include recommendations related to the following eight aspects of school health efforts to prevent unintentional injury, violence, and suicide: a social environment that promotes safety; a safe physical environment; health education curricula and instruction; safe physical education, sports, and recreational activities; health, counseling, psychological, and social services for students; appropriate crisis and emergency response; involvement of families and communities; and staff development to promote safety and prevent unintentional injuries, violence, and suicide. |
Virtual network security: threats, countermeasures, and challenges | Network virtualization has become increasingly prominent in recent years. It enables the creation of network infrastructures that are specifically tailored to the needs of distinct network applications and supports the instantiation of favorable environments for the development and evaluation of new architectures and protocols. Despite the wide applicability of network virtualization, the shared use of routing devices and communication channels leads to a series of security-related concerns. It is necessary to provide protection to virtual network infrastructures in order to enable their use in real, large scale environments. In this paper, we present an overview of the state of the art concerning virtual network security. We discuss the main challenges related to this kind of environment, some of the major threats, as well as solutions proposed in the literature that aim to deal with different security aspects. |
Skill-based Mission Generation: A Data-driven Temporal Player Modeling Approach | Games often interweave a story and a series of skill-based events into a complete sequence: a mission. An automated mission generator for skill-based games is one way to synthesize designer requirements with player differences to create missions tailored to each player. We argue for the need for predictive, data-driven player models that meet the requirements of: (1) predictive power, (2) accounting for temporal changes in player abilities, (3) accuracy in the face of little or missing player data, (4) efficiency with large sets of data, and (5) sufficiency for algorithmic generation. We present a tensor factorization approach to modeling and predicting player performance on skill-based tasks that meets the above requirements, and a combinatorial optimization approach to mission generation that interweaves an author's preferred story structures and an author's preferred player performance over a mission (a kind of difficulty curve) with modeled player performance.
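A minimal stand-in for the temporal tensor model: factor a (player, task, time) performance tensor into rank-R components trained by SGD. The dimensions, rank, and learning rate are invented, and this generic CP-style factorization only sketches the family of models the paper draws on, not its exact method.

```python
# CP-style factorization of player x task x time performance, fit by SGD.
import numpy as np

rng = np.random.default_rng(0)
n_players, n_tasks, n_times, rank = 50, 20, 10, 4
P = rng.normal(0, 0.1, (n_players, rank))   # player skill factors
T = rng.normal(0, 0.1, (n_tasks, rank))     # task difficulty factors
S = rng.normal(0, 0.1, (n_times, rank))     # temporal factors (skill drift)

def predict(p, t, s):
    return float(np.sum(P[p] * T[t] * S[s]))

def sgd_step(p, t, s, y, lr=0.05):
    err = predict(p, t, s) - y              # squared-error gradient
    gP, gT, gS = err * T[t] * S[s], err * P[p] * S[s], err * P[p] * T[t]
    P[p] -= lr * gP; T[t] -= lr * gT; S[s] -= lr * gS
```

Missing (player, task, time) cells are simply never visited during training, which is one way such factorizations cope with little or missing player data.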
Quality of Service Based Routing: A Performance Perspective | Recent studies provide evidence that Quality of Service (QoS) routing can provide increased network utilization compared to routing that is not sensitive to QoS requirements of traffic. However, there are still strong concerns about the increased cost of QoS routing, both in terms of more complex and frequent computations and increased routing protocol overhead. The main goals of this paper are to study these two cost components, and propose solutions that achieve good routing performance with reduced processing cost. First, we identify the parameters that determine the protocol traffic overhead, namely (a) policy for triggering updates, (b) sensitivity of this policy, and (c) clamp down timers that limit the rate of updates. Using simulation, we study the relative significance of these factors and investigate the relationship between routing performance and the amount of update traffic. In addition, we explore a range of design options to reduce the processing cost of QoS routing algorithms, and study their effect on routing performance. Based on the conclusions of these studies, we develop extensions to the basic QoS routing, that can achieve good routing performance with limited update generation rates. The paper also addresses the impact on the results of a number of secondary factors such as topology, high level admission control, and characteristics of network traffic. |
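The three overhead knobs the paper identifies can be pictured in a few lines; the class below is an illustration with invented names and defaults, combining a relative-change trigger policy, its sensitivity threshold, and a clamp-down timer that bounds the update rate.

```python
# Link-state advertisement trigger: relative-change policy + clamp-down timer.
import time

class UpdateTrigger:
    def __init__(self, threshold=0.5, clamp_down_s=30.0):
        self.threshold = threshold          # trigger sensitivity
        self.clamp_down_s = clamp_down_s    # minimum spacing between updates
        self.last_advertised = None
        self.last_update_t = float('-inf')

    def should_advertise(self, available_bw, now=None):
        now = time.monotonic() if now is None else now
        if self.last_advertised is None:
            changed = True                  # always advertise the first value
        else:
            base = max(self.last_advertised, 1e-9)
            changed = abs(available_bw - self.last_advertised) / base >= self.threshold
        if changed and now - self.last_update_t >= self.clamp_down_s:
            self.last_advertised, self.last_update_t = available_bw, now
            return True
        return False
```

Raising `threshold` or `clamp_down_s` cuts protocol traffic at the cost of staler state, which is exactly the trade-off the paper quantifies.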
Lisinopril versus atenolol: Decrease in systolic versus diastolic blood pressure with converting enzyme inhibition | In a multicenter, parallel, double-blind study, lisinopril, a new converting enzyme inhibitor, was compared with atenolol in the treatment of mild to moderate essential hypertension. Four hundred ninety patients were randomized to once-a-day treatment with lisinopril 20 mg or atenolol 50 mg for 4 weeks, and the doses of lisinopril or atenolol were increased at 4-week intervals up to 80 mg or 200 mg, respectively, if sitting diastolic blood pressure (SDBP) was not well controlled. Lisinopril and atenolol reduced SDBP to a similar extent. All reductions from baseline in sitting diastolic and systolic blood pressure were significant (p < 0.01). Lisinopril produced a significantly greater reduction (p < 0.01) in sitting systolic blood pressure (SSBP) than atenolol. The predominant reduction in SSBP could not be explained on the basis of age, race, or severity of hypertension. It is suggested that the increase in arterial compliance reported for converting enzyme inhibitors could explain the predominant decrease in systolic blood pressure. |
Hierarchical Amharic Base Phrase Chunking Using HMM with Error Pruning | Segmentation of a text into non-overlapping syntactic units (chunks) has become an essential component of many applications of natural language processing. This paper presents an Amharic base phrase chunker that groups syntactically correlated words at different levels using an HMM. Rules are used to correct chunk phrases incorrectly chunked by the HMM. For the identification of phrase boundaries, the IOB2 chunk specification is selected and used in this work. To test the performance of the system, a corpus was collected from Amharic news outlets and books. The training and testing datasets were prepared using the 10-fold cross-validation technique. Test results on the corpus showed an average accuracy of 85.31% before applying the rules for error correction and an average accuracy of 93.75% after applying the rules.
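For readers unfamiliar with the IOB2 specification the paper adopts, the decoding step looks like this (well-formed tag sequences assumed; the example phrase is an invented English one for readability):

```python
# Recover chunks from IOB2 tags: B-X opens a chunk of type X, I-X extends it,
# O closes any open chunk. Assumes a well-formed tag sequence.
def iob2_chunks(tokens, tags):
    chunks, start, ctype = [], None, None
    for i, tag in enumerate(tags + ['O']):       # sentinel flushes last chunk
        if tag.startswith('B-') or tag == 'O':
            if ctype is not None:
                chunks.append((ctype, tokens[start:i]))
            start, ctype = (i, tag[2:]) if tag.startswith('B-') else (None, None)
    return chunks

print(iob2_chunks(['the', 'big', 'dog', 'barked'],
                  ['B-NP', 'I-NP', 'I-NP', 'B-VP']))
# [('NP', ['the', 'big', 'dog']), ('VP', ['barked'])]
```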
Transformerless micro-inverter for grid-connected photovoltaic systems | The leakage currents caused by high-frequency common-mode (CM) voltage have become a major concern in transformerless photovoltaic (PV) inverters. This paper presents a review of dc-ac converters applied to PV systems that can avoid the circulation of leakage currents. Looking for a lower-cost and higher-reliability solution, a 250 W PV transformerless micro-inverter prototype based on the bipolar full-bridge topology was built and tested. As confirmed by experimental results, this topology is able to maintain the CM voltage constant and prevent the circulation of CM currents through the circuit.
An Efficient and Fast Li-Ion Battery Charging System Using Energy Harvesting or Conventional Sources | This paper presents a multi-input battery charging system that is capable of increasing the charging efficiency of lithium-ion (Li-ion) batteries. The proposed battery charging system consists of three main building blocks: a pulse charger, a step-down dc–dc converter, and a power path controller. The pulse charger allows charging via a wall outlet or an energy harvesting system. It implements charge techniques that increase the battery charge efficiency of a Li-ion battery. The power path controller (PPC) functions as a power monitor and selects the optimal path for charging either via an energy harvesting system or an ac adapter. The step-down dc–dc converter provides an initial supply voltage to start up the energy harvesting system. The integrated circuit design is implemented on a CMOS 0.18 μm technology process. Experimental results verify that the proposed pulse charger reduces the charging time of 100 mAh and 45 mAh Li-ion batteries respectively by 37.35% and 15.56% and improves the charge efficiency by 3.15% and 3.27% when compared to the benchmark constant current-constant voltage charging technique. The step-down dc–dc converter has a maximum efficiency of 90% and the operation of the PPC is also verified by charging the battery via a thermoelectric energy harvesting system. |
Risk Factors for Rapid Glaucoma Disease Progression. | PURPOSE
To determine the intraocular and systemic risk factor differences between a cohort of rapid glaucoma disease progressors and nonrapid disease progressors.
DESIGN
Retrospective case-control study.
METHODS
Setting: Five private ophthalmology clinics.
STUDY POPULATION
Forty-eight rapidly progressing eyes (progression ≥1 dB mean deviation [MD]/year) and 486 non-rapidly progressing eyes (progression <1 dB MD/year). Patients were eligible if they had a diagnosis of glaucoma from their ophthalmologist and if they had greater than or equal to 5 Humphrey visual fields (24-2) conducted. Patients were excluded if their sequential visual fields showed an improvement in MD or if they had greater than 5 dB MD variation in between visits. Patients with obvious neurologic fields were excluded.
OBSERVATION PROCEDURE
Clinical and demographic data (age, sex, central corneal thickness [CCT], intraocular pressure [IOP], refraction, medications), as well as medical, surgical, and ocular histories, were collected.
MAIN OUTCOME MEASURES
Risk factor differences between the cohorts were measured using the independent t test, Wald χ2, and binomial regression analysis.
RESULTS
Rapid progressors were older, had significantly lower CCT and baseline IOPs, and were more likely to have pseudoexfoliation, disc haemorrhages, ocular medication changes, and IOP-lowering surgery. They also had significantly higher rates of cardiovascular disease and hypotension. Subjects with cardiovascular disease were 2.33 times more likely to develop rapidly progressive glaucoma disease despite significantly lower mean and baseline IOPs.
CONCLUSION
Cardiovascular disease is an important risk factor for rapid glaucoma disease progression irrespective of IOP control. |
CO-CREATION EXPERIENCES : THE NEXT PRACTICE IN VALUE CREATION | Consumers today have more choices of products and services than ever before, but they seem dissatisfied. Firms invest in greater product variety but are less able to differentiate themselves. Growth and value creation have become the dominant themes for managers. In this paper, we explain this paradox. The meaning of value and the process of value creation are rapidly shifting from a product- and firm-centric view to personalized consumer experiences. Informed, networked, empowered, and active consumers are increasingly co-creating value with the firm. The interaction between the firm and the consumer is becoming the locus of value creation and value extraction. As value shifts to experiences, the market is becoming a forum for conversation and interactions between consumers, consumer communities, and firms. It is this dialogue, access, transparency, and understanding of risk-benefits that is central to the next practice in value creation.
DeepTrend: A Deep Hierarchical Neural Network for Traffic Flow Prediction | In this paper, we consider the temporal pattern in traffic flow time series and implement a deep learning model for traffic flow prediction. Detrending-based methods decompose the original flow series into trend and residual series, in which the trend describes the fixed temporal pattern in traffic flow and the residual series is used for prediction. Inspired by the detrending method, we propose DeepTrend, a deep hierarchical neural network for traffic flow prediction that considers and extracts the time-variant trend. DeepTrend has two stacked layers: an extraction layer and a prediction layer. The extraction layer, a fully connected layer, is used to extract the time-variant trend in traffic flow by feeding it the original flow series concatenated with the corresponding simple average trend series. The prediction layer, an LSTM layer, is used to make the flow prediction by feeding it the trend obtained from the output of the extraction layer and the calculated residual series. To make the model more effective, DeepTrend is first pre-trained layer by layer and then fine-tuned as an entire network. Experiments show that DeepTrend can noticeably boost the prediction performance compared with some traditional prediction models and an LSTM with detrending-based methods.
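The two-layer stacking can be written down schematically; the PyTorch sketch below fixes a window length and hidden size of our choosing and omits the layer-wise pre-training, so it shows the architecture's shape rather than the paper's exact configuration.

```python
# Schematic DeepTrend: FC extraction layer -> LSTM prediction layer.
import torch
import torch.nn as nn

class DeepTrend(nn.Module):
    def __init__(self, w=12, hidden=64):          # w: window length (assumed)
        super().__init__()
        # Extraction: maps [flow ; simple average trend] to a time-variant trend.
        self.extract = nn.Sequential(nn.Linear(2 * w, hidden), nn.ReLU(),
                                     nn.Linear(hidden, w))
        # Prediction: LSTM over per-step [trend, residual] pairs.
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, flow, avg_trend):           # both shaped (batch, w)
        trend = self.extract(torch.cat([flow, avg_trend], dim=1))
        residual = flow - trend
        seq = torch.stack([trend, residual], dim=2)   # (batch, w, 2)
        h, _ = self.lstm(seq)
        return self.out(h[:, -1])                 # next-step flow

model = DeepTrend()
pred = model(torch.randn(8, 12), torch.randn(8, 12))  # -> shape (8, 1)
```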
Dynamic thermal management of air cooled data centers | Increases in server power dissipation have placed significant pressure on traditional data center thermal management systems. Traditional systems utilize computer room air conditioning (CRAC) units to pressurize a raised floor plenum with cool air that is passed to equipment racks via ventilation tiles distributed throughout the raised floor. Temperature is typically controlled at the hot air return of the CRAC units, away from the equipment racks. Due primarily to a lack of distributed environmental sensing, these CRAC systems are often operated conservatively, resulting in reduced computational density and added operational expense. This paper introduces a data center environmental control system that utilizes a distributed sensor network to manipulate conventional CRAC units within an air-cooled environment. The sensor network is attached to standard racks and provides a direct measurement of the environment in close proximity to the computational resources. A calibration routine is used to characterize the response of each sensor in the network to individual CRAC actuators. A cascaded control algorithm is used to evaluate the data from the sensor network and manipulate the supply air temperature and flow rate from individual CRACs to ensure thermal management while reducing operational expense. The combined controller and sensor network has been deployed in a production data center environment. Results will be presented that demonstrate the performance of the system and evaluate the energy savings compared with a conventional data center environmental control architecture.
Flotation Frothers: Review of Their Classifications, Properties and Preparation | The importance of frothers in flotation is widely acknowledged, particularly in terms of their role with respect to bubble size, and the stability and mobility of the froth phase. These factors play a significant role in the kinetic viability of the process, and in the overall recovery and grade that can be achieved from a flotation cell or circuit. About 60 years ago, Wrobel [1] presented a comprehensive review of flotation frothers, their action, properties and structures. However, during the six decades since, flotation reagent technology has undergone considerable evolution and innovation. The aim of the present review is to provide an updated, comprehensive database comprising recent developments in flotation frother technology as well as some historical aspects. Different types of flotation frothers are discussed with regard to their classifications, properties, and preparation methods. New classification schemes have been suggested in parallel with the introduction of new characterization criteria. New classes of frothers are in fact conventional frothers modified by incorporating certain functional groups or atoms into their molecular structure. Like particle surface modifiers, frothers have also been influenced by biotechnology, leading to the introduction of a new class of frothers known as "biofrothers".
Serum insulin-like growth factor-I in diabetic retinopathy | PURPOSE
To assess the relationship between serum insulin-like growth factor I (IGF-I) and diabetic retinopathy.
METHODS
This was a clinic-based cross-sectional study conducted at the Emory Eye Center. A total of 225 subjects were classified into four groups, based on diabetes status and retinopathy findings: no diabetes mellitus (no DM; n=99), diabetes with no background diabetic retinopathy (no BDR; n=42), nonproliferative diabetic retinopathy (NPDR; n=41), and proliferative diabetic retinopathy (PDR; n=43). Key exclusion criteria included type 1 diabetes and disorders that affect serum IGF-I levels, such as acromegaly. Subjects underwent dilated fundoscopic examination and were tested for hemoglobin A1c, serum creatinine, and serum IGF-I, between December 2009 and March 2010. Serum IGF-I levels were measured using an immunoassay that was calibrated against an international standard.
RESULTS
Between the groups, there were no statistical differences with regard to age, race, or sex. Overall, diabetic subjects had similar serum IGF-I concentrations compared to nondiabetic subjects (117.6 µg/l versus 122.0 µg/l; p=0.497). There was no significant difference in serum IGF-I levels among the study groups (no DM=122.0 µg/l, no BDR=115.4 µg/l, NPDR=118.3 µg/l, PDR=119.1 µg/l; p=0.897). Among the diabetic groups, the mean IGF-I concentration was similar between insulin-dependent and non-insulin-dependent subjects (116.8 µg/l versus 118.2 µg/l; p=0.876). The univariate analysis of IGF-I levels demonstrated statistical significance with regard to age (p=0.002, r=-0.20), body mass index (p=0.008, r=-0.18), and race (p=0.040).
CONCLUSIONS
There was no association between serum IGF-I concentrations and diabetic retinopathy in this large cross-sectional study. |
Radar micro-Doppler feature extraction using the spectrogram and the cepstrogram | The radar micro-Doppler signature of a target is determined by parts of the target moving or rotating in addition to the main body motion. The relative motion of parts is characteristic for different classes of targets, e.g. the flapping motion of a bird's wings vs. the spinning of propeller blades. In the present study, the micro-Doppler signature is exploited to discriminate birds and small unmanned aerial vehicles (UAVs). Emphasis is on micro-Doppler features that can be extracted from spectrograms and cepstrograms, enabling the human eye or indeed automatic classification algorithms to make a quick distinction between man-made objects and bio-life. In addition, in the case of man-made objects, it is desirable to further characterize the type of mini-UAV to aid threat assessment. This characterization is also done on the basis of micro-Doppler features.
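A minimal sketch of the two representations named above, assuming standard STFT parameters: the spectrogram of a radar return, and a cepstrogram formed by taking the per-frame inverse FFT of the log-magnitude spectrum, which exposes periodic micro-Doppler modulation such as blade flash.

```python
# Sketch of spectrogram/cepstrogram extraction; parameters are illustrative.
import numpy as np
from scipy import signal

def spectro_cepstro(x, fs, nperseg=256, noverlap=192):
    # Spectrogram: time-frequency view of the radar return.
    f, t, Sxx = signal.spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    # Cepstrogram: per-frame inverse FFT of the log-magnitude spectrum
    # (quefrency x time), highlighting periodic micro-Doppler modulation.
    log_mag = np.log(Sxx + 1e-12)                 # avoid log(0)
    cep = np.abs(np.fft.irfft(log_mag, axis=0))
    return f, t, Sxx, cep

fs = 10_000
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 800 * t + 3 * np.sin(2 * np.pi * 40 * t))  # toy micro-Doppler signal
f, frames, Sxx, cep = spectro_cepstro(x, fs)
```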
Abortion and subsequent depressive symptoms: an analysis of the National Longitudinal Study of Adolescent Health. | BACKGROUND
Twenty states currently require that women seeking abortion be counseled on possible psychological responses, with six states stressing negative responses. The majority of research finds that women whose unwanted pregnancies end in abortion do not subsequently have adverse mental health outcomes; scant research examines this relationship for young women.
METHODS
Four waves of data from the National Longitudinal Study of Adolescent Health were analyzed. Population-averaged lagged logistic and linear regression models were employed to test the relationship between pregnancy resolution outcome and subsequent depressive symptoms, adjusting for prior depressive symptoms, history of traumatic experiences, and sociodemographic covariates. Depressive symptoms were measured using a nine-item version of the Center for Epidemiologic Studies Depression scale. Analyses were conducted among two subsamples of women whose unwanted first pregnancies were resolved in either abortion or live birth: (1) 856 women with an unwanted first pregnancy between Waves 2 and 3; and (2) 438 women with an unwanted first pregnancy between Waves 3 and 4 (unweighted n's).
RESULTS
In unadjusted and adjusted linear and logistic regression analyses for both subsamples, there was no association between having an abortion after an unwanted first pregnancy and subsequent depressive symptoms. In fully adjusted models, the most recent measure of prior depressive symptoms was consistently associated with subsequent depressive symptoms.
CONCLUSIONS
In a nationally representative, longitudinal dataset, there was no evidence that young women who had abortions were at increased risk of subsequent depressive symptoms compared with those who gave birth after an unwanted first pregnancy.
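For readers unfamiliar with population-averaged lagged models, the sketch below shows the general shape of such an analysis as a GEE logistic regression in statsmodels; the column names and data are synthetic placeholders, not the study's specification.

```python
# Illustrative population-averaged (GEE) logistic regression; synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "person_id": np.arange(n) // 2,       # repeated waves per person
    "high_cesd": rng.integers(0, 2, n),   # subsequent depressive symptoms (binary)
    "abortion": rng.integers(0, 2, n),    # unwanted pregnancy resolved in abortion
    "prior_cesd": rng.normal(size=n),     # lagged (prior) depressive symptoms
})
# Population-averaged logistic model with a lagged outcome as covariate.
model = smf.gee("high_cesd ~ abortion + prior_cesd", groups="person_id",
                data=df, family=sm.families.Binomial())
print(model.fit().summary())
```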
Beyond GDP: The Quest for a Measure of Social Welfare | Part I. The first part of the paper is devoted to monetary indicators of social welfare. It shows which methods of quantitatively estimating aggregate wealth and well-being are available in modern economic theory apart from the traditional GDP measure, and discusses the limitations of these methods. The author then shows which measures of welfare are adequate in a dynamic context, considering the problems of intertemporal welfare analysis using Net National Product (NNP) for sustainability policy and in the context of concern for the well-being of future generations.
Export of DOC from forested catchments on the Precambrian Shield of Central Ontario: Clues from 13C and 14C | Export of dissolved organic carbon (DOC) from forested catchments is governed by competing processes of production, decomposition, sorption and flushing. To examine the sources of DOC, carbon isotopes (13C and 14C) were analyzed in DOC from surface waters, groundwaters and soils in a small forested catchment on the Canadian Shield in central Ontario. A significant fraction (greater than 50%) of DOC in major inflows to the lake is composed of carbon incorporated into organic matter, solubilized and flushed into the stream within the last 40 years. In contrast, 14C in groundwater DOC was old, indicating extensive recycling of forest-floor-derived organic carbon in the soil column before elution to groundwater in the lower B and C soil horizons. A small upland basin had a wide range in 14C, from old groundwater values at baseflow under dry basin conditions to relatively modern values during high flow or wetter antecedent conditions. Wetlands export mainly recently fixed carbon with little seasonal range. DOC in streams entering the small lake may be composed of two pools: an older recalcitrant pool delivered by groundwater and a young labile pool derived from recent organic matter. The relative proportion of these two pools changes seasonally due to changes in water flowpaths and organic carbon dynamics. Although changes in local climate (temperature and/or precipitation) may alter the relative proportions of the old and young pools, the older pool is likely to be more refractory to sedimentation and decomposition in the lake setting. Delivery of older-pool DOC from the catchment and the susceptibility of this older pool to photochemical decomposition may consequently be important in governing the minimum DOC concentration limit in lakes.
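The two-pool interpretation lends itself to a simple two-end-member mixing calculation; the sketch below uses assumed radiocarbon end-member values (expressed as fraction modern) purely for illustration, not the paper's measurements.

```python
# Two-end-member mixing sketch; end-member F14C values are assumptions.
def old_pool_fraction(f14c_sample, f14c_old=0.85, f14c_young=1.10):
    # Mass balance: f_old * F_old + (1 - f_old) * F_young = F_sample
    return (f14c_young - f14c_sample) / (f14c_young - f14c_old)

# A stream sample at fraction modern 1.00 would imply ~40% old-pool DOC
# under these assumed end members.
print(round(old_pool_fraction(1.00), 2))  # 0.4
```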
The production data-based similarity coefficient versus Jaccard's similarity coefficient | This paper compares the performance of Jaccard's similarity coefficient with the production data-based similarity coefficient. A number of machine-component charts taken from the literature or randomly generated are used to form machine-component groups. Then, the sum of intercellular and intracellular material handling costs for each machine-component group is calculated and used as a basis for performance evaluation of the two similarity coefficients. INTRODUCTION The machine-component grouping process is a basic step in the implementation of cellular manufacturing, in which part families are processed in dedicated machine cells, each capable of processing one or more part families. Cellular manufacturing improves productivity through reduction in setup times [2,5]. There are a number of different approaches to the machine-component grouping problem; among them, the similarity coefficient method is more effective in forming machine cells [3,4,6,7,12]. In this method, a measure of similarity (similarity coefficient) is defined between two machines (parts) and a clustering algorithm is used to group machines into machine cells [6]. The similarity coefficient between two machines is defined as the number of parts visiting both machines divided by the number of parts visiting either of the two machines [6,7].
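The Jaccard definition given above is straightforward to compute; a minimal sketch:

```python
def jaccard_similarity(parts_on_i, parts_on_j):
    """Jaccard similarity between two machines: the number of parts visiting
    both machines divided by the number of parts visiting either machine."""
    both = len(parts_on_i & parts_on_j)
    either = len(parts_on_i | parts_on_j)
    return both / either if either else 0.0

# Example: machine 1 processes parts {a, b, c}; machine 2 processes {b, c, d}.
print(jaccard_similarity({"a", "b", "c"}, {"b", "c", "d"}))  # 0.5
```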
The acute management of intracerebral hemorrhage: a clinical review. | Intracerebral hemorrhage (ICH) is a devastating disease with high rates of mortality and morbidity. The major risk factors for ICH include chronic arterial hypertension and oral anticoagulation. After the initial hemorrhage, hematoma expansion and perihematoma edema result in secondary brain damage and worsened outcome. A rapid onset of focal neurological deficit with clinical signs of increased intracranial pressure is strongly suggestive of a diagnosis of ICH, although cranial imaging is required to differentiate it from ischemic stroke. ICH is a medical emergency and initial management should focus on urgent stabilization of cardiorespiratory variables and treatment of intracranial complications. More than 90% of patients present with acute hypertension, and there is some evidence that acute arterial blood pressure reduction is safe and associated with slowed hematoma growth and reduced risk of early neurological deterioration. However, early optimism that outcome might be improved by the early administration of recombinant factor VIIa (rFVIIa) has not been substantiated by a large phase III study. ICH is the most feared complication of warfarin anticoagulation, and the need to arrest intracranial bleeding outweighs all other considerations. Treatment options for warfarin reversal include vitamin K, fresh frozen plasma, prothrombin complex concentrates, and rFVIIa. There is no evidence to guide the specific management of antiplatelet therapy-related ICH. With the exceptions of placement of a ventricular drain in patients with hydrocephalus and evacuation of a large posterior fossa hematoma, the timing and nature of other neurosurgical interventions is also controversial. There is substantial evidence that management of patients with ICH in a specialist neurointensive care unit, where treatment is directed toward monitoring and managing cardiorespiratory variables and intracranial pressure, is associated with improved outcomes. Attention must be given to fluid and glycemic management, minimizing the risk of ventilator-acquired pneumonia, fever control, provision of enteral nutrition, and thromboembolic prophylaxis. There is an increasing awareness that aggressive management in the acute phase can translate into improved outcomes after ICH. |
A cross-program investigation of students' perceptions of agile methods | Research was conducted on using agile methods in software engineering education. This paper explores the perceptions of agile practices among students at five different academic levels. Information was gathered through the collection of quantitative and qualitative data over three academic years, and analysis reveals student experiences that were mainly positive but also included some negatives. Student opinions indicate a preference to continue using agile practices in the workplace if allowed. A way these findings may potentially be extrapolated to industrial settings is discussed. Finally, this report should encourage other academics considering the adoption of agile methods in their computer science or software engineering curricula.
The Butterfly Methodology: A Gateway-free Approach for Migrating Legacy Information Systems | The problems posed by mission-critical legacy systems (e.g., brittleness, inflexibility, isolation, non-extensibility, lack of openness) are well known, but practical solutions have been slow to emerge. Generally, organisations attempt to keep their legacy systems operational while developing mechanisms which allow the legacy systems to interoperate with new, modern systems that provide additional functionality. The most mature approach employs gateways to provide this interoperability. However, gateways introduce considerable complexity in their attempt to maintain consistency between the legacy and target systems. This paper presents an innovative gateway-free approach to migrating legacy information systems in a mission-critical environment: the Butterfly Methodology. The fundamental premise of this methodology is to question the need for the parallel operation of the legacy and target systems during migration.
Manifold surface reconstruction of an environment from sparse Structure-from-Motion data | The majority of methods for the automatic surface reconstruction of an environment from an image sequence have two steps: Structure-from-Motion and dense stereo. From the computational standpoint, it would be interesting to avoid dense stereo and to generate a surface directly from the sparse cloud of 3D points and their visibility information provided by Structure-from-Motion. Previous attempts to solve this problem are currently very limited: the surface is non-manifold or has zero genus, and the experiments are done on small scenes or objects using a few dozen images. Our solution does not have these limitations. Furthermore, we experiment with hand-held or helmet-held catadioptric cameras moving in a city and generate 3D models for which the camera trajectory can be longer than one kilometer.
An Adapted Lesk Algorithm for Word Sense Disambiguation Using WordNet | This paper presents an adaptation of Lesk's dictionary-based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. The method is evaluated using the English lexical sample data from the Senseval-2 word sense disambiguation exercise, and attains an overall accuracy of 32%. This represents a significant improvement over the 16% and 23% accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation systems.
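A minimal sketch of a WordNet-based Lesk variant in the spirit of the adaptation described above, using NLTK; the overlap scoring and the choice of relations (hypernyms and hyponyms) are simplifications, not the authors' exact algorithm.

```python
# Simplified WordNet-based Lesk; requires the WordNet corpus
# (run nltk.download('wordnet') once beforehand).
from nltk.corpus import wordnet as wn

def adapted_lesk(context_words, target):
    context = set(w.lower() for w in context_words)
    best_sense, best_score = None, -1
    for sense in wn.synsets(target):
        # Expand each sense's gloss with glosses of related synsets.
        gloss_words = set(sense.definition().lower().split())
        for rel in sense.hypernyms() + sense.hyponyms():
            gloss_words |= set(rel.definition().lower().split())
        score = len(gloss_words & context)  # overlap with the context
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

print(adapted_lesk("I went to the bank to deposit money".split(), "bank"))
```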
Virtual reality exposure therapy. | It has been proposed that virtual reality (VR) exposure may be an alternative to standard in vivo exposure. Virtual reality integrates real-time computer graphics, body tracking devices, visual displays, and other sensory input devices to immerse a participant in a computer-generated virtual environment. Virtual reality exposure is potentially an efficient and cost-effective treatment of anxiety disorders. VR exposure therapy reduced the fear of heights in the first controlled study of virtual reality in treatment of a psychiatric disorder. A case study supported the efficacy of VR exposure therapy for the fear of flying. The potential for virtual reality exposure treatment for these and other disorders is explored, and therapeutic issues surrounding the delivery of VR exposure are discussed. |
The impact of mobile monitoring technologies on glycosylated hemoglobin in diabetes: a systematic review. | BACKGROUND
A new development in the field of telehealth is the use of mobile health technologies (mhealth) to help patients record and track medical information. Mhealth appears particularly advantageous for conditions that require intense and ongoing monitoring, such as diabetes, and where people are of working age and not disabled. This review aims to evaluate the evidence for the effectiveness of mhealth interventions in diabetes management on glycosylated hemoglobin.
METHOD
A comprehensive search strategy was developed and applied to eight electronic databases to identify studies that investigated the clinical effectiveness of mobile-based applications that allowed patients to record and send their blood glucose readings to a central server. The eligibility of 8543 papers was assessed against the selection criteria, and 24 papers were reviewed. All studies reviewed were assessed for quality using a standardized quality assessment tool.
RESULTS
Results for patients with type 1 and type 2 diabetes were examined separately. Study variability and poor reporting made comparison difficult, and most studies had important methodological weaknesses. Evidence on the effectiveness of mhealth interventions for diabetes was inconsistent for both types of diabetes and remains weak. |
Influence of an algal triacylglycerol containing docosahexaenoic acid (22 : 6n-3) and docosapentaenoic acid (22 : 5n-6) on cardiovascular risk factors in healthy men and women. | The intake of long-chain n-3 PUFA, including DHA (22 : 6n-3), is associated with a reduced risk of CVD. Schizochytrium sp. are an important primary source of DHA in the marine food chain but they also provide substantial quantities of the n-6 PUFA docosapentaenoic acid (22 : 5n-6; DPA). The effect of this oil on cardiovascular risk factors was evaluated using a double-blind randomised placebo-controlled parallel-design trial in thirty-nine men and forty women. Subjects received 4 g oil/d for 4 weeks; the active treatment provided 1.5 g DHA and 0.6 g DPA. Active treatment increased plasma concentrations of arachidonic acid, adrenic acid, DPA and DHA by 21, 11, 11 and 88 mg/l respectively and the proportions of DPA and DHA in erythrocyte phospholipids by 78 and 27 % respectively. Serum total, LDL- and HDL-cholesterol increased by 0.33 mmol/l (7.3 %), 0.26 mmol/l (10.4 %) and 0.14 mmol/l (9.0 %) compared with placebo (all P≤0.001). Factor VII (FVII) coagulant activity increased by 12 % following active treatment (P = 0.006). There were no significant differences between treatments in LDL size, blood pressure, plasma glucose, serum C-reactive protein, plasma FVII antigen, FVII activated, fibrinogen, von Willebrand factor, tocopherol or carotenoid concentrations, plasminogen activator inhibitor-1, creatine kinase or troponin-I activities, haematology or liver function tests or self-reported adverse effects. Overall, the oil was well tolerated and did not adversely affect cardiovascular risk.
Automatic Forgery of Cryptographically Consistent Messages to Identify Security Vulnerabilities in Mobile Services | Most mobile apps today require access to remote services, and many of them also require users to be authenticated in order to use their services. To ensure the security between the client app and the remote service, app developers often use cryptographic mechanisms such as encryption (e.g., HTTPS), hashing (e.g., MD5, SHA1), and signing (e.g., HMAC) to ensure the confidentiality and integrity of the network messages. However, these cryptographic mechanisms can only protect the communication security, and server-side checks are still needed because malicious clients owned by attackers can generate any messages they wish. As a result, incorrect or missing server side checks can lead to severe security vulnerabilities including password brute-forcing, leaked password probing, and security access token hijacking. To demonstrate such a threat, we present AUTOFORGE, a tool that can automatically forge valid request messages from the client side to test whether the server side of an app has ensured the security of user accounts with sufficient checks. To enable these security tests, a fundamental challenge lies in how to forge a valid cryptographically consistent message such that it can be consumed by the server. We have addressed this challenge with a set of systematic techniques, and applied them to test the server side implementation of 76 popular mobile apps (each of which has over 1,000,000 installs). Our experimental results show that among these apps, 65 (86%) of their servers are vulnerable to password brute-forcing attacks, all (100%) are vulnerable to leaked password probing attacks, and 9 (12%) are vulnerable to Facebook access token hijacking attacks. |
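To illustrate what a "cryptographically consistent" forged request looks like, the sketch below signs an attacker-chosen message with HMAC-SHA1, as many apps do; the field names, signing scheme, and key are hypothetical, not taken from AUTOFORGE.

```python
# Hypothetical client-side request signing; illustrates why a valid
# signature alone cannot stop malicious clients.
import hashlib
import hmac

def sign_request(fields, key):
    # Canonicalize fields, then append an HMAC-SHA1 signature over them.
    payload = "&".join(f"{k}={v}" for k, v in sorted(fields.items()))
    sig = hmac.new(key, payload.encode(), hashlib.sha1).hexdigest()
    return payload + "&sig=" + sig

# Any attacker-chosen password guess still carries a valid signature, so only
# server-side checks (e.g., rate limiting, lockouts) can stop brute-forcing.
print(sign_request({"user": "alice", "password": "guess123"}, b"app-secret"))
```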
A Computer Vision Technique to Detect Phishing Attacks | Phishing refers to cybercrime that uses social engineering and technical subterfuge to fool online users into revealing sensitive information such as a username, password, bank account number or social security number. In this paper, we propose a novel solution to defend against zero-day phishing attacks. Our proposed approach is a combination of whitelist and visual-similarity-based techniques. We use a computer vision technique, the SURF detector, to extract discriminative keypoint features from both suspicious and targeted websites. These are then used to compute the degree of similarity between the legitimate and suspicious pages. Our proposed solution is efficient, covers a wide range of website phishing attacks and results in a low false-positive rate.
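A rough sketch of the keypoint-matching idea with OpenCV; since SURF requires the opencv-contrib build (cv2.xfeatures2d.SURF_create), ORB is used here as a freely available stand-in detector, so this is not the paper's exact pipeline.

```python
# Keypoint-based page similarity sketch; ORB stands in for SURF.
import cv2

def page_similarity(img_a_path, img_b_path):
    a = cv2.imread(img_a_path, cv2.IMREAD_GRAYSCALE)  # suspicious page screenshot
    b = cv2.imread(img_b_path, cv2.IMREAD_GRAYSCALE)  # whitelisted target screenshot
    orb = cv2.ORB_create()
    ka, da = orb.detectAndCompute(a, None)
    kb, db = orb.detectAndCompute(b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(da, db, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    # Similarity degree: fraction of keypoints with a good match.
    return len(good) / max(len(ka), len(kb), 1)
```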
Brain Tumor Type Classification via Capsule Networks | Brain tumor is considered one of the deadliest and most common forms of cancer in both children and adults. Consequently, determining the correct type of brain tumor in the early stages is of significant importance to devise a precise treatment plan and predict the patient's response to the adopted treatment. In this regard, there has been a recent surge of interest in designing Convolutional Neural Networks (CNNs) for the problem of brain tumor type classification. However, CNNs typically require large amounts of training data and cannot properly handle input transformations. Capsule networks (referred to as CapsNets) are brand-new machine learning architectures, proposed very recently to overcome these shortcomings of CNNs and poised to revolutionize deep learning solutions. Of particular interest to this work is that capsule networks are robust to rotation and affine transformation and require far less training data, which is the case when processing medical image datasets, including brain Magnetic Resonance Imaging (MRI) images. In this paper, we focus on achieving the following four objectives: (i) adopt and incorporate CapsNets for the problem of brain tumor classification to design an improved architecture which maximizes the accuracy of the classification problem at hand; (ii) investigate the over-fitting problem of CapsNets based on a real set of MRI images; (iii) explore whether CapsNets are capable of providing a better fit for whole brain images or just the segmented tumor; and (iv) develop a visualization paradigm for the output of the CapsNet to better explain the learned features. Our results show that the proposed approach can successfully outperform CNNs for the brain tumor classification problem.
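As a small concrete piece of the CapsNet machinery, the sketch below implements the standard capsule "squash" nonlinearity, which scales each capsule's output vector so its length lies in (0, 1) and can be read as the probability that an entity (here, a tumor type) is present; dimensions are illustrative.

```python
# Standard capsule "squash" nonlinearity; tensor shapes are illustrative.
import torch

def squash(s, dim=-1, eps=1e-8):
    # Scale each vector by ||s||^2 / (1 + ||s||^2) along its own direction,
    # so output lengths fall in (0, 1).
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / torch.sqrt(sq_norm + eps)

capsule_out = squash(torch.randn(32, 10, 16))  # batch of 10 class capsules, 16-D each
print(capsule_out.norm(dim=-1).max())          # all lengths < 1
```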