title | abstract |
---|---|
Combined orthodontic and orthognathic surgical treatment for the correction of skeletal anterior open-bite malocclusion: a systematic review on vertical stability. | PURPOSE
To evaluate vertical stability after combined orthodontic surgical treatment of skeletal anterior open-bite malocclusion.
MATERIALS AND METHODS
A literature search was performed to locate studies pertaining to vertical stability after combined orthodontic surgical treatment of skeletal anterior open-bite malocclusion. Data from the identified studies were extracted and assessed for quality. Short-term and long-term changes in the following variables were evaluated: overbite; mandibular plane, palatal plane, and intermaxillary angles; and anterior facial height.
RESULTS
Nine studies, all retrospective, were appropriate for inclusion after review. The postoperative follow-up period ranged from 1 to 18 years. A wide variation was present for post-treatment changes and relapse. Dentally, overbite changes showed a wide variation, with more long-term relapse observed in patients after Le Fort I osteotomy. Skeletally, the mandibular plane and intermaxillary angles showed greater long-term relapse after bimaxillary surgery than after Le Fort I osteotomy. The same trend was seen for the post-treatment increase in anterior facial height. In contrast, the palatal plane seemed to remain rather stable.
CONCLUSIONS
Vertical relapse is a characteristic in a certain number of patients after combined orthodontic surgical treatments regardless of surgery type. This can be observed dentally by an opening of the bite and skeletally by an increase in the mandibular plane and intermaxillary angles during long-term follow-up. Long-term skeletal relapse seems to be more common after bimaxillary surgery. |
Desmoplastic Melanoma of the Periorbital Region. | Desmoplastic melanoma (DM) is a rare subtype of melanoma and an even smaller proportion of periocular melanomas. Here, the authors report 2 cases of DM in the periocular region. Staged according to the American Joint Committee on Cancer (AJCC) eighth edition classification, patient 1 presented with a stage IIIC (pT4apN1cM0) DM in the left lateral canthus with upper and lower eyelid and patient 2 presented with a stage IIIB (T4aN1bM0) DM in the left brow and supraorbital region with a parotid lymph node metastasis. In both patients, the lesions were amelanotic, with inflammatory appearance, and had been noted for several years before the correct diagnosis was made. In both patients, wide excision led to large surgical defects, and perineural invasion prompted adjuvant radiation therapy postoperatively. Patient 2 was treated with an immune checkpoint inhibitor for his parotid metastasis. Ophthalmologists should be aware of DM, its neurotrophic nature, and potential to metastasize with locally advanced lesions. |
A survey on device-to-device (D2D) communication: Architecture and security issues | The number of connected devices is expected to increase radically in the near future, with estimates above 50 billion by 2020. Subscribers demand improved data rates, reduced latency, and increased system capacity. To meet these rising demands, cellular networks need to undergo suitable changes. To fulfill the growing needs of users and efficiently utilize the available scarce resources, device-to-device (D2D) communication is regarded as an important emerging technology for present and future cellular networks. It allows peer-to-peer communication between users, with improved spectral efficiency, energy efficiency, and system throughput. In this paper, a detailed survey on D2D communication is offered, along with the challenges D2D must overcome (such as resource allocation, security, and interference management) to become a successful paradigm of wireless networks. To meet subscriber needs, an architecture is proposed that addresses the various implementation challenges of D2D communication. The paper largely focuses on security in D2D communication and the possible attacks to which direct links are susceptible. To ensure secure D2D communication, a solution is proposed based on Internet Protocol Security (IPsec). |
A neighborhood graph based approach to regional co-location pattern discovery: a summary of results | Regional co-location patterns (RCPs) represent collections of feature types frequently located together in certain localities. For example, RCP <(Bar, Alcohol-Crimes), Downtown> suggests that a co-location pattern involving alcohol-related crimes and bars is often localized to downtown regions. Given a set of Boolean feature types, their geo-located instances, a spatial neighbor relation, and a prevalence threshold, the RCP discovery problem finds all prevalent RCPs (pairs of co-locations and their prevalence localities). RCP discovery is important in many societal applications, including public safety, public health, climate science and ecology. The RCP discovery problem involves three major challenges: (a) an exponential number of subsets of feature types, (b) an exponential number of candidate localities and (c) a tradeoff between accurately modeling pattern locality and achieving computational efficiency. Related work does not provide computationally efficient methods to discover all interesting RCPs with their natural prevalence localities. To address these limitations, this paper proposes a neighborhood graph based approach that discovers all interesting RCPs and is aware of a pattern's prevalence localities. We identify partitions based on the pattern instances and the neighborhood graph. We introduce two new interest measures, a regional participation ratio and a regional participation index, to quantify the strength of RCPs. We present two new algorithms, Pattern Space (PS) enumeration and Maximal Locality (ML) enumeration, and show that they are correct and complete. Experiments using real crime datasets show that ML pruning outperforms PS enumeration. |
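The participation measures named in this abstract build on the classic participation ratio and index from co-location mining. A minimal sketch, assuming a simple dict-based representation; the feature names, instance ids, and neighbor pairs below are illustrative, not taken from the paper:

```python
def participation_index(instances, pattern_rows):
    """instances: {feature: set of instance ids}
    pattern_rows: list of tuples joining one instance per feature
    (i.e. neighboring instance groups that realize the pattern)."""
    features = list(instances)
    ratios = []
    for k, f in enumerate(features):
        # participation ratio: fraction of f's instances that appear
        # in at least one instance of the pattern
        participating = {row[k] for row in pattern_rows}
        ratios.append(len(participating) / len(instances[f]))
    # participation index: minimum participation ratio over all features
    return min(ratios)

bars = {"b1", "b2", "b3", "b4"}
crimes = {"c1", "c2", "c3"}
neighbor_pairs = [("b1", "c1"), ("b2", "c1"), ("b2", "c2")]
pi = participation_index({"bar": bars, "crime": crimes}, neighbor_pairs)
print(round(pi, 2))  # bars: 2/4 participate, crimes: 2/3 -> min = 0.5
```

The regional variants in the paper restrict this computation to instances inside a candidate locality; the global version above is only the starting point.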
STRUCTURAL ORDER IN LIQUIDS INDUCED BY INTERFACES WITH CRYSTALS | Interfaces between solids and liquids are important for a range of materials processes, including soldering and brazing, liquid-phase sintering, crystal growth, and lubrication. There is a wealth of fundamental studies on solid-liquid interfaces in materials, primarily focused on thermodynamics (relative interface energies and segregation effects) from high-temperature wetting experiments, and this knowledge is often applied to processing design. Less is known about the atomistic structure at solid-liquid interfaces, mainly because of the difficulty involved in obtaining such information experimentally. This work reviews both theoretical and experimental studies of atomistic configurations at solid-liquid interfaces, focusing on the issue of ordering in the liquid adjacent to crystalline solids. |
Review of Modelling Techniques for In Vivo Muscle Force Estimation in the Lower Extremities during Strength Training | BACKGROUND
Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging.
METHODS
In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation.
RESULTS
Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises.
CONCLUSION
The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines. |
Effective graph classification based on topological and label attributes | Graph classification is an important data mining task, and various graph kernel methods have been proposed recently for this task. These methods have proven to be effective, but they tend to have high computational overhead. In this paper, we propose an alternative approach to graph classification that is based on feature vectors constructed from different global topological attributes, as well as global label features. The main idea is that the graphs from the same class should have similar topological and label attributes. Our method is simple and easy to implement, and via a detailed comparison on real benchmark datasets, we show that our topological and label feature-based approach delivers competitive classification accuracy, with significantly better results on those datasets that have large unlabeled graph instances. Our method is also substantially faster than most other graph kernels. © 2012 Wiley Periodicals, Inc. Statistical Analysis and Data Mining 5: 265–283, 2012 |
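The feature-vector idea in this abstract can be made concrete with a toy example. A minimal sketch of turning a graph into a vector of global topological attributes; the attribute set here (node count, edge count, density, average degree) is an assumed, simplified subset, not the authors' full feature list:

```python
def topo_features(n_nodes, edges):
    """Build a global-topological-attribute feature vector for an
    undirected graph given as a node count and an edge list."""
    degree = [0] * n_nodes
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    n_edges = len(edges)
    # density: fraction of possible undirected edges that are present
    density = 2 * n_edges / (n_nodes * (n_nodes - 1)) if n_nodes > 1 else 0.0
    avg_degree = sum(degree) / n_nodes
    return [n_nodes, n_edges, density, avg_degree]

# A 4-node path graph: 0-1-2-3
print(topo_features(4, [(0, 1), (1, 2), (2, 3)]))  # [4, 3, 0.5, 1.5]
```

Vectors like this can be fed to any standard classifier, which is what makes the approach fast relative to kernel methods that compare graphs pairwise.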
Topologies of single-phase inverters for small distributed power generators: an overview | This paper presents an overview of single-phase inverters developed for small distributed power generators. The functions of inverters in distributed power generation (DG) systems include dc-ac conversion, output power quality assurance, various protection mechanisms, and system controls. Unique requirements for small distributed power generation systems include low cost, high efficiency and tolerance for an extremely wide range of input voltage variations. These requirements have driven the inverter development toward simpler topologies and structures, lower component counts, and tighter modular design. Both single-stage and multiple-stage inverters have been developed for power conversion in DG systems. Single-stage inverters offer simple structure and low cost, but suffer from a limited range of input voltage variations and are often characterized by compromised system performance. On the other hand, multiple-stage inverters accept a wide range of input voltage variations, but suffer from high cost, complicated structure and low efficiency. Various circuit topologies are presented, compared, and evaluated against the requirements of power decoupling and dual-grounding, the capabilities for grid-connected or/and stand-alone operations, and specific DG applications in this paper, along with the identification of recent development trends of single-phase inverters for distributed power generators. |
TopicViz: interactive topic exploration in document collections | Existing methods for searching and exploring large document collections focus on surface-level matches to user queries, ignoring higher-level semantic structure. In this paper we show how topic modeling - a technique for identifying latent themes across a large collection of documents - can support semantic exploration. We present TopicViz: an interactive environment which combines traditional search and citation-graph exploration with a force-directed layout that links documents to the latent themes discovered by the topic model. We describe usage scenarios in which TopicViz supports rapid sensemaking on large document collections. |
Water quality in sustainable water management | Water pollution is a serious problem as almost 70% of India’s surface water resources and a growing number of its groundwater reserves have been contaminated by biological, organic and inorganic pollutants. Pollution of surface and groundwater resources occurs through point and diffuse sources. Examples of point source pollution are effluents from industries and from sewage-treatment plants. Typical examples of diffuse pollution sources are agricultural runoffs due to inorganic fertilizers and pesticides and natural contamination of groundwater by fluoride, arsenic and dissolved salts due to geo-chemical activities. In pursuit of measures to achieve sustainability in water management, the Centre for Sustainable Technologies (CST) at the Indian Institute of Science (IISc) has begun to address treatment of fluoride-contaminated groundwater for potable requirements. The fluorosis problem is severe in India as almost 80% of the rural population depends on untreated groundwater for potable water supplies. A new method to treat fluoride-contaminated water using magnesium oxide has been developed at IISc. The IISc method relies on precipitation, sedimentation, and filtration techniques and is efficient for a range of groundwater chemistry conditions. |
Time-Delay Compensation by Communication Disturbance Observer for Bilateral Teleoperation Under Time-Varying Delay | This paper presents the effectiveness of a time-delay compensation method based on the concept of network disturbance and communication disturbance observer for bilateral teleoperation systems under time-varying delay. The most efficient feature of the compensation method is that it works without time-delay models (model-based time-delay compensation approaches like Smith predictor usually need time-delay models). Therefore, the method is expected to be widely applied to network-based control systems, in which time delay is usually unknown and time varying. In this paper, the validity of the time-delay compensation method in the cases of both constant delay and time-varying delay is verified by experimental results compared with Smith predictor. |
Incremental Learning of Concept Drift in Nonstationary Environments | We introduce an ensemble of classifiers-based approach for incremental learning of concept drift, characterized by nonstationary environments (NSEs), where the underlying data distributions change over time. The proposed algorithm, named Learn++.NSE, learns from consecutive batches of data without making any assumptions on the nature or rate of drift; it can learn from such environments that experience constant or variable rate of drift, addition or deletion of concept classes, as well as cyclical drift. The algorithm learns incrementally, as other members of the Learn++ family of algorithms, that is, without requiring access to previously seen data. Learn++.NSE trains one new classifier for each batch of data it receives, and combines these classifiers using a dynamically weighted majority voting. The novelty of the approach is in determining the voting weights, based on each classifier's time-adjusted accuracy on current and past environments. This approach allows the algorithm to recognize, and act accordingly, to the changes in underlying data distributions, as well as to a possible reoccurrence of an earlier distribution. We evaluate the algorithm on several synthetic datasets designed to simulate a variety of nonstationary environments, as well as a real-world weather prediction dataset. Comparisons with several other approaches are also included. Results indicate that Learn++.NSE can track the changing environments very closely, regardless of the type of concept drift. To allow future use, comparison and benchmarking by interested researchers, we also release our data used in this paper. |
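The combination rule Learn++.NSE builds on is weighted majority voting. A minimal sketch of that rule, with a simple error-based log-weighting as a stand-in for the paper's time-adjusted weighting scheme (the labels and error rates below are illustrative):

```python
import math

def weighted_vote(predictions, errors):
    """predictions: list of class labels, one per ensemble member
    errors: matching list of (smoothed) error rates in (0, 1)."""
    # low error -> high voting weight; weights go negative past 50% error
    weights = [math.log((1 - e) / e) for e in errors]
    tally = {}
    for label, w in zip(predictions, weights):
        tally[label] = tally.get(label, 0.0) + w
    return max(tally, key=tally.get)

# Two weak, drifted classifiers vote "A"; one recent, accurate one votes "B".
print(weighted_vote(["A", "A", "B"], [0.45, 0.48, 0.05]))  # "B"
```

In Learn++.NSE the error rates themselves are recomputed on each new batch and discounted over time, which is what lets a dormant classifier regain weight when an earlier distribution reoccurs.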
LSTMs Exploit Linguistic Attributes of Data | While recurrent neural networks have found success in a variety of natural language processing applications, they are general models of sequential data. We investigate how the properties of natural language data affect an LSTM’s ability to learn a nonlinguistic task: recalling elements from its input. We find that models trained on natural language data are able to recall tokens from much longer sequences than models trained on non-language sequential data. Furthermore, we show that the LSTM learns to solve the memorization task by explicitly using a subset of its neurons to count timesteps in the input. We hypothesize that the patterns and structure in natural language data enable LSTMs to learn by providing approximate ways of reducing loss, but understanding the effect of different training data on the learnability of LSTMs remains an open question. |
Striving for the moral self: the effects of recalling past moral actions on future moral behavior. | People's desires to see themselves as moral actors can contribute to their striving for and achievement of a sense of self-completeness. The authors use self-completion theory to predict (and show) that recalling one's own (im)moral behavior leads to compensatory rather than consistent moral action as a way of completing the moral self. In three studies, people who recalled their immoral behavior reported greater participation in moral activities (Study 1), reported stronger prosocial intentions (Study 2), and showed less cheating (Study 3) than people who recalled their moral behavior. These compensatory effects were related to the moral magnitude of the recalled event, but they did not emerge when people recalled their own positive or negative nonmoral behavior (Study 2) or others' (im)moral behavior (Study 3). Thus, the authors extend self-completion theory to the moral domain and use it to integrate the research on moral cleansing (remunerative moral strivings) and moral licensing (relaxed moral strivings). |
Robustness Analysis of Artificial Neural Networks and Support Vector Machine in Making Prediction | This study aims to investigate the robustness of prediction models by comparing artificial neural network (ANN) and support vector machine (SVM) models. The study employs ten years of monthly data on six macroeconomic variables as independent variables and the average rate of return on one-month time deposits of Indonesian Islamic banks (RR) as the dependent variable. Performance is evaluated through graph analysis, statistical parameters, and accuracy-rate measurement. The research found that ANNs empirically outperform SVMs in both the training process and overall data prediction, indicating that the ANN model is better at capturing the full data pattern and explaining the volatility of RR. |
Talking about the Environmental Impact Assessment of Underground Water in Mining Area | This paper introduces the general situation of environmental impact assessment in mining areas, expounds the methods, targets, and index system for assessing the environmental impact on underground water in mining areas, and describes the main contents of such assessments from three aspects. |
Predictors of associated autoimmune diseases in families with type 1 diabetes: results from the Type 1 Diabetes Genetics Consortium. | BACKGROUND
Type 1 diabetes (T1D) is a clinically heterogeneous disease. The presence of associated autoimmune diseases (AAIDs) may represent a distinct form of autoimmune diabetes, with involvement of specific mechanisms. The aim of this study was to find predictors of AAIDs in the Type 1 Diabetes Genetics Consortium data set.
METHODS
Three thousand two hundred and sixty-three families with at least two siblings with T1D were included. Clinical information was obtained using questionnaires, anti-GAD (glutamic acid decarboxylase) and anti-protein tyrosine phosphatase (IA-2) were measured and human leukocyte antigen (HLA) genotyping was performed. Siblings with T1D with and without AAIDs were compared and a multivariate regression analysis was performed to find predictors of AAIDs. T1D-associated HLA haplotypes were defined as the four most susceptible and protective, respectively.
RESULTS
One or more AAIDs were present in 14.4% of the T1D affected siblings. Age of diabetes onset, current age, and time since diagnosis were higher in the group with AAIDs; this group also showed a female predominance, more family history of AAIDs, more frequent anti-GAD antibodies, and less frequent anti-IA-2 antibodies. Risk and protective HLA haplotype distributions were similar, though DRB1*0301-DQA1*0501-DQB1*0201 was more frequent in the group with AAIDs. In the multivariate analysis, female gender, age of onset, family history of AAID, time since diagnosis and anti-GAD positivity were significantly associated with AAIDs.
CONCLUSIONS
In patients with T1D, the presence of AAIDs is associated with female predominance, more frequent family history of AAIDs, later onset of T1D and more anti-GAD antibodies, despite longer duration of the disease. The predominance of certain HLA haplotypes suggests that specific mechanisms of disease may be involved. |
Five-year results of a randomized clinical trial comparing a polypropylene mesh with a poliglecaprone and polypropylene composite mesh for inguinal hernioplasty | The aim of this study was to assess whether partially absorbable monofilament mesh could influence postoperative pain and recurrence after Lichtenstein hernioplasty over the long term. Patients were randomized into two groups that were treated with lightweight (LW) or heavyweight (HW) mesh in 15 centers in Poland. A modified suture technique was used in the lightweight mesh group. Clinical examination was performed. A pain questionnaire was completed five years after the surgery. Of the 392 patients who underwent surgery, 161 (90.81 %) of 177 in the HW group and 195 (90.69 %) of 215 in the LW group were examined according to protocol, a median of 62 (range 57–66) months after hernia repair. There was no difference in the recurrence rate (1.9 % LW vs. 0.6 % HW; P = 0.493). There were 24 deaths in the follow-up period, but these had no connection to the surgery. The patients treated with LW mesh reported less pain in the early postoperative period. After five years of follow-up, the intensity and the presence of pain did not differ between groups (5 patients in the LW and 4 patients in the HW group). Average pain (VAS score) was also similar in the LW and HW groups (2.25 vs. 2.4) at the fifth year postoperatively. The use of partially absorbable mesh reduced postoperative pain during the short-term postoperative period. No difference in pain or recurrence rate was observed at 60 months. |
The terminalization of supply chains : reassessing the role of terminals in port / hinterland logistical relationships | The paper discusses how logistics service providers are using terminals in their supply chains. It argues that an increasing 'terminalization' of supply chains is unfolding, whereby seaport and inland terminals are taking up a more active role in supply chains by increasingly confronting market players with operational considerations such as imposing berthing windows, dwell time charges, and truck slots, all this to increase throughput, optimize terminal capacity and make the best use of available land. With the development of inland terminals, a new dimension is being added: logistics players are now making best use of the free time available in seaport terminals and inland terminals, thereby optimizing the terminal buffer function. As a result, transport terminals are achieving an additional level of integration within supply chains that goes beyond their conventional transshipment role. Given increasing levels of vertical integration in the market and an increasing pressure on port capacity, a further terminalization of supply chains is likely to occur, which will strengthen the active role of terminals in logistics. |
Evaluating Personalization and Persuasion in E-Commerce | The use of personalization and persuasion has been shown to optimize customers' shopping experience in e-commerce. This study aims to identify the personalization methods and persuasive principles that make an e-commerce company successful. Using Amazon as a case study, we evaluated the personalization methods implemented using an existing process framework. We also applied the PSD model to Amazon to evaluate the persuasive principles it uses. Our results show that all the principles of the PSD model were implemented in Amazon. This study can serve as a guide to e-commerce businesses and software developers for building or improving existing e-commerce platforms. |
Fast and Accurate Entity Recognition with Iterated Dilated Convolutions | Today when many practitioners run basic NLP on the entire web and large-volume traffic, faster methods are paramount to saving time and energy costs. Recent advances in GPU hardware have led to the emergence of bi-directional LSTMs as a standard method for obtaining per-token vector representations serving as input to labeling tasks such as NER (often followed by prediction in a linear-chain CRF). Though expressive and accurate, these models fail to fully exploit GPU parallelism, limiting their computational efficiency. This paper proposes a faster alternative to Bi-LSTMs for NER: Iterated Dilated Convolutional Neural Networks (ID-CNNs), which have better capacity than traditional CNNs for large context and structured prediction. Unlike LSTMs whose sequential processing on sentences of length N requires O(N) time even in the face of parallelism, ID-CNNs permit fixed-depth convolutions to run in parallel across entire documents. We describe a distinct combination of network structure, parameter sharing and training procedures that enable dramatic 14-20x test-time speedups while retaining accuracy comparable to the Bi-LSTM-CRF. Moreover, ID-CNNs trained to aggregate context from the entire document are even more accurate while maintaining 8x faster test time speeds. |
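The reason stacked dilated convolutions cover large context in fixed depth can be shown with a back-of-the-envelope calculation: with kernel width 3 and dilation doubling per layer (the schedule assumed here is the common one, not necessarily the paper's exact configuration), the receptive field grows exponentially with depth.

```python
def receptive_field(n_layers, kernel=3):
    """Receptive field of stacked 1-D convolutions with dilation
    doubling at each layer (1, 2, 4, ...)."""
    rf = 1
    dilation = 1
    for _ in range(n_layers):
        # each layer extends the field by (kernel-1) * dilation tokens
        rf += (kernel - 1) * dilation
        dilation *= 2
    return rf

for layers in (1, 2, 3, 4):
    print(layers, receptive_field(layers))  # 3, 7, 15, 31
```

Four layers already see 31 tokens, and every position is computed independently of its neighbors, which is what makes the whole document parallelizable on a GPU.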
LabCaS: labeling calpain substrate cleavage sites from amino acid sequence using conditional random fields. | The calpain family of Ca(2+)-dependent cysteine proteases plays a vital role in many important biological processes and is closely related to a variety of pathological states. Activated calpains selectively cleave relevant substrates at specific cleavage sites, yielding multiple fragments that can have different functions from the intact substrate protein. Until now, our knowledge of calpain functions and their substrate cleavage mechanisms has been limited because the experimental determination and validation of calpain binding are usually laborious and expensive. In this work, we aim to develop a new computational approach (LabCaS) for accurate prediction of calpain substrate cleavage sites from amino acid sequences. To overcome the imbalance of negative and positive samples in machine-learning training, which most former approaches have suffered from when splitting sequences into short peptides, we designed a conditional random field algorithm that can label the potential cleavage sites directly from the entire sequences. By integrating multiple amino acid features and those derived from sequences, LabCaS achieves accurate recognition of the cleavage sites for most calpain proteins. In a jackknife test on a set of 129 benchmark proteins, LabCaS generates an AUC score of 0.862. The LabCaS program is freely available at: http://www.csbio.sjtu.edu.cn/bioinf/LabCaS. Proteins 2013. © 2012 Wiley Periodicals, Inc. |
A Novel Lifecycle Framework for Semantic Web Service Annotation Assessment and Optimization | Semantic annotation plays an important role for semantic-aware web service discovery, recommendation and composition. In recent years, many approaches and tools have emerged to assist in semantic annotation creation and analysis. However, the Quality of Semantic Annotation (QoSA) is largely overlooked despite its significant impact on the effectiveness of semantic-aware solutions. Moreover, improving the QoSA is time-consuming and requires significant domain knowledge. Therefore, how to verify and improve the QoSA has become a critical issue for semantic web services. In order to facilitate this process, this paper presents a novel lifecycle framework aiming at QoSA assessment and optimization. The QoSA is formally defined as the success rate of web service invocations, associated with a verification framework. Based on a local instance repository constructed from the execution information of the invocations, a two-layer optimization method comprising a local-feedback strategy and a global-feedback strategy is proposed to improve the QoSA. Experiments on real-world web services show that our framework can gain a 65.95% to 148.16% improvement in QoSA, compared with the original annotation without optimization. |
The German Version of the Hopkins Symptoms Checklist-25 (HSCL-25) --factorial structure, psychometric properties, and population-based norms. | PURPOSE
The Hopkins Symptom Checklist-25 (HSCL-25) has often been used in cross-cultural settings and in studies focussing on asylum seekers, refugees etc. It is available in a number of languages. The present study investigates the psychometric properties of the German version of the HSCL-25 and delivers population-based norms.
METHODS
Psychometric properties are investigated in a population-based representative sample of the German general population (N=2516). Seven different factorial models are compared using confirmatory factor analysis.
RESULTS
Two out of the seven models show the best model fit. Because of the high inter-correlations of the factors of the tripartite model, the bifactor model is the preferable factor solution. The internal consistencies (Cronbach's alpha) were 0.84, 0.92, and 0.94 for the anxiety, the depression and the total score, respectively. The correlations of both subscales of this model with the subscales of the Brief-Symptom-Inventory-18 or the Patient Health Questionnaire-4 indicate that the subscales provide only marginal differential information.
CONCLUSION
Considering the third ("general") factor of the bifactor model, with all items loading on it, and the absence of differential correlations of the subscales with the external criteria (PHQ-4, BSI-18), the HSCL-25 seems to assess something like "mental distress" with a focus on symptoms of depression and anxiety. The population-based norms support the application of the HSCL-25 for individual diagnostics as well as for the comparison of specific samples with the general population. |
Chromosome painting: a useful art. | Chromosome 'painting' refers to the hybridization of fluorescently labeled chromosome-specific, composite probe pools to cytological preparations. Chromosome painting allows the visualization of individual chromosomes in metaphase or interphase cells and the identification of both numerical and structural chromosomal aberrations in human pathology with high sensitivity and specificity. In addition to human chromosome-specific probe pools, painting probes have become available for an increasing range of different species. They can be applied to cross-species comparisons as well as to the study of chromosomal rearrangements in animal models of human diseases. The simultaneous hybridization of multiple chromosome painting probes, each tagged with a specific fluorochrome or fluorochrome combination, has resulted in the differential color display of human (and mouse) chromosomes, i.e. color karyotyping. In this review, we will summarize recent developments of multicolor chromosome painting, describe applications in basic chromosome research and cytogenetic diagnostics, and discuss limitations and future directions. |
Vastus lateralis surface and single motor unit electromyography during shortening, lengthening and isometric contractions corrected for mode-dependent differences in force-generating capacity. | AIM
Knee extensor neuromuscular activity, assessed by rectified surface electromyography (rsEMG) and single motor unit EMG, was investigated during isometric (60 degrees knee angle), shortening, and lengthening contractions (50-70 degrees, 10 degrees s(-1)), corrected for force-velocity-related differences in force-generating capacity. However, during dynamic contractions additional factors, such as shortening-induced force losses and lengthening-induced force gains, may also affect force capacity and thereby neuromuscular activity. Therefore, even after correction for force-velocity-related differences in force capacity, we expected neuromuscular activity to be higher during shortening and lower during lengthening compared with isometric contractions.
METHODS
rsEMG of the three superficial muscle heads was obtained in a first session [10 and 50% of maximal voluntary contraction (MVC)], and EMG of 46 vastus lateralis motor units was additionally recorded during a second session (4-76% MVC). Using superimposed electrical stimulation, force-generating capacity for shortening and lengthening contractions was found to be 0.96 and 1.16 times isometric (Iso) force capacity, respectively. Therefore, neuromuscular activity during submaximal shortening and lengthening was compared with isometric contractions of 1.04Iso (=1/0.96) and 0.86Iso (=1/1.16), respectively. rsEMG and discharge rates were normalized to isometric values.
RESULTS
rsEMG behaviour was similar (P > 0.05) during both sessions. Shortening rsEMG (1.30 +/- 0.11) and discharge rate (1.22 +/- 0.13) were higher (P < 0.05) than 1.04Iso values (1.05 +/- 0.05 and 1.03 +/- 0.04 respectively), but lengthening rsEMG (1.05 +/- 0.12) and discharge rate (0.90 +/- 0.08) were not lower (P > 0.05) than 0.86Iso values (0.76 +/- 0.04 and 0.91 +/- 0.07 respectively).
CONCLUSION
When force-velocity-related differences in force capacity were taken into account, neuromuscular activity was not lower during lengthening but was still higher during shortening compared with isometric contractions. |
Multicriteria User Modeling in Recommender Systems | A hybrid recommender-system framework is described that creates user-profile groups before applying a collaborative-filtering algorithm, incorporating techniques from the multiple-criteria decision-analysis (MCDA) field. |
Optimising Deep Belief Networks by hyper-heuristic approach | Deep Belief Networks (DBNs) have been successful in classification, especially in image recognition tasks. However, the performance of a DBN often depends strongly on its settings, in particular the combination of runtime parameter values. In this work, we propose a hyper-heuristic based framework that can optimise DBNs independently of the problem domain; to our knowledge, this is the first time hyper-heuristics have been applied in this domain. The framework iteratively selects suitable heuristics from a heuristic set and applies them to tune the DBN to better fit the current search space, so that under this framework the DBN learning settings are adaptive. Three well-known image reconstruction benchmark sets were used to evaluate the performance of this new approach. Our experimental results show that this hyper-heuristic approach can achieve high accuracy under different scenarios on diverse image sets. In addition, state-of-the-art meta-heuristic methods for tuning DBNs were included for comparison. The results illustrate that our hyper-heuristic approach obtains better performance on almost all test cases. |
Integer dilation and contraction for quadtrees and octrees | Integer dilation and contraction are functions used in conjunction with quadtree and octree mapping systems. Dilation is the process of inserting a number of zeros before each bit in a word, and contraction is the process of removing those zeros. Present methods of dilation and contraction involve lookup tables, which consume considerable amounts of memory for mappings of large or high-resolution display devices but are very fast within practical limits. A method is proposed that rivals the speed of the tabular methods while eliminating the tables, thereby eliminating the associated memory consumption. The proposed method is applicable to both dilation and contraction for both quadtrees and octrees. |
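Table-free dilation and contraction of the kind the abstract describes are commonly realized with shift-and-mask bit tricks. The following is a minimal illustrative sketch in Python, not the paper's exact method; the mask constants assume 16-bit coordinates and the quadtree (one-zero-per-bit) case:

```python
def dilate2(x: int) -> int:
    """Insert one zero bit before each bit of a 16-bit word (quadtree case)."""
    x &= 0xFFFF
    x = (x | (x << 8)) & 0x00FF00FF
    x = (x | (x << 4)) & 0x0F0F0F0F
    x = (x | (x << 2)) & 0x33333333
    x = (x | (x << 1)) & 0x55555555
    return x

def contract2(x: int) -> int:
    """Inverse of dilate2: drop the interleaved zero bits."""
    x &= 0x55555555
    x = (x | (x >> 1)) & 0x33333333
    x = (x | (x >> 2)) & 0x0F0F0F0F
    x = (x | (x >> 4)) & 0x00FF00FF
    x = (x | (x >> 8)) & 0x0000FFFF
    return x

def morton2(x: int, y: int) -> int:
    """Quadtree (Morton) key: interleave the bits of two dilated coordinates."""
    return dilate2(x) | (dilate2(y) << 1)
```

Dilating both coordinates and OR-ing one shifted copy yields a quadtree key; the octree case works the same way with two zeros per bit and three coordinates.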
Cognitive behavioral psychotherapeutic treatment at a psychiatric trauma clinic for refugees: description and evaluation. | INTRODUCTION
Cognitive behavioural therapy (CBT) with trauma focus is the most evidence supported psychotherapeutic treatment of PTSD, but few CBT treatments for traumatized refugees have been described in detail.
PURPOSE
To describe and evaluate a manualized cognitive behavioral therapy for traumatized refugees incorporating exposure therapy, mindfulness and acceptance and commitment therapy.
MATERIAL AND METHODS
85 patients received six months' treatment at a Copenhagen Trauma Clinic for Refugees and completed self-ratings before and after treatment. The treatment administered to each patient was monitored in detail. The changes in mental state and the treatment components associated with change in state were analyzed statistically.
RESULTS
Despite the patients' low level of functioning and high co-morbidity, 42% received highly structured CBT, which was positively associated with all treatment outcomes. The more methods used, and the more time each method was used, the better the outcome. The majority of patients were able to complete homework assignments, and this was associated with better treatment outcomes. Correlation analysis showed no association between severity of symptoms at baseline and the observed change.
CONCLUSION
The study suggests that CBT treatment incorporating mindfulness and acceptance and commitment therapy is promising for traumatized refugees and punctures the myth that this group of patients is unable to participate fully in structured CBT. However, treatment methods must be adapted to the special needs of refugees, and trauma exposure should be investigated further. |
A new pulse duplicator with a passive fill ventricle for analysis of cardiac dynamics | A new pulse duplicator was designed for evaluation of the performance of ventricular assist devices through pressure–volume (P–V) diagrams of the native heart. A linear drive system in combination with a pusher-plate mechanism was designed as a drive system to implement the passive fill mechanism during diastole of the mock ventricle. The compliances of the native heart during both diastole and systole were simulated by placing a ventricle sack made of soft latex rubber in a sealed chamber and by varying the air-to-fluid volume ratio inside the chamber. The ratio of the capacities of the systemic venous and pulmonary circuits was adjusted to properly reflect the effects of volume shift between them. As the air-to-fluid volume ratio was varied from 1:12.3 to 1:1.58, the contractility of the ventricle expressed by E max varied from 1.75 to 0.56 mmHg/ml with the mean V 0 of 4.58 ml closely mimicking those of native hearts (p < 0.05). Because the E max value of the normal human heart ranges from 1.3 to 1.6, with a value below 1.0 indicating heart failure, the mock ventricle is applicable in simulating the dynamics of the normal heart and the sick heart. The P–V diagram changes seen with rotary blood pump assistance revealed changes similar to those reported by other workers. The effects of the ventricular assist device, either pulsatile or continuous flow, on cardiac dynamics can be easily simulated with this system to derive design criteria for clinical circulatory assist devices. |
Principal Component Analysis | Principal component analysis (PCA) is a multivariate technique that analyzes a data table in which observations are described by several inter-correlated quantitative dependent variables. Its goal is to extract the important information from the table, to represent it as a set of new orthogonal variables called principal components, and to display the pattern of similarity of the observations and of the variables as points in maps. The quality of the PCA model can be evaluated using cross-validation techniques such as the bootstrap and the jackknife. PCA can be generalized as correspondence analysis (CA) in order to handle qualitative variables and as multiple factor analysis (MFA) in order to handle heterogeneous sets of variables. Mathematically, PCA depends upon the eigen-decomposition of positive semidefinite matrices and upon the singular value decomposition (SVD) of rectangular matrices. 2010 John Wiley & Sons, Inc. WIREs Comp Stat 2010 2 433–459 |
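As the abstract notes, PCA rests on the eigen-decomposition of a positive semidefinite matrix, or equivalently on the SVD of the column-centered data table. A minimal illustrative sketch in Python/NumPy (function and variable names are our own, not from the paper):

```python
import numpy as np

def pca(X, n_components):
    """Return component scores and explained-variance fractions via SVD."""
    # Center each variable (column) of the data table
    Xc = X - X.mean(axis=0)
    # SVD of the centered table; rows of Vt hold the component loadings
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T      # coordinates of observations on components
    explained = (s ** 2) / np.sum(s ** 2)  # variance fraction per component
    return scores, explained[:n_components]
```

With two perfectly correlated variables, for example, the first component captures essentially all of the variance, illustrating how PCA concentrates the important information of the table in a few orthogonal variables.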
Reinforcement learning utilizes proxemics: An avatar learns to manipulate the position of people in immersive virtual reality | A reinforcement learning (RL) method was used to train a virtual character to move participants to a specified location. The virtual environment depicted an alleyway displayed through a wide field-of-view head-tracked stereo head-mounted display. Based on proxemics theory, we predicted that when the character approached within a personal or intimate distance to the participants, they would be inclined to move backwards out of the way. We carried out a between-groups experiment with 30 female participants, with 10 assigned arbitrarily to each of the following three groups: In the Intimate condition the character could approach within 0.38m and in the Social condition no nearer than 1.2m. In the Random condition the actions of the virtual character were chosen randomly from among the same set as in the RL method, and the virtual character could approach within 0.38m. The experiment continued in each case until the participant either reached the target or 7 minutes had elapsed. The distributions of the times taken to reach the target showed significant differences between the three groups, with 9 out of 10 in the Intimate condition reaching the target significantly faster than the 6 out of 10 who reached the target in the Social condition. Only 1 out of 10 in the Random condition reached the target. The experiment is an example of applied presence theory: we rely on the many findings that people tend to respond realistically in immersive virtual environments, and use this to get people to achieve a task of which they had been unaware. This method opens up the door for many such applications where the virtual environment adapts to the responses of the human participants with the aim of achieving particular goals. |
DFPS: Distributed FP-growth algorithm based on Spark | Frequent Itemset Mining (FIM) is the most important and time-consuming step of association rule mining. As data scale grows, many efficient single-machine FIM algorithms, such as FP-growth and Apriori, cannot complete the computation within reasonable time. Given this limitation of single-machine methods, researchers have presented distributed algorithms based on MapReduce and Spark, such as PFP and YAFIM. Nevertheless, the heavy disk I/O cost at each MapReduce operation makes PFP insufficiently efficient, while YAFIM must generate candidate frequent itemsets at each iterative step, which makes it time-consuming; moreover, when the data are large enough, YAFIM fails due to memory limitations, since the candidate frequent itemsets must be stored in memory and their number becomes very large on massive data. In this work, we propose DFPS, a distributed FP-growth algorithm based on Spark. DFPS partitions computing tasks such that each computing node builds its conditional FP-trees and applies a pattern-fragment-growth method to mine frequent itemsets independently, so no messages need to be passed between nodes during mining. Our performance study shows that DFPS outperforms YAFIM, especially when transactions are long, the number of items is large, and the data are massive, and that DFPS has excellent scalability. The experimental results show that DFPS is more than 10 times faster than YAFIM on the T10I4D100K and Pumsb_star datasets. |
Effective Capacitive Power Transfer | Capacitive power transfer (CPT) systems have to date been used only for very low power delivery because of a number of limitations. A fundamental treatment of the problem is carried out, and a CPT system is presented that achieves many times higher power throughput into low-impedance loads than traditional systems with the same interface capacitance and frequency of operation, and with reasonable ratings for the switching devices. The development and analysis of the system are based on the parameters of the capacitive interface, and a design procedure is provided. The validity of the concept has been verified by an experimental CPT system that delivered more than 25 W through a combined interface capacitance of 100 pF, at an operating frequency of only 1 MHz, with efficiency exceeding 80%. |
Reinforcement Learning of Local Shape in the Game of Go | We explore an application to the game of Go of a reinforcement learning approach based on a linear evaluation function and large numbers of binary features. This strategy has proved effective in game-playing programs and other reinforcement learning applications. We apply it to Go by creating over a million features based on templates for small fragments of the board, and then use temporal-difference learning and self-play. This method identifies hundreds of low-level shapes with recognisable significance to expert Go players, and provides quantitative estimates of their values. We analyse the relative contributions to performance of templates of different types and sizes. Our results show that small, translation-invariant templates are surprisingly effective. We assess the performance of our program by playing against the Average Liberty Player and a variety of computer opponents on the 9×9 Computer Go Server. Our linear evaluation function appears to outperform all other static evaluation functions that do not incorporate substantial domain knowledge. |
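The learning setup the abstract describes (a linear evaluation over large numbers of binary shape features, trained by temporal-difference learning) can be sketched as follows. This is our own illustrative simplification, not the authors' implementation: feature names, the sigmoid squashing, the undiscounted TD(0) rule, and the step size are all assumptions for the sketch.

```python
import math

def evaluate(weights, active_features):
    """Linear evaluation over active binary features, squashed to (0, 1)."""
    total = sum(weights.get(f, 0.0) for f in active_features)
    return 1.0 / (1.0 + math.exp(-total))

def td_update(weights, features_t, features_t1, reward, alpha=0.1):
    """One undiscounted TD(0) step: move v(s_t) toward reward + v(s_t+1)."""
    v_t = evaluate(weights, features_t)
    v_t1 = evaluate(weights, features_t1)
    delta = reward + v_t1 - v_t
    # Only the features active in s_t receive an update (sigmoid gradient included)
    for f in features_t:
        weights[f] = weights.get(f, 0.0) + alpha * delta * v_t * (1.0 - v_t)
    return delta
```

Repeated updates toward winning outcomes raise the weights of shape features that occur in won positions, which is how such a scheme assigns quantitative values to local shapes.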
Monalytics: online monitoring and analytics for managing large scale data centers | To effectively manage large-scale data centers and utility clouds, operators must understand current system and application behaviors. This requires continuous monitoring along with online analysis of the data captured by the monitoring system. As a result, there is a need to move to systems in which both tasks can be performed in an integrated fashion, thereby better able to drive online system management. Coining the term 'monalytics' to refer to the combined monitoring and analysis systems used for managing large-scale data center systems, this paper articulates principles for monalytics systems, describes software approaches for implementing them, and provides experimental evaluations justifying principles and implementation approach. Specific technical contributions include consideration of scalability across both 'space' and 'time', the ability to dynamically deploy and adjust monalytics functionality at multiple levels of abstraction in target systems, and the capability to operate across the range of application to hypervisor layers present in large-scale data center or cloud computing systems. Our monalytics implementation targets virtualized systems and cloud infrastructures, via the integration of its functionality into the Xen hypervisor. |
A multisite randomized trial of the effects of physician education and organizational change in chronic-asthma care: health outcomes of the Pediatric Asthma Care Patient Outcomes Research Team II Study. | BACKGROUND
Traditional primary care practice change approaches have not led to full implementation of national asthma guidelines.
OBJECTIVE
To evaluate the effectiveness of 2 asthma care improvement strategies in primary care.
DESIGN
Two-year randomized controlled clinical trial.
SETTING
Forty-two primary care pediatric practices affiliated with 4 managed care organizations.
PARTICIPANTS
Children aged 3 to 17 years with mild to moderate persistent asthma enrolled in primary care practices affiliated with managed care organizations.
INTERVENTIONS
Peer leader education consisted of training 1 physician per practice in asthma guidelines and peer teaching methods. Planned care combined the peer leader program with nurse-mediated organizational change through planned visits with assessments, care planning, and self-management support, in collaboration with physicians. Analyses compared each intervention with usual care.
MAIN OUTCOME MEASURES
Annualized asthma symptom days, asthma-specific functional health status (Children's Health Survey for Asthma), and frequency of brief oral steroid courses (bursts).
RESULTS
Six hundred thirty-eight children completed baseline evaluations, representing 64% of those screened and eligible. Mean +/- SD age was 9.4 +/- 3.5 years; 60% were boys. Three hundred fifty (55%) were taking controller medication. Mean +/- SD annualized asthma symptom days was 107.4 +/- 122 days. Children in the peer leader arm had 6.5 fewer symptom days per year (95% CI, -16.9 to 3.6), a nonsignificant difference, but had a 36% (95% CI, 11% to 54%) lower oral steroid burst rate per year compared with children receiving usual care. Children in the planned care arm had 13.3 (95% CI, -24.7 to -2.1) fewer symptom days annually (-12% from baseline; P =.02) and a 39% (95% CI, 11% to 58%) lower oral steroid burst rate per year relative to usual care. Both interventions showed small, statistically significant effects for 2 of 5 Children's Health Survey for Asthma scales. Planned care subjects had greater controller adherence (parent report) compared with usual care subjects (rate ratio, 1.05 [95% CI, 1.00 to 1.09]).
CONCLUSIONS
Planned care (nurse-mediated organizational change plus peer leader education) is an effective model for improving asthma care in the primary care setting. Peer leader education on its own may also serve as a useful model for improving asthma care, although it is less comprehensive and the treatment effect less pronounced. |
Programming by Examples: PL Meets ML | Programming by Examples (PBE) involves synthesizing intended programs in an underlying domain-specific language from example-based specifications. PBE systems are already revolutionizing the application domain of data wrangling and are set to significantly impact several other domains, including code refactoring. There are three key components in a PBE system. (i) A search algorithm that can efficiently find programs consistent with the examples provided by the user. We leverage a divide-and-conquer-based deductive search paradigm that inductively reduces the problem of synthesizing a program expression of a certain kind that satisfies a given specification into sub-problems that refer to sub-expressions or sub-specifications. (ii) Program ranking techniques to pick an intended program from among the many that satisfy the examples provided by the user. We leverage features of the program structure as well as of the outputs generated by the program on test inputs. (iii) User interaction models to facilitate usability and debuggability. We leverage active-learning techniques based on clustering inputs and synthesizing multiple programs. Each of these PBE components leverages both symbolic reasoning and heuristics. We make the case for synthesizing these heuristics from training data using appropriate machine learning methods. This can not only lead to better heuristics, but can also enable easier development, maintenance, and even personalization of a PBE system. |
Long-term clinical effect of Tangyiping Granules on patients with impaired glucose tolerance. | OBJECTIVE
To evaluate the long-term clinical effect of Tangyiping Granules (TYP) on patients with impaired glucose tolerance (IGT) in achieving normal glucose tolerance (NGT) and hence preventing conversion to diabetes mellitus (DM).
METHODS
In total, 127 participants with IGT were randomly assigned to the control group (63 cases, 3 lost to follow-up) and the treatment group (64 cases, 4 lost to follow-up) according to a random number table. The control group received lifestyle intervention alone, while patients in the treatment group took 10 g of TYP orally twice daily in addition to lifestyle intervention for 12 weeks. The rates of patients achieving NGT or converting to DM, as the main outcome measures, were observed at 3, 12, and 24 months after TYP treatment. The secondary outcome measures included fasting plasma glucose (FPG), 2-h postprandial plasma glucose (2hPG), glycosylated hemoglobin (HbA1c), fasting insulin (FINS), 2-h insulin (2hINS), homeostatic model assessment of insulin resistance (HOMA-IR), blood lipids, and patients' complaints of Chinese medicine (CM) symptoms before and after treatment.
RESULTS
A higher proportion of the treatment group achieved NGT compared with the control group at the 3-, 12- and 24-month follow-ups (75.00% vs. 43.33%, 58.33% vs. 35.00%, and 46.67% vs. 26.67%, respectively, P<0.05). The IGT-to-DM conversion rate of the treatment group was significantly lower than that of the control group at the end of the 24-month follow-up (16.67% vs. 31.67%, P<0.05). Before treatment, FPG, 2hPG, HbA1c, FINS, 2hINS, HOMA-IR, triglyceride (TG), total cholesterol, and low- and high-density lipoprotein cholesterol levels did not differ significantly between the two groups (P>0.05). After treatment, the 2hPG, HbA1c, HOMA-IR, and TG levels of the treatment group decreased significantly compared with those of the control group (P<0.05). CM symptoms such as exhaustion, irritability, chest tightness and breathlessness, spontaneous sweating, constipation, and a dark, thick and greasy tongue were significantly improved in the treatment group compared with the control group (P<0.05). No severe adverse events occurred.
CONCLUSION
TYP administered at the IGT stage, combined with a disciplined lifestyle, delayed the progression of IGT to type 2 DM. |
Identifying hallmarks of consciousness in non-mammalian species | Most early studies of consciousness have focused on human subjects. This is understandable, given that humans are capable of reporting accurately the events they experience through language or by way of other kinds of voluntary response. As researchers turn their attention to other animals, "accurate report" methodologies become increasingly difficult to apply. Alternative strategies for amassing evidence for consciousness in non-human species include searching for evolutionary homologies in anatomical substrates and measurement of physiological correlates of conscious states. In addition, creative means must be developed for eliciting behaviors consistent with consciousness. In this paper, we explore whether necessary conditions for consciousness can be established for species as disparate as birds and cephalopods. We conclude that a strong case can be made for avian species and that the case for cephalopods remains open. Nonetheless, a consistent effort should yield new means for interpreting animal behavior. |
Time-series data mining | In almost every scientific field, measurements are performed over time. These observations lead to a collection of organized data called a time series. The purpose of time-series data mining is to extract all meaningful knowledge from the shape of the data. Even though humans have a natural capacity to perform these tasks, it remains a complex problem for computers. In this article we provide a survey of the techniques applied for time-series data mining. The first part is devoted to an overview of the tasks that have captured most of the interest of researchers. Considering that in most cases a time-series task relies on the same components for implementation, we divide the literature according to these common aspects, namely representation techniques, distance measures, and indexing methods. The relevant literature has been categorized by each of these individual aspects. Four types of robustness could then be formalized, and any kind of distance could then be classified accordingly. Finally, the study presents various research trends and avenues that can be explored in the near future. We hope that this article can provide a broad and deep understanding of the time-series data mining research field. |
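Among the distance measures such surveys cover, dynamic time warping (DTW) is the standard example of a measure robust to local distortions of the time axis. A compact, illustrative Python sketch of the classic quadratic-time DTW recurrence (not from the survey itself):

```python
def dtw(a, b):
    """Dynamic time warping distance between two numeric sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # A step may advance either sequence or both (match / stretch)
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Sequences that differ only by a locally stretched segment are at DTW distance zero, whereas Euclidean distance is not even defined for series of unequal length; this is the kind of robustness such taxonomies formalize.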
Behavior of Machine Learning Algorithms in Adversarial Environments | |
Duke Activity Status Index for Cardiovascular Diseases: Validation of the Portuguese Translation | BACKGROUND
The Duke Activity Status Index (DASI) assesses the functional capacity of patients with cardiovascular disease (CVD), but there is no Portuguese version validated for CVD.
OBJECTIVES
To translate and adapt cross-culturally the DASI for the Portuguese-Brazil language, and to verify its psychometric properties in the assessment of functional capacity of patients with CVD.
METHODS
The DASI was translated into Portuguese, then checked by back-translation into English and evaluated by an expert committee. The pre-test version was first evaluated in 30 subjects. The psychometric properties and the correlation with exercise testing were assessed in a second group of 67 subjects. An exploratory factor analysis was performed on all 97 subjects to verify the construct validity of the DASI.
RESULTS
The intraclass correlation coefficient for test-retest reliability was 0.87 and for the inter-rater reliability was 0.84. Cronbach's α for internal consistency was 0.93. The concurrent validity was verified by significant positive correlations of DASI scores with the VO2max (r = 0.51, p < 0.001). The factor analysis yielded two factors, which explained 54% of the total variance, with factor 1 accounting for 40% of the variance. Application of the DASI required between one and three and a half minutes per patient.
CONCLUSIONS
The Brazilian version of the DASI appears to be a valid, reliable, fast and easy to administer tool to assess functional capacity among patients with CVD. |
Polymyalgia rheumatica--a delayed sequela of Borrelia infection? | |
Robotics in my work and life | Margaret Atwood is a giant of modern literature who refuses to rest on her laurels. She has anticipated, satirized, and even changed the popular pre-conceptions of our time, and is the rare writer whose work is adored by the public, acclaimed by the critics, and read on university campuses. On stage, Atwood is both serious minded and wickedly funny. A winner of many international literary awards, including the prestigious Booker Prize, Margaret Atwood is the author of more than thirty volumes of poetry, children's literature, fiction, and non-fiction. She is perhaps best known for her novels, which include The Edible Woman, The Handmaid's Tale, The Robber Bride, Alias Grace, The Blind Assassin, Oryx and Crake, and The Year of the Flood. Her non-fiction book Payback: Debt and the Shadow Side of Wealth, part of the Massey Lecture series, was recently made into a documentary. Her new book, Madaddam (the third novel in the Oryx and Crake trilogy), has received rave reviews: "An extraordinary achievement" (The Independent); "A fitting and joyous conclusion" (The New York Times).
Atwood's work has been published in more than forty languages, including Farsi, Japanese, Turkish, Finnish, Korean, Icelandic and Estonian. In 2004, she co-invented the LongPen, a remote signing device that allows someone to write in ink anywhere in the world via tablet PC and the internet. She is also a popular personality on Twitter, with over 300,000 followers.
Atwood was born in 1939 in Ottawa and grew up in northern Ontario, Quebec, and Toronto. She received her undergraduate degree from Victoria College at the University of Toronto and her master's degree from Radcliffe College. |
High-Voltage Modular Switched Capacitor Pulsed Power Generator | The use of power electronics converters in pulsed power applications has introduced a new series of reliable, long-life, and cost-effective pulse generators. However, these converters suffer from the limited power ratings of semiconductor switches, which necessitates the introduction of a new family of modular topologies. This paper proposes a modular power electronics converter based on a voltage multiplier as a high-voltage pulsed power generator. This modular circuit is able to generate a flexible high output voltage from low-voltage input sources. The circuit topology and operational principles of the proposed topology are verified via experimental and simulation results as well as theoretical analysis. |
An 8$\,\times\,$ 8 Butler Matrix in 0.13-$\mu{\hbox {m}}$ CMOS for 5–6-GHz Multibeam Applications | This paper presents a miniature 5-6-GHz 8 × 8 Butler matrix in a 0.13-μm CMOS implementation. The 8 × 8 design results in an insertion loss of 3.5 dB at 5.5 GHz with a bandwidth of 5-6 GHz and no power consumption. The chip area is 2.5 × 1.9 mm2 including all pads. The 8 × 8 matrix is mounted on a Teflon board with eight antennas, and the measured patterns agree well with theory and show an isolation of >12 dB at 5-6 GHz. CMOS Butler matrices offer a simple and low-power alternative to eight-element phased-array systems for high-gain transceivers. The application areas are high data-rate communications at 5-6 and at 57-66 GHz. They are also excellent candidates for multiple-input-multiple-output systems. |
Cognitive Consequences of Trilingualism. | Aims and Objectives
The objectives of the present research were to examine the cognitive consequences of trilingualism and explain them relative to the cognitive consequences of bilingualism.
Approach
A comparison of cognitive abilities in trilinguals and bilinguals was conducted. In addition, we proposed a cognitive plasticity framework to account for cognitive differences and similarities between trilinguals and bilinguals.
Data and Analysis
Three aspects of cognition were analyzed: (1) cognitive reserve in older adults, as measured by age of onset of Alzheimer's disease and mild cognitive impairment; (2) inhibitory control in children and younger adults, as measured by response times on behavioral Simon and flanker tasks; and (3) memory generalization in infants and toddlers, as measured by accuracy on behavioral deferred imitation tasks. Results were considered within a framework of cognitive plasticity, which took into account several factors that may affect plasticity, including the age of learning a third language and the extent to which additional cognitive resources are needed to learn the third language.
Findings
A mixed pattern of results was observed. In some cases, such as cognitive reserve in older adults, trilinguals showed larger advantages than bilinguals. On other measures, for example inhibitory control in children and younger adults, trilinguals were found to exhibit the same advantages as bilinguals. In still other cases, like memory generalization in infants and toddlers, trilinguals did not demonstrate the advantages seen in bilinguals.
Originality
This study is the first comprehensive analysis of how learning a third language affects the cognitive abilities that are modified by bilingual experience, and the first to propose a cognitive plasticity framework that can explain and predict trilingual-bilingual differences.
Significance
This research shows that the cognitive consequences of trilingualism are not simply an extension of bilingualism's effects; rather, trilingualism has distinct consequences, with theoretical implications for our understanding of linguistic and cognitive processes and their plasticity, as well as applied-science implications for using second and third language learning in educational and rehabilitative contexts to foster successful cognitive development and aging. |
Variation on a 36-Ma-old theme: length, intensity and rhythm of volcanism. A record from the Hocheifel (Germany) | Abstract K-Ar eruption ages of Tertiary alkali-basalt occurrences in the Hocheifel form the basis of a frequency distribution, whose significance and geological meaning are discussed. Volcanic activity in the area occurred between 45 Ma and 24 Ma, reaching a maximum at 36.0 Ma. The curve of volcanic intensity is periodically modulated, with a period of 4.1 ± 0.5 Ma. This rhythm appears simultaneously in all the volcanic rock types, making a correlation between rock chemistry and age impossible. Volcanism is considered to have been stationary throughout the entire eruptive period, as the geographic centres of all eruptive phases coincide. |
Characterization of the compact Hokuyo URG-04LX 2D laser range scanner | This paper presents a detailed characterization of the Hokuyo URG-04LX 2D laser range finder. While the sensor specifications only provide a rough estimation of the sensor accuracy, the present work analyzes issues such as time drift effects and dependencies on distance, target properties (color, brightness and material) as well as incidence angle. Since the sensor is intended to be used for measurements of a tubelike environment on an inspection robot, the characterization is extended by investigating the influence of the sensor orientation and dependency on lighting conditions. The sensor characteristics are compared to those of the Sick LMS 200, which is commonly used in robotic applications when size and weight are not critical constraints. The results show that the sensor accuracy is strongly dependent on the target properties (color, brightness, material) and that it is consequently difficult to establish a calibration model. The paper also identifies cases for which the sensor returns faulty measurements, mainly when the surface has low reflectivity (dark surfaces, foam) or for high incidence angles on shiny surfaces. On the other hand, the repeatability of the sensor seems to be competitive with that of the LMS 200.
User Evaluation of Websites: From First Impression to Recommendation | Content, usability, and aesthetics are core constructs in users’ perception and evaluation of websites, but little is known about their interplay in different use phases. In a first study, web users (N=330) stated content as most relevant, followed by usability and aesthetics. In study 2, tests with four websites were performed (N=300), and the resulting data were modeled in path analyses. In this model, aesthetics had the largest influence on first impressions, while all three constructs had an impact on first and overall impressions. However, only content contributed significantly to the intention to revisit or recommend a website. Using data from a third study (N=512, 42 websites), we were able to replicate this model. As before, perceived usability affected first and overall impressions, while content perception was important for all analyzed website use phases. In addition, aesthetics also had a small but significant impact on the participants’ intentions to revisit or recommend.
Deep Learning for Real Time Crime Forecasting | Accurate real time crime prediction is a fundamental issue for public safety, but remains a challenging problem for the scientific community. Crime occurrences depend on many complex factors. Compared to many predictable events, crime is sparse. At different spatiotemporal scales, crime distributions display dramatically different patterns. These distributions are of very low regularity in both space and time. In this work, we adapt the state-of-the-art deep learning spatio-temporal predictor, ST-ResNet [Zhang et al, AAAI, 2017], to collectively predict crime distribution over the Los Angeles area. Our models are two-staged. First, we preprocess the raw crime data. This includes regularization in both space and time to enhance predictable signals. Second, we adapt hierarchical structures of residual convolutional units to train multifactor crime prediction models. Experiments over a half-year period in Los Angeles reveal the highly accurate predictive power of our models.
Vesp’R: design and evaluation of a handheld AR device | This paper focuses on the design of devices for handheld spatial interaction. In particular, it addresses the requirements and construction of a new platform for interactive AR, described from an ergonomics stance, prioritizing human factors of spatial interaction. The result is a multi-configurable platform for spatial interaction, evaluated in two AR application scenarios. The user tests validate the design with regards to grip, weight balance and control allocation, and provide new insights on the human factors involved in handheld spatial interaction. |
An evaluation of avian influenza diagnostic methods with domestic duck specimens. | Monitoring of poultry, including domestic ducks, for avian influenza (AI) virus has increased considerably in recent years. However, the current methods validated for the diagnosis and detection of AI virus infection in chickens and turkeys have not been evaluated for performance with samples collected from domestic ducks. In order to ensure that methods for the detection of AI virus or AI virus antibody will perform acceptably well with these specimens, samples collected from domestic ducks experimentally infected with a U.S. origin low pathogenicity AI virus, A/Avian/NY/31588-3/00 (H5N2), were evaluated. Oropharyngeal (OP) and cloacal swabs were collected at 1, 2, 3, 4, 5, 7, 10, 14, and 21 days postinoculation (PI) for virus detection by virus isolation, which was considered the reference method, and real-time RT-PCR. In addition, two commercial antigen immunoassays were used to test swab material collected 2-7 days PI. Virus isolation and real-time RT-PCR performed similarly; however, the antigen immunoassays only detected virus during the peak of shed, 2-4 days PI, and both kits detected virus in fewer than half of the samples that were positive by virus isolation. Cloacal swabs yielded more positives than OP swabs with all virus detection tests. To evaluate AI virus antibody detection serum was collected from the ducks at 7, 14, and 21 days PI and was tested by agar gel immunodiffusion (AGID) assay, a commercial blocking enzyme-linked immunosorbent assay (ELISA), and homologous hemagglutination inhibition (HI) assay, which was used as the reference method. Results for the ELISA and HI assay were almost identical with serum collected at 7 and 14 days PI; however, by 21 days PI 100% of the samples were positive by HI assay and only 65% were positive by ELISA. At all time points AGID detected antibody in substantially fewer samples than either ELISA or HI assay. |
Electromyography guides toward subgroups of mutations in muscle channelopathies. | Myotonic syndromes and periodic paralyses are rare disorders of skeletal muscle characterized mainly by muscle stiffness or episodic attacks of weakness. Familial forms are caused by mutations in genes coding for skeletal muscle voltage-gated ion channels. Exercise is known to trigger, aggravate, or relieve the symptoms. Therefore, exercise can be used as a functional test in electromyography to improve the diagnosis of these muscle disorders. Abnormal changes in the compound muscle action potential can be disclosed using different exercise tests. We report the outcome of an inclusive electromyographic survey of a large population of patients with identified ion channel gene defects. Standardized protocols comprising short and long exercise tests were applied on 41 unaffected control subjects and on 51 case patients with chloride, sodium, or calcium channel mutations known to cause myotonia or periodic paralysis. These tests disclosed significant changes of compound muscle action potential, which generally matched the clinical symptoms. Combining the responses to the different tests defined five electromyographic patterns (I-V) that correlated with subgroups of mutations and may be used in clinical practice as guides for molecular diagnosis. We hypothesize that mutations are segregated into the different electromyographic patterns according to the underlying pathophysiological mechanisms. |
DYNAMIC BEHAVIOR OF TALL BUILDINGS UNDER WIND : INSIGHTS FROM FULL-SCALE MONITORING | The wind-induced response of tall buildings is inherently sensitive to structural dynamic properties like frequency and damping ratio. The latter parameter in particular is fraught with uncertainty in the design stage and may result in a built structure whose acceleration levels exceed design predictions. This reality has motivated the need to monitor tall buildings in full-scale. This paper chronicles the authors’ experiences in the analysis of full-scale dynamic response data from tall buildings around the world, including full-scale datasets from high rises in Boston, Chicago, and Seoul. In particular, this study focuses on the effects of coupling, beat phenomenon, amplitude dependence, and structural system type on dynamic properties, as well as correlating observed periods of vibration against finite element predictions. The findings suggest the need for time–frequency analyses to identify coalescing modes and the mechanisms spurring them. The study also highlighted the effect of this phenomenon on damping values, the overestimates that can result due to amplitude dependence, as well as the comparatively larger degree of energy dissipation experienced by buildings dominated by frame action. Copyright © 2007 John Wiley & Sons, Ltd.
Chemical activation through super energy transfer collisions. | Can a molecule be efficiently activated with a large amount of energy in a single collision with a fast atom? If so, this type of collision will greatly affect molecular reactivity and equilibrium in systems where abundant hot atoms exist. Conventional expectation of molecular energy transfer (ET) is that the probability decreases exponentially with the amount of energy transferred, hence the probability of what we label "super energy transfer" is negligible. We show, however, that in collisions between an atom and a molecule for which chemical reactions may occur, such as those between a translationally hot H atom and an ambient acetylene (HCCH) or sulfur dioxide, ET of chemically significant amounts of energy commences with surprisingly high efficiency through chemical complex formation. Time-resolved infrared emission observations are supported by quasi-classical trajectory calculations on a global ab initio potential energy surface. Results show that ∼10% of collisions between H atoms moving with ∼60 kcal/mol energy and HCCH result in transfer of up to 70% of this energy to activate internal degrees of freedom. |
The GRASP Multiple Micro-UAV Testbed | In the last five years, advances in materials, electronics, sensors, and batteries have fueled a growth in the development of microunmanned aerial vehicles (MAVs) that are between 0.1 and 0.5 m in length and 0.1-0.5 kg in mass [1]. A few groups have built and analyzed MAVs in the 10-cm range [2], [3]. One of the smallest MAVs is the Picoflyer, with a 60-mm propeller diameter and a mass of 3.3 g [4]. Platforms in the 50-cm range are more prevalent, with several groups having built and flown systems of this size [5]-[7]. In fact, there are several commercially available radio-controlled (RC) helicopters and research-grade helicopters in this size range [8].
Lactobacillus iners: Friend or Foe? | The vaginal microbial community is typically characterized by abundant lactobacilli. Lactobacillus iners, a fairly recently detected species, is frequently present in the vaginal niche. However, the role of this species in vaginal health is unclear, since it can be detected in normal conditions as well as during vaginal dysbiosis, such as bacterial vaginosis, a condition characterized by an abnormal increase in bacterial diversity and lack of typical lactobacilli. Compared to other Lactobacillus species, L. iners has more complex nutritional requirements and a Gram-variable morphology. L. iners has an unusually small genome (ca. 1 Mbp), indicative of a symbiotic or parasitic lifestyle, in contrast to other lactobacilli that show niche flexibility and genomes of up to 3-4 Mbp. The presence of specific L. iners genes, such as those encoding iron-sulfur proteins and unique σ-factors, reflects a high degree of niche specification. The genome of L. iners strains also encodes inerolysin, a pore-forming toxin related to vaginolysin of Gardnerella vaginalis. Possibly, this organism may have clonal variants that in some cases promote a healthy vagina, and in other cases are associated with dysbiosis and disease. Future research should examine this friend or foe relationship with the host. |
Overview of Beyond-CMOS Devices and a Uniform Methodology for Their Benchmarking | Multiple logic devices are presently under study within the Nanoelectronic Research Initiative (NRI) to carry the development of integrated circuits beyond the complementary metal-oxide-semiconductor (CMOS) roadmap. Structure and operational principles of these devices are described. Theories used for benchmarking these devices are overviewed, and a general methodology is described for consistent estimates of the circuit area, switching time, and energy. The results of the comparison of the NRI logic devices using these benchmarks are presented. |
Tag-aware recommender systems by fusion of collaborative filtering algorithms | Recommender Systems (RS) aim at predicting items or ratings of items that the user is interested in. Collaborative Filtering (CF) algorithms such as user- and item-based methods are the dominant techniques applied in RS algorithms. To improve recommendation quality, metadata such as content information of items has typically been used as additional knowledge. With the increasing popularity of collaborative tagging systems, tags could be interesting and useful information to enhance RS algorithms. Unlike attributes, which are "global" descriptions of items, tags are "local" descriptions of items given by the users. To the best of our knowledge, there has been no prior study on tag-aware RS. In this paper, we propose a generic method that allows tags to be incorporated into standard CF algorithms, by reducing the three-dimensional correlations to three two-dimensional correlations and then applying a fusion method to re-associate these correlations. Additionally, we investigate the effect of incorporating tag information into different CF algorithms. Empirical evaluations of three CF algorithms on a real-life data set demonstrate that incorporating tags into our proposed approach provides promising and significant results.
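The fusion idea described in this abstract, reducing the three-dimensional user-item-tag relation to pairwise correlations and recombining them, can be sketched as follows. This is a minimal illustration under assumptions, not the paper's exact method: the toy matrices, the cosine-similarity choice, and the fusion weight `lam` are all invented for the example.

```python
import numpy as np

# Toy data (illustrative): rows = users, columns = items / tags.
user_item = np.array([[5, 3, 0],
                      [4, 0, 0],
                      [1, 1, 5],
                      [0, 1, 4]], dtype=float)
item_tag = np.array([[1, 0, 1],   # item 0 carries tags 0 and 2
                     [1, 1, 0],
                     [0, 1, 1]], dtype=float)

def cosine_sim(m):
    """Pairwise cosine similarity between the rows of m."""
    norms = np.linalg.norm(m, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    unit = m / norms
    return unit @ unit.T

# Two two-dimensional views of item similarity: rating-based and tag-based.
sim_rating = cosine_sim(user_item.T)   # item-item from user ratings
sim_tag = cosine_sim(item_tag)         # item-item from tag assignments

# Linear fusion of the two similarity matrices (lam is a tunable weight).
lam = 0.5
sim_fused = lam * sim_rating + (1 - lam) * sim_tag

def predict(user, item, sims, ratings):
    """Item-based CF prediction: similarity-weighted mean of the user's ratings."""
    rated = ratings[user] > 0
    rated[item] = False
    w = sims[item, rated]
    if w.sum() == 0:
        return 0.0
    return float(w @ ratings[user, rated] / w.sum())

print(round(predict(1, 1, sim_fused, user_item), 3))
```

Swapping `sim_fused` for `sim_rating` in `predict` recovers plain item-based CF, which makes the effect of the tag information easy to isolate in an experiment.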
RDF2Vec: RDF Graph Embeddings for Data Mining | Linked Open Data has been recognized as a valuable source for background information in data mining. However, most data mining tools require features in propositional form, i.e., a vector of nominal or numerical features associated with an instance, while Linked Open Data sources are graphs by nature. In this paper, we present RDF2Vec, an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs. We generate sequences by leveraging local information from graph substructures, harvested by Weisfeiler-Lehman Subtree RDF Graph Kernels and graph walks, and learn latent numerical representations of entities in RDF graphs. Our evaluation shows that such vector representations outperform existing techniques for the propositionalization of RDF graphs on a variety of different predictive machine learning tasks, and that feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be easily reused for different tasks. |
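The walk-generation step that RDF2Vec feeds into a word2vec-style model can be sketched roughly as below. The triples and the `random_walks` helper are made-up illustrations; a real pipeline would pass the resulting token sequences to a skip-gram implementation such as gensim's `Word2Vec` to learn the entity embeddings.

```python
import random

# Tiny RDF-style graph (illustrative): (subject, predicate, object) triples.
triples = [
    ("dbr:Berlin", "dbo:country", "dbr:Germany"),
    ("dbr:Germany", "dbo:capital", "dbr:Berlin"),
    ("dbr:Berlin", "rdf:type", "dbo:City"),
    ("dbr:Germany", "rdf:type", "dbo:Country"),
]

# Adjacency list: entity -> list of (predicate, object) edges.
graph = {}
for s, p, o in triples:
    graph.setdefault(s, []).append((p, o))

def random_walks(entity, n_walks=8, depth=2, seed=0):
    """Generate walk 'sentences': entity -> predicate -> object -> ..."""
    rng = random.Random(seed)
    walks = []
    for _ in range(n_walks):
        walk = [entity]
        node = entity
        for _ in range(depth):
            edges = graph.get(node)
            if not edges:
                break          # dead end: stop this walk early
            p, o = rng.choice(edges)
            walk.extend([p, o])
            node = o
        walks.append(walk)
    return walks

walks = random_walks("dbr:Berlin")
# Each walk is a token sequence a skip-gram model can consume,
# so entities that co-occur in walks end up with similar vectors.
print(walks[0])
```

The Weisfeiler-Lehman subtree variant mentioned in the abstract replaces these plain walks with relabeled subtree patterns, but the downstream language-modeling step is the same.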
Psychometric properties of a Korean version of the summary of diabetes self-care activities measure. | BACKGROUND
The summary of diabetes self-care activities (SDSCA) questionnaire is one of the most widely used self-report instruments for measuring diabetes self-management in adults.
OBJECTIVES
This study aimed to examine the psychometric properties of a Korean version of the SDSCA questionnaire.
METHODS
The 11-item English version of the SDSCA was translated into Korean following the standard translation methodology. The questionnaire was administered to 208 patients with type 2 diabetes. Exploratory and confirmatory factor analyses (EFA and CFA) were carried out for construct validity. Content validity index (CVI), internal consistency and a diabetes management self-efficacy scale (DMSES) were assessed.
RESULTS
The CVI of a Korean version of the SDSCA was .83. The EFA yielded a 9-item measure with a four-factor solution, with the same factor labels as the original scale. The results of the CFA showed a good fit for the 9-item Korean SDSCA version (SDSCA-K). The internal consistency of the SDSCA-K was moderate (Cronbach's α=.69), and a positive correlation between the SDSCA-K and the DMSES was identified.
CONCLUSION
The current study provides the initial psychometric properties of SDSCA-K modified to 9 items and supports SDSCA-K as a reliable and valid measure of diabetes self-management in Korean patients. |
Predicting player churn in destiny: A Hidden Markov models approach to predicting player departure in a major online game | Destiny is, to date, the most expensive digital game ever released, with a total operating budget of over half a billion US dollars. It stands as one of the main examples of AAA titles, the term used for the largest and most heavily marketed game productions in the games industry. Destiny is a blend of a shooter game and massively multi-player online game, and has attracted tens of millions of players. As a persistent game title, predicting retention and churn in Destiny is crucial to the running operations of the game, but prediction has not been attempted for this type of game in the past. In this paper, we present a discussion of the challenge of predicting churn in Destiny, evaluate the area under the ROC curve (AUC) of behavioral features, and use Hidden Markov Models to develop a churn prediction model for the game.
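The core of an HMM-based churn predictor is scoring a player's observation sequence with the forward algorithm to get a posterior over hidden states. The sketch below is a generic two-state illustration with invented parameters and state names, not the model fitted in the Destiny study.

```python
import numpy as np

# Two hidden states (illustrative): still engaged vs. drifting toward churn.
states = ["engaged", "churning"]
start = np.array([0.9, 0.1])               # initial state distribution
trans = np.array([[0.85, 0.15],            # P(next state | current state)
                  [0.10, 0.90]])
# Observations per week: 0 = played, 1 = did not play.
emit = np.array([[0.8, 0.2],               # P(observation | state)
                 [0.3, 0.7]])

def forward(obs):
    """Forward algorithm: filtered posterior P(state | observations so far)."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return alpha / alpha.sum()

# A player who stops logging in drifts toward the "churning" state.
posterior = forward([0, 0, 1, 1, 1])
print({s: round(p, 3) for s, p in zip(states, posterior)})
```

In practice the transition and emission parameters would be fitted with Baum-Welch on historical play sessions, and a churn alarm raised when the "churning" posterior crosses a threshold.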
Comparison of learning algorithms for handwritten digit recognition | This paper compares the performance of several classifier algorithms on a standard database of handwritten digits. We consider not only raw accuracy, but also rejection, training time, recognition time, and memory requirements.
Orally dissolving strips: A new approach to oral drug delivery system | Recently, fast dissolving films have been gaining interest as an alternative to fast dissolving tablets. The films are designed to dissolve upon contact with a wet surface, such as the tongue, within a few seconds, meaning the consumer can take the product without need for additional liquid. This convenience provides both a marketing advantage and increased patient compliance. As the drug is directly absorbed into systemic circulation, degradation in the gastrointestinal tract and the first pass effect can be avoided. These points make this formulation most popular and acceptable among pediatric and geriatric patients and patients with fear of choking. Over-the-counter films for pain management and motion sickness are commercialized in the US markets. Many companies are utilizing transdermal drug delivery technology to develop thin film formats. In the present review, recent advancements regarding fast dissolving buccal film formulation and their evaluation parameters are compiled.
Virtual model control of a bipedal walking robot | The transformation from high level task specification to low level motion control is a fundamental issue in sensorimotor control in animals and robots. This thesis develops a control scheme called virtual model control which addresses this issue. Virtual model control is a motion control language which uses simulations of imagined mechanical components to create forces, which are applied through joint torques, thereby creating the illusion that the components are connected to the robot. Due to the intuitive nature of this technique, designing a virtual model controller requires the same skills as designing the mechanism itself. A high level control system can be cascaded with the low level virtual model controller to modulate the parameters of the virtual mechanisms. Discrete commands from the high level controller would then result in fluid motion. An extension of Gardner's Partitioned Actuator Set Control method is developed. This method allows for the specification of constraints on the generalized forces which each serial path of a parallel mechanism can apply. Virtual model control has been applied to a bipedal walking robot. A simple algorithm utilizing a simple set of virtual components has successfully compelled the robot to walk eight consecutive steps. Thesis Supervisor: Gill A. Pratt Title: Assistant Professor of Electrical Engineering and Computer Science
Accessing the deep web | Attempting to locate and quantify material on the Web that is hidden from typical search techniques. |
Cytoscape: a software environment for integrated models of biomolecular interaction networks. | Cytoscape is an open source software project for integrating biomolecular interaction networks with high-throughput expression data and other molecular states into a unified conceptual framework. Although applicable to any system of molecular components and interactions, Cytoscape is most powerful when used in conjunction with large databases of protein-protein, protein-DNA, and genetic interactions that are increasingly available for humans and model organisms. Cytoscape's software Core provides basic functionality to layout and query the network; to visually integrate the network with expression profiles, phenotypes, and other molecular states; and to link the network to databases of functional annotations. The Core is extensible through a straightforward plug-in architecture, allowing rapid development of additional computational analyses and features. Several case studies of Cytoscape plug-ins are surveyed, including a search for interaction pathways correlating with changes in gene expression, a study of protein complexes involved in cellular recovery to DNA damage, inference of a combined physical/functional interaction network for Halobacterium, and an interface to detailed stochastic/kinetic gene regulatory models. |
Aspartate transcarbamylase. The use of primary kinetic and solvent deuterium isotope effects to delineate some aspects of the mechanism. | Abstract 14C- and 18O-labeled carbamyl phosphates have been used to study primary kinetic isotope effects in the reaction catalyzed by the catalytic subunit of the aspartate transcarbamylase of Escherichia coli. A novel aspect of the methodology is the use of 14C as a tracer to determine the isotope effect caused by 18O alone, without the need for analysis by mass spectrometry. At optimum concentrations of carbamyl phosphate and l-aspartate and at pH 7.8 (near the pH optimum of the enzyme), the effect of 14C at the carbonyl carbon or 18O at the anhydride oxygen of carbamyl phosphate is very small. One possible explanation, but not the only one, is that kinetic steps in which the bonds to these atoms change are not important in determining the observed rate. Such an interpretation is consistent with a rate-determining conformational change of the enzyme at the pH optimum. At pH 10, where the activity of the enzyme is low, the rate with 14C-labeled carbamyl phosphate is about 95% of the rate with the 12C compound. On the other hand, no substantial effect is seen for 18O in the anhydride position. Taken together, these observations indicate that a step in which the bonding to the carbon atom changes and the bonding to the anhydride oxygen does not change becomes important in the observed rate at pH 10. Such a process might be the formation of a tetrahedral intermediate. The solvent deuterium isotope effect in D2O has also been studied, at optimum concentrations of the substrates. The maximum velocity is essentially the same in D2O and in H2O. Thus, proton transfer does not seem to be involved in a rate-determining step in the aspartate transcarbamylase reaction.
The pH activity profile of the enzyme is shifted up by about 0.8 pH unit in D2O, an effect which can be accounted for approximately quantitatively by the expected change in the pKa of the amino group of l-aspartate. |
Epidemiology of rotavirus infection in children in Blantyre, Malawi, 1997-2007. | Acute gastroenteritis caused by rotavirus infection is an important cause of morbidity and mortality among infants and young children in Africa. From 1997 through 2007, we enrolled 3740 children <5 years of age with acute gastroenteritis who received hospital care at the Queen Elizabeth Central Hospital in Blantyre, Malawi. Group A rotavirus was detected in fecal specimens by enzyme immunoassay. Rotavirus strains were characterized for VP7 (G) and VP4 (P) types with use of reverse-transcription polymerase chain reaction. Overall, rotavirus was detected in one-third of children. The median age of children with rotavirus gastroenteritis was 7.8 months, compared with 10.9 months for those without rotavirus in stool specimens (P < .001). Rotavirus circulated throughout the year, with the detection proportion greatest during the dry season (from May through October). A total of 15 single rotavirus strain types were detected during the study period, with genotypes P[8]G1, P[6]G8, P[4]G8, P[6]G1, P[8]G3, and P[6]G9 comprising 83% of all strains characterized. Serotype G12 was detected for the first time in Blantyre during the final 2 years of study. Zoonotic transmission and viral reassortment contributed to the rich diversity of strains identified. Current rotavirus vaccines have the potential to greatly reduce the rotavirus disease burden in Malawi, but they will be required to protect against a broad range of rotavirus serotypes in a young population with year-round rotavirus exposure.
A time series approach for profiling attack | The goal of a profiling attack is to challenge the security of a cryptographic device in the worst-case scenario. Though template attacks are reputed to be the strongest power analysis attacks, their effectiveness is strongly dependent on the validity of the Gaussian assumption. This has recently led to the appearance of nonparametric approaches, often based on machine learning strategies. Though these approaches outperform template attacks, they tend to neglect the time series nature of the power traces. In this paper, we propose an original multi-class profiling attack that takes into account the temporal dependence of power traces. The experimental study shows that the time series analysis approach is competitive with and often better than static classification alternatives.
Characteristics of high performing testers: a case study | Objective: We studied the characteristics of high-performing software testers in industry. Method: We conducted an exploratory case study, collecting data through recorded interviews of one development manager and three testers in each of the three companies, analysis of the defect database, and informal communication within our research partnership with the companies. Results: We found that experience, reflection, motivation and personal characteristics were the top-level themes. Experience related to the domain, e.g. processes of the customer, and, on the other hand, specialized technical skills, e.g. performance testing, were seen as more important than skills of test case design and test planning.
Hypothalamic-Pituitary-Adrenal Axis Feedback Control. | The hypothalamo-pituitary-adrenal (HPA) axis is responsible for stimulation of adrenal corticosteroids in response to stress. Negative feedback control by corticosteroids limits pituitary secretion of corticotropin (ACTH) and hypothalamic secretion of corticotropin-releasing hormone (CRH) and vasopressin (AVP), resulting in regulation of both basal and stress-induced ACTH secretion. The negative feedback effect of corticosteroids occurs by action of corticosteroids at mineralocorticoid receptors (MRs) and/or glucocorticoid receptors (GRs) located in multiple sites in the brain and in the pituitary. The mechanisms of negative feedback vary according to the receptor type and location within the brain-hypothalamo-pituitary axis. A very rapid nongenomic action has been demonstrated for GR action on CRH neurons in the hypothalamus, and somewhat slower nongenomic effects are observed in the pituitary or other brain sites mediated by GR and/or MR. Corticosteroids also have genomic actions, including repression of the pro-opiomelanocortin (POMC) gene in the pituitary and the CRH and AVP genes in the hypothalamus. The rapid effect inhibits stimulated secretion, but requires a rapidly rising corticosteroid concentration. The more delayed inhibitory effect on stimulated secretion is dependent on the intensity of the stimulus and the magnitude of the corticosteroid feedback signal, but also on the neuroanatomical pathways responsible for activating the HPA. The pathways for activation of some stressors may partially bypass hypothalamic feedback sites at the CRH neuron, whereas others may not involve forebrain sites; therefore, some physiological stressors may override or bypass negative feedback, and other psychological stressors may facilitate responses to subsequent stress.
Investigating users' query formulations for cognitive search intents | This study investigated query formulations by users with Cognitive Search Intents (CSIs), which are users' needs for the cognitive characteristics of documents to be retrieved, e.g. comprehensibility, subjectivity, and concreteness. Our four main contributions are summarized as follows: (i) we proposed an example-based method of specifying search intents to observe query formulations by users without biasing them by presenting a verbalized task description; (ii) we conducted a questionnaire-based user study and found that about half our subjects did not input any keywords representing CSIs, even though they were conscious of CSIs; (iii) our user study also revealed that over 50% of subjects occasionally had experiences with searches with CSIs, while our evaluations demonstrated that the performance of a current Web search engine was much lower when we considered not only users' topical search intents but also CSIs; and (iv) we demonstrated that machine-learning-based query expansion could improve the performance for some types of CSIs. Our findings suggest that users over-adapt to current Web search engines, and create opportunities to estimate CSIs from non-verbal user input.
Factors Predicting Post-thyroidectomy Hypoparathyroidism Recovery | Hypoparathyroidism is the most common complication after thyroidectomy and the main reason for frequent outpatient visits; however, there is a poor understanding of its outcomes and no clear follow-up strategies are available. We aimed to predict post-thyroidectomy hypoparathyroidism outcomes and identify relevant factors. A multicenter, standardized prospective study was conducted. The parathyroid hormone (PTH) level was measured preoperatively and at the first hour after surgery, then at each outpatient follow-up visit after 1 week, 3 weeks, and 1 month, and then every 2 months, until it either reached normal values or up to 6 months. Cox proportional hazard modeling was used to determine the factors that affect PTH recovery. A Weibull distribution model was used to predict time to recovery. Both models were evaluated by goodness of fit. A total of 186 patients were enrolled in the study; 53 (28.5 %) developed hypoparathyroidism, 47 of them (88.6 %) females. Their mean age was 41.2 years, and 11.4 % were diabetic. Of these patients, 33 (62.3 %) recovered within 1 month, 10 (18.9 %) recovered after 1 month but within 6 months, 7 (13.2 %) did not recover within 6 months, and 3 (5.6 %) missed follow-up. Factors found to affect and predict the speed of recovery were the preoperative PTH level, perioperative percent drop in PTH level, diabetes mellitus, and gender. This study provides potentially useful information for early prediction of PTH recovery, and it highlights the factors that affect the course of hypoparathyroidism recovery, which in turn should be reflected in better patient management, improved patient satisfaction, and overall cost-effectiveness.
The single channel interferometer using a pseudo-Doppler direction finding system | A new technique for obtaining high-performance, low-power radio direction finding (RDF) using a single receiver is presented. For man-portable applications, multichannel systems consume too much power, are too expensive, and are too heavy to easily be carried by a single individual. Most single channel systems are not accurate enough or do not provide the capability to listen while direction finding (DF) is being performed. By employing feedback in a pseudo-Doppler system via a vector modulator in the IF of a single receiver and an adaptive algorithm to control it, the accuracy of a pseudo-Doppler system can be enhanced to the accuracy of an interferometer-based system without the expense of a multichannel receiver. And it will maintain audio listen-through while direction finding is being performed, all with a single inexpensive, low-power receiver. The use of these techniques provides performance not attainable by other single channel methods.
Automatic Differentiation of Algorithms for Machine Learning | Automatic differentiation—the mechanical transformation of numeric computer programs to calculate derivatives efficiently and accurately—dates to the origin of the computer age. Reverse mode automatic differentiation both antedates and generalizes the method of backwards propagation of errors used in machine learning. Despite this, practitioners in a variety of fields, including machine learning, have been little influenced by automatic differentiation, and make scant use of available tools. Here we review the technique of automatic differentiation, describe its two main modes, and explain how it can benefit machine learning practitioners. To reach the widest possible audience our treatment assumes only elementary differential calculus, and does not assume any knowledge of linear algebra. |
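The distinction between the two modes mentioned in this abstract can be made concrete with a toy example: forward mode amounts to pushing a (value, derivative) pair, a dual number, through the computation. This is an illustrative sketch, not code from the article; the `Dual` class and `derivative` helper are hypothetical names.

```python
class Dual:
    """Toy forward-mode AD: carries (value, derivative) through arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f and df/dx at x in a single forward pass."""
    out = f(Dual(x, 1.0))
    return out.val, out.dot

# d/dx (x*x + 3*x) = 2x + 3; at x = 2.0 this is 7.0
val, grad = derivative(lambda x: x * x + 3 * x, 2.0)
```

Reverse mode, by contrast, records the computation and sweeps backwards once per output, which is why it generalizes backpropagation; the forward sweep above costs one pass per input instead.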
Effects of a physical therapy program combined with manual lymphatic drainage on shoulder function, quality of life, lymphedema incidence, and pain in breast cancer patients with axillary web syndrome following axillary dissection | The aim of this study was to evaluate the effects of physical therapy (PT) combined with manual lymphatic drainage (MLD) on shoulder function, pain, lymphedema, visible cords, and quality of life (QOL) in breast cancer patients with axillary web syndrome (AWS). In this prospective, randomized trial, 41 breast cancer patients with visible and palpable cords on the arm and axilla and a numeric rating scale (NRS) pain score of >3 were randomly assigned to PT (3 times/week for 4 weeks; n = 20) and PT combined with MLD (5 times/week for 4 weeks; PTMLD; n = 21) groups. MLD was performed by a physical therapist and by the patients themselves during week 1 and weeks 2–4, respectively. Arm volume; shoulder function (muscular strength; active range of motion; and disabilities of the arm, shoulder, and hand [DASH]); QOL (European Organization for Research and Treatment of Cancer Core and Breast Cancer-Specific QOL questionnaires); and pain (NRS) were assessed at baseline and after 4 weeks of treatment. QOL (including functional and symptom aspects), shoulder flexor strength, DASH, and NRS scores were significantly improved in both groups after the 4-week intervention (P < 0.05). NRS score and arm volume were significantly lower in the PTMLD group than in the PT group (P < 0.05). Lymphedema was observed in the PT group (n = 6), but not in the PTMLD group (P < 0.05). PT improves shoulder function, pain, and QOL in breast cancer patients with AWS, and combining it with MLD additionally decreases arm lymphedema. |
Equivariance Through Parameter-Sharing | We propose to study equivariance in deep neural networks through parameter symmetries. In particular, given a group G that acts discretely on the input and output of a standard neural network layer φ_W : R^N → R^M, we show that φ_W is equivariant with respect to the G-action iff G explains the symmetries of the network parameters W. Inspired by this observation, we then propose two parameter-sharing schemes to induce the desirable symmetry on W. Our procedure for tying the parameters achieves G-equivariance and, under some conditions on the action of G, it guarantees sensitivity to all other permutation groups outside G. |
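The parameter-sharing idea above can be sketched in its simplest instance, circular shift: tying the weight matrix into a circulant pattern (one shared parameter per offset, i.e., circular convolution) yields a layer that is equivariant to circular shifts of its input. The function names below are illustrative, not from the paper.

```python
def circulant_layer(w, x):
    """Layer with circulant parameter-sharing: output[i] = sum_j w[(j - i) % n] * x[j].
    Tying W[i][j] = w[(j - i) % n] makes the layer shift-equivariant."""
    n = len(x)
    return [sum(w[(j - i) % n] * x[j] for j in range(n)) for i in range(n)]

def shift(x, s):
    """Circularly shift a list by s positions."""
    return x[-s:] + x[:-s]

w = [1.0, 2.0, 0.5]   # shared parameters, one per offset
x = [3.0, -1.0, 4.0]

# Equivariance: shifting the input shifts the output identically.
lhs = circulant_layer(w, shift(x, 1))
rhs = shift(circulant_layer(w, x), 1)
```

Ordinary convolutional layers are the multi-channel analogue of exactly this tying pattern, which is how group convolution arises as a special case of the parameter-sharing procedure.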
ID Preserving Generative Adversarial Network for Partial Latent Fingerprint Reconstruction | Performing recognition tasks using latent fingerprint samples is often challenging for automated identification systems due to poor quality, distortion, and partially missing information in the input samples. We propose a direct latent fingerprint reconstruction model based on conditional generative adversarial networks (cGANs). Two modifications are applied to the cGAN to adapt it for the task of latent fingerprint reconstruction. First, the model is forced to generate three maps in addition to the ridge map to ensure that orientation and frequency information are considered in the generation process, and to prevent the model from filling large missing areas and generating erroneous minutiae. Second, a perceptual ID preservation approach is developed to force the generator to preserve the ID information during the reconstruction process. Using a synthetically generated database of latent fingerprints, the deep network learns to predict missing information from the input latent samples. We evaluate the proposed method in combination with two different fingerprint matching algorithms on several publicly available latent fingerprint datasets. We achieved a rank-10 accuracy of 88.02% on the IIIT-Delhi latent fingerprint database for the task of latent-to-latent matching and a rank-50 accuracy of 70.89% on the IIIT-Delhi MOLF database for the task of latent-to-sensor matching. Experimental results of matching reconstructed samples in both latent-to-sensor and latent-to-latent frameworks indicate that the proposed method significantly increases the matching accuracy of fingerprint recognition systems for latent samples. |
Post-therapy surveillance of patients with uterine cancers: value of integrated FDG PET/CT in the detection of recurrence | The purpose of this study was to prospectively determine the diagnostic accuracy of PET/CT in the detection of recurrence in patients with treated uterine cancers. Twenty-five women, ranging in age from 37 to 79 years (mean 58.9 years), who underwent primary surgical treatment for either a cervical or an endometrial cancer met the inclusion criterion of the study, which was suspicion of recurrence based on results of routine follow-up procedures. PET/CT was performed after administration of 18F-fluorodeoxyglucose (FDG); two readers interpreted the images in consensus. Histopathological findings or correlation with results of subsequent clinical and imaging follow-up examinations served as the reference standard. Diagnostic accuracy of PET/CT was reported in terms of the proportion of correctly classified patients and lesion sites. Tumour recurrence was found at histopathological analysis or follow-up examinations after PET/CT in 14 (56%) of the 25 patients. Patient-based sensitivity, specificity, positive predictive value, negative predictive value and accuracy of PET/CT for detection of tumour recurrence were 92.9%, 100.0%, 100.0%, 91.7% and 96.0%, respectively. Lesion site-based sensitivity, specificity, positive predictive value, negative predictive value and accuracy of PET/CT were 94.7%, 99.5%, 94.7%, 99.5% and 99.0%, respectively. This preliminary study shows that PET/CT may be an accurate method for the evaluation of recurrence in patients who have been treated for uterine cancers and are undergoing follow-up. |
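The patient-based figures quoted above are consistent with a 2×2 confusion matrix of 13 true positives, 1 false negative, 0 false positives, and 11 true negatives among the 25 patients (an inferred reconstruction, not stated explicitly in the abstract); the sketch below shows how such counts yield the reported metrics.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard diagnostic accuracy metrics from a 2x2 confusion matrix, as percentages."""
    return {
        "sensitivity": 100 * tp / (tp + fn),             # true positive rate
        "specificity": 100 * tn / (tn + fp),             # true negative rate
        "ppv": 100 * tp / (tp + fp),                     # positive predictive value
        "npv": 100 * tn / (tn + fn),                     # negative predictive value
        "accuracy": 100 * (tp + tn) / (tp + fn + fp + tn),
    }

# Counts assumed consistent with the patient-based results reported above:
m = diagnostic_metrics(tp=13, fn=1, fp=0, tn=11)
# sensitivity 92.9%, specificity 100.0%, PPV 100.0%, NPV 91.7%, accuracy 96.0%
```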
Indoor scene recognition through object detection | Scene recognition is a highly valuable perceptual ability for an indoor mobile robot, however, current approaches for scene recognition present a significant drop in performance for the case of indoor scenes. We believe that this can be explained by the high appearance variability of indoor environments. This stresses the need to include high-level semantic information in the recognition process. In this work we propose a new approach for indoor scene recognition based on a generative probabilistic hierarchical model that uses common objects as an intermediate semantic representation. Under this model, we use object classifiers to associate low-level visual features to objects, and at the same time, we use contextual relations to associate objects to scenes. As a further contribution, we improve the performance of current state-of-the-art category-level object classifiers by including geometrical information obtained from a 3D range sensor that facilitates the implementation of a focus of attention mechanism within a Monte Carlo sampling scheme. We test our approach using real data, showing significant advantages with respect to previous state-of-the-art methods. |
Morphometric changes in the reward system of Parkinson’s disease patients with impulse control disorders | Impulse control disorders (ICDs) occur in a subset of patients with Parkinson’s disease (PD) who are receiving dopamine replacement therapy. In this study, we aimed to investigate structural abnormalities within the mesocortical and limbic cortices and subcortical structures in PD patients with ICDs. We studied 18 PD patients with ICDs, 18 PD patients without ICDs and a group of 24 age and sex-matched healthy controls. Cortical thickness (CTh) and subcortical nuclei volume analyses were carried out using the automated surface-based analysis package FreeSurfer (version 5.3.0). We found significant differences in MRI measures between the three groups. There was volume loss in the nucleus accumbens of both PD patients with ICDs and without ICDs compared to the control group. In addition, PD patients with ICDs showed significant atrophy in caudate, hippocampus and amygdala compared to the group of healthy controls. PD patients with ICDs had significant increased cortical thickness in rostral anterior cingulate cortex and frontal pole compared to PD patients without ICDs. Cortical thickness in rostral anterior cingulate and frontal pole was increased in PD patients with ICDs compared to the control group, but the differences failed to reach corrected levels of statistical significance. PD patients with ICDs showed increased cortical thickness in medial prefrontal regions. We speculate that these findings reflect either a pre-existing neural trait vulnerability to impulsivity or the expression of a maladaptive synaptic plasticity under non-physiological dopaminergic stimulation. |
Association of Variants in Candidate Genes with Lipid Profiles in Women with Early Breast Cancer on Adjuvant Aromatase Inhibitor Therapy. | PURPOSE
Aromatase inhibitors can exert unfavorable effects on lipid profiles; however, previous studies have reported inconsistent results. We describe the association of single-nucleotide polymorphisms (SNP) in candidate genes with lipid profiles in women treated with adjuvant aromatase inhibitors.
EXPERIMENTAL DESIGN
We conducted a prospective observational study to test the associations of SNPs in candidate genes in the estrogen signaling and aromatase inhibitor metabolism pathways with fasting lipid profiles during the first 3 months of aromatase inhibitor therapy in postmenopausal women with early breast cancer randomized to adjuvant letrozole or exemestane. We performed genetic association analyses and multivariable linear regressions using dominant, recessive, and additive models.
RESULTS
A total of 303 women had complete genetic and lipid data and were evaluable for analysis. In letrozole-treated patients, SNPs in CYP19A1, including rs4646, rs10046, rs700518, rs749292, rs2289106, rs3759811, and rs4775936, were significantly associated with decreases in triglycerides ranging from 20.2 mg/dL to 39.3 mg/dL (P < 0.00053) and with variable changes in high-density lipoprotein cholesterol (HDL-C), ranging from decreases of 4.2 mg/dL to increases of 9.8 mg/dL (P < 0.00053).
CONCLUSIONS
Variants in CYP19A1 are associated with decreases in triglycerides and variable changes in HDL-C in postmenopausal women on adjuvant aromatase inhibitors. Future studies are needed to validate these findings, and to identify breast cancer survivors who are at higher risk for cardiovascular disease with aromatase inhibitor therapy. |
Learning to Laugh (automatically): Computational Models for Humor Recognition | Humor is one of the most interesting and puzzling aspects of human behavior. Despite the attention it has received in fields such as philosophy, linguistics, and psychology, there have been only few attempts to create computational models for humor recognition or generation. In this article, we bring empirical evidence that computational approaches can be successfully applied to the task of humor recognition. Through experiments performed on very large data sets, we show that automatic classification techniques can be effectively used to distinguish between humorous and non-humorous texts, with significant improvements observed over a priori known baselines. |
Selective Serotonin Reuptake Inhibitors for the Treatment of Hypersensitive Esophagus: A Randomized, Double-Blind, Placebo-Controlled Study | OBJECTIVES:Ambulatory 24-h pH–impedance monitoring can be used to assess the relationship of persistent symptoms and reflux episodes, despite proton pump inhibitor (PPI) therapy. Using this technique, we aimed to identify patients with hypersensitive esophagus and evaluate the effect of selective serotonin reuptake inhibitors (SSRIs) on their symptoms.METHODS:Patients with normal endoscopy and typical reflux symptoms (heartburn, chest pain, and regurgitation), despite PPI therapy twice daily, underwent 24-h pH–impedance monitoring. Distal esophageal acid exposure (% time pH <4) was measured and reflux episodes were classified into acid or non-acid. A positive symptom index (SI) was declared if at least half of the symptom events were preceded by reflux episodes. Patients with a normal distal esophageal acid exposure time, but with a positive SI were classified as having hypersensitive esophagus and were randomized to receive citalopram 20 mg or placebo once daily for 6 months.RESULTS:A total of 252 patients (150 females (59.5%); mean age 55 (range 18–75) years) underwent 24-h pH–impedance monitoring. Two hundred and nineteen patients (86.9%) recorded symptoms during the study day, while 105 (47.9%) of those had a positive SI (22 (20.95%) with acid, 5 (4.76%) with both acid and non-acid, and 78 (74.29%) with non-acid reflux). Among those 105 patients, 75 (71.4%) had normal distal esophageal acid exposure time and were randomized to receive citalopram 20 mg (group A, n=39) or placebo (group B, n=36). At the end of the follow-up period, 15 out of the 39 patients of group A (38.5%) and 24 out of the 36 patients of group B (66.7%) continue to report reflux symptoms (P=0.021).CONCLUSIONS:Treatment with SSRIs is effective in a select group of patients with hypersensitive esophagus. |
Relationships between dietary intakes of children and their parents: a cross-sectional, secondary analysis of families participating in the Family Diet Quality Study. | BACKGROUND
Overweight and obesity are common among Australian children. Current evidence related to parental influence on child dietary intake is conflicting, and is particularly limited in terms of which parent exerts the stronger influence. The present study aimed to assess mother-father and parent-child dietary relationships and to identify which parent-child relationship is stronger.
METHODS
A cross-sectional analysis was performed of dietary intake data from 66 families with one parent and one child aged 8-12 years who were participating in the Family Diet Quality Study, in the Hunter and Forster regions of New South Wales, Australia. Dietary intakes were assessed using adult and child specific, validated semi-quantitative 120-item food frequency questionnaires. Diet quality and variety subscores were assessed using the Australian Recommended Food Scores for adults and children/adolescents. Pearson's correlations were used to assess dietary relationships between mother-father, father-child and mother-child dyads.
RESULTS
Weak-to-moderate correlations were found between mother-child dyads for components of dietary intake (r = 0.27-0.47). Similarly, for father-child dyads, predominantly weak-to-moderate correlations were found (r = 0.01-0.52). Variety of fruit intake was the most strongly correlated in both parent-child dyads, with the weakest relationships found for fibre (g 1000 kJ(-1) ) in father-child and percentage energy from total fats for mother-child dyads. Mother-father dyads demonstrated mostly moderate-to-strong correlations (r = 0.13-0.73), with scores for condiments showing the weakest relationship and vegetables the strongest. For all dyads, strong correlations were observed for overall diet quality (r = 0.50-0.59).
CONCLUSIONS
Parent-child dietary intake is significantly related, but the relationship differs for mothers versus fathers. Further research is required to examine whether differing dietary components should be targeted for mothers versus fathers in interventions aiming to improve family dietary patterns. |
The growth and some properties of doped YAlO3 single crystals | The growth of perfect YAP crystals doped with Nd, Cr, and Ce in a resistance furnace, under reducing conditions and high vacuum, using a Mo crucible, is described. Growth conditions that decrease crystal decomposition, iron content, and twinning were found. The distribution coefficients of the dopants and some spectral and laser generation properties of the crystals were measured. |
Integrating SOM and fuzzy k-means clustering for customer classification in personalized recommendation system for non-text based transactional data | The world of e-commerce is reshaping marketing strategies based on the analysis of e-commerce data. Huge amounts of data are being collected and can be analyzed for discoveries that may serve as guidance for people who share the same interests but lack experience. Indeed, recommendation systems are evolving from a mere novelty into an essential business strategy tool. Many large e-commerce web sites already incorporate recommendation systems to provide a customer-friendly environment by helping customers in their decision-making process. A recommendation system learns from customers' behavior patterns and recommends the most valuable of the available alternative choices. In this paper, we developed a two-stage algorithm using a self-organizing map (SOM) and fuzzy k-means with an improved distance function to classify users into clusters, so that users who mostly share common interests fall into the same cluster. Results from the combination of SOM and fuzzy k-means revealed better accuracy in identifying user-related classes or clusters. We validated our results using various datasets to check the accuracy of the employed clustering approach. The generated groups of users form the domain for transactional datasets used to find the most valuable products for customers. |
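A hedged sketch of the fuzzy k-means membership step referenced above, using the standard inverse-distance update with plain Euclidean distance (the paper's improved distance function would replace `math.dist` here); names are illustrative.

```python
import math

def fuzzy_memberships(point, centers, m=2.0):
    """Fuzzy k-means membership of one point to each cluster center.
    u_k = 1 / sum_j (d_k / d_j)^(2/(m-1)); memberships always sum to 1.
    m > 1 is the fuzzifier: larger m gives softer assignments."""
    dists = [math.dist(point, c) for c in centers]
    if any(d == 0 for d in dists):                  # point coincides with a center
        return [1.0 if d == 0 else 0.0 for d in dists]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((dk / dj) ** p for dj in dists) for dk in dists]

# A point twice as close to the first center gets the larger membership:
u = fuzzy_memberships((1.0, 1.0), [(0.0, 0.0), (3.0, 3.0)])
```

In a two-stage scheme of this kind, the SOM would first supply the initial cluster prototypes; the fuzzy update above then refines the soft assignments.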
Coseismic Deformation from the 1999 Mw 7.1 Hector Mine, California, Earthquake as Inferred from InSAR and GPS Observations | We use interferometric synthetic aperture radar (InSAR) and Global Positioning System (GPS) observations to investigate static deformation due to the 1999 Mw 7.1 Hector Mine earthquake, which occurred in the eastern California shear zone. Interferometric decorrelation, phase, and azimuth offset measurements indicate regions of surface and near-surface slip, which we use to constrain the geometry of surface rupture. The inferred geometry is spatially complex, with multiple strands. The southern third of the rupture zone consists of three subparallel segments extending about 20 km in length in a N45°W direction. The central segment is the simplest, with a single strand crossing the Bullion Mountains and a strike of N10°W. The northern third of the rupture zone is characterized by multiple splays, with directions subparallel to the strikes in the southern and central sections. The average strike for the entire rupture is about N30°W. The interferograms indicate significant along-strike variations in strain which are consistent with variations in the ground-based slip measurements. Using a variable resolution data sampling routine to reduce the computational burden, we invert the InSAR and GPS data for the fault geometry and distribution of slip. We compare results from assuming an elastic half-space and a layered elastic space. Results from these two elastic models are similar, although the layered-space model predicts more slip at depth than does the half-space model. The layered model predicts a maximum coseismic slip of more than 5 m at a depth of 3 to 6 km. Contrary to preliminary reports, the northern part of the Hector Mine rupture accommodates the maximum slip. Our model predictions for the surface fault offset and total seismic moment agree with both field mapping results and recent seismic models.
The inferred shallow slip deficit is enigmatic and may suggest that distributed inelastic yielding occurred in the uppermost few kilometers of the crust during or soon after the earthquake. |
Fundamentals, processes and applications of high-permittivity polymer–matrix composites | There is an increasing need for high-permittivity (high-k) materials due to the rapid development of the electrical/electronic industry. It is well known that single-composition materials cannot meet the high-k need. The combination of dissimilar materials is expected to be an effective way to fabricate composites with high k, especially high-k polymer–matrix composites (PMC). This review paper focuses on the important role and challenges of high-k PMC in new technologies. The use of different materials in the PMC creates interfaces which have a crucial effect on the final dielectric properties. Therefore it is necessary to understand the dielectric properties and processing needs before high-k PMC can be made and applied commercially. Theoretical models for increasing dielectric permittivity are summarized and are used to explain the behavior of dielectric properties. The effects of fillers, fabrication processes and the nature of the interfaces between fillers and polymers are discussed. Potential applications of high-k PMC are also discussed. |
Binaural interaction in auditory evoked potentials: Brainstem, middle- and long-latency components | Binaural interaction occurs in auditory evoked potentials when the sum of the monaural auditory evoked potentials is not equivalent to the binaural auditory evoked potentials. Binaural interaction of the early- (0-10 ms), middle- (10-50 ms) and long-latency (50-200 ms) auditory evoked potentials was studied in 17 normal young adults. For the early components, binaural interaction was maximal at 7.35 ms, accounting for a reduction of 21% of the amplitude of the binaural evoked potential. For the middle-latency auditory evoked potentials, binaural interaction was maximal at 39.6 ms, accounting for a reduction of 48% of the binaural evoked potential. For the long-latency auditory evoked potentials, binaural interaction was maximal at 145 ms, accounting for a reduction of 38% of the binaural evoked potential. In all of the auditory evoked potentials, binaural interaction was long lasting around the maxima: the binaural interaction component extends for several milliseconds in the brainstem and for tens of milliseconds in the middle- and long-latency components. That binaural interaction takes the form of an amplitude reduction of the binaural evoked potential relative to the sum of the monaural responses suggests that inhibitory processes are represented in binaural interaction measured with evoked potentials. Binaural processing in the auditory pathway is maximal in the time domain of the middle-latency components, reflecting activity in the thalamo-cortical portions of the auditory pathways. |
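The definition in the first sentence of this abstract, binaural interaction as the difference between the binaural response and the sum of the monaural responses, can be sketched numerically; the waveform values below are illustrative, not real data.

```python
def binaural_interaction(left, right, binaural):
    """BIC(t) = binaural(t) - (left(t) + right(t)); nonzero values indicate
    binaural interaction, negative values an amplitude reduction."""
    return [b - (l + r) for l, r, b in zip(left, right, binaural)]

def percent_reduction(left, right, binaural, t):
    """Amplitude reduction of the binaural response at sample t,
    relative to the summed monaural responses."""
    summed = left[t] + right[t]
    return 100.0 * (summed - binaural[t]) / summed

# Illustrative amplitudes (arbitrary units), not recorded data:
left     = [0.0, 0.5, 1.0, 0.5]
right    = [0.0, 0.5, 1.0, 0.5]
binaural = [0.0, 0.9, 1.58, 0.9]

bic = binaural_interaction(left, right, binaural)
# at sample 2 the binaural response is 21% smaller than the monaural sum
```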
The mathematical work of S. C. Kleene | §1. The origins of recursion theory. In dedicating a book to Steve Kleene, I referred to him as the person who made recursion theory into a theory. Recursion theory was begun by Kleene's teacher at Princeton, Alonzo Church, who first defined the class of recursive functions; first maintained that this class was the class of computable functions (a claim which has come to be known as Church's Thesis); and first used this fact to solve negatively some classical problems on the existence of algorithms. However, it was Kleene who, in his thesis and in his subsequent attempts to convince himself of Church's Thesis, developed a general theory of the behavior of the recursive functions. He continued to develop this theory and extend it to new situations throughout his mathematical career. Indeed, all of the research which he did had a close relationship to recursive functions. Church's Thesis arose in an accidental way. In his investigations of a system of logic which he had invented, Church became interested in a class of functions which he called the λ-definable functions. Initially, Church knew that the successor function and the addition function were λ-definable, but not much else. During 1932, Kleene gradually showed that this class of functions was quite extensive; and these results became an important part of his thesis 1935a (completed in June of 1933).
Traffic Signal Control Based on Adaptive Neuro-Fuzzy Inference | An adaptive neuro-fuzzy inference system is developed and tested for traffic signal control. From a given input data set, the developed adaptive neuro-fuzzy inference system can derive the membership functions and corresponding rules on its own, making the design process easier and more reliable compared to standard fuzzy logic controllers. Among the useful inputs of fuzzy signal control systems, the gap between two vehicles, delay at intersections, vehicle density, flow rate and queue length are often used. Considering practical applicability, the average vehicle inflow rate of each lane is used in this work as the input to model the adaptive neuro-fuzzy signal control system. In order to meet the objective of reducing the waiting time of vehicles at the signal, the combined delay of vehicles within one signal cycle is minimized using a simple mathematical optimization method. The performance of the control system was tested further by developing an event-driven traffic simulation program in Matlab under the Windows environment. As expected, the neuro-fuzzy logic controller performed better than the fixed-time controller due to its real-time adaptability. The neuro-fuzzy control system allows more vehicles to pass the junction during congestion and fewer vehicles when the flow rate is low. In particular, the performance of the developed system was superior when there were abrupt changes in traffic flow rates.
Channel Adaptive One Hop Broadcasting for VANETs | One-hop broadcasting is the predominant form of network traffic in VANETs. Exchanging status information by broadcasting among vehicles enhances vehicular active safety. Since there is no MAC-layer broadcast recovery for 802.11-based VANETs, efforts should be made towards more robust and effective transmission of such safety-related information. In this paper, a channel adaptive broadcasting method is proposed. It relies solely on channel condition information available at each vehicle, obtained through standard-supported sequence number mechanisms. The proposed method is fully compatible with 802.11 and introduces no communication overhead. Simulation studies show that it outperforms standard broadcasting in terms of reception rate and channel utilization.
Deployment of a tensegrity footbridge | Deployable structures are structures that transform their shape from a compact state to an extended in-service position. Structures composed of tension elements that surround compression elements in equilibrium are called tensegrity structures. Tensegrities are good candidates for deployable structures since shape transformations occur by changing the lengths of elements at low energy cost. Although the tensegrity concept was first introduced in 1948, few full-scale tensegrity-based structures have been built. Previous work has demonstrated that a tensegrity-ring topology is potentially a viable system for a deployable footbridge. This paper describes a study of a near-full-scale deployable tensegrity footbridge. The study has been carried out both numerically and experimentally. The deployment of two modules (one half of the footbridge) is achieved through changing the length of five active cables. Deployment is aided by energy stored in low-stiffness spring elements. Self-weight significantly influences deployment, and deployment is not reproducible using the same sequence of cable-length changes. Active control is thus required for accurate positioning of the front nodes in order to complete deployment through joining both sides at center span. Additionally, testing and numerical analyses have revealed that the deployment behavior of the structure is non-linear with respect to cable-length changes. Finally, modelling the behavior of the structure cannot be done accurately using friction-free and dimensionless joints. Similar deployable tensegrity structures of class two and higher are expected to require simulation models that include joint dimensions for accurate prediction of nodal positions. (Veuve, N., Dalil Safaei, S., and Smith, I.F.C. (2015). Deployment of a Tensegrity Footbridge. Journal of Structural Engineering, 141(11), 04015021.)
Introduction. Tensegrity structures are structures composed of tension elements (strings, tendons or cables) surrounding compression elements (bars or struts) in equilibrium (Motro et al. 2003). Various definitions of tensegrity structures exist; a concise definition has been proposed by Skelton et al. (2001), who defined a class K tensegrity structure as a stable equilibrium of axially loaded elements with a maximum of K compressive members at the nodes. Tensegrity structures are identified by the number of infinitesimal mechanisms and states of self-stress (Pellegrino et al. 1986). Pellegrino et al. (1986) introduced a method for computing the number of infinitesimal mechanisms and self-stress states by singular value decomposition (SVD) of the equilibrium matrix. The design process of tensegrity structures includes simultaneous identification of geometry, topology, element axial stiffness, actuator position, and self-stressed state. Tibert and Pellegrino (2003) reviewed form-finding methods. Several studies have derived the tangent stiffness matrix for a pre-stressed framework (Argyris and Scharpf 1972; Murakami 2001; Guest 2006). Skelton et al. (2014) proposed a topology optimization method for minimum-mass design of tensegrity bridges and showed that the optimized design has a multi-scale character. Tensegrities are attractive for deployment and active control since their shape can be changed by changing the lengths of elements. In addition to carrying load, the elements of tensegrity structures can act as actuators and sensors (Skelton et al. 2001). In several studies (e.g., Furuya 1992; Tibert 2002; Furuya et al. 2006; Dalil Safaei et al. 2013), tensegrities have been proposed as deployable booms for space missions. Zolesi et al. (2012) successfully demonstrated deployment repeatability of a large tensegrity reflector antenna through a 1/4-scale prototype equipped with a gravity-compensation system.
Tensegrities are good candidates for active structures since they can change their shape under changing environments, such as new loading situations. Fest et al. (2004) employed telescopic struts to investigate the active-control behavior of a five-module tensegrity structure. Several studies have examined biomimetic properties of active tensegrity structures (Adam and Smith 2008; Domer 2003; Domer and Smith 2005); these structures were not deployable. Although studies have been carried out on deployment of tensegrity structures, very few near-full-scale deployable tensegrity structures have been built. Physical models are important for exploration, validation and development of ideas; in addition, a physical model should be large enough to allow study of important full-scale challenges. The influence of joint characteristics on deployment of tensegrity structures has not been studied. This is particularly important for structures of class 2 and higher, since a pre-defined sequence of length changes of active cables may not result in a successful deployment due to joint-configuration and friction effects. In addition, no study has investigated the influence of self-weight on deployment. This paper studies the deployment of a clustered-cable-spring tensegrity configuration that was adapted from a design proposal by Motro et al. (2006). Clustered cables are continuous cables that slide over joints located at the ends of struts. Design and analysis of tensegrity footbridges have been studied by several researchers. Tensegrity-ring modules were introduced by Pugh (1976). These modules were studied for deployment and named ring modules by Motro et al. (2006) because of their hollow interior space. Nguyen (2011) and Cevaer et al. (2012) studied the structural behavior of a single pentagonal ring module under the assumption that there was no cable continuity. A near-full-scale tensegrity footbridge has been built to study strategies for deployment. More specifically, the objectives of this paper are as follows: describe an experimental study of a clustered-cable-spring configuration for deployment of a near-full-scale tensegrity footbridge; demonstrate the need for active control for reproducible deployment behavior; evaluate the assumption of dimensionless joints for prediction of deployment behavior; and study the influence of self-weight on deployment. The next sections provide details, results and evaluations related to these goals.
The tensegrity ring footbridge. The footbridge is composed of four identical pentagonal ring modules connected together in a "hollow-rope" system (Rhode-Barbarigos et al. 2010; Bel Hadj Ali et al. 2010; Motro et al. 2006). Each pentagonal ring module in Figure 1 has 15 struts and 30 cables. Tensegrity structures are characterized by the number of self-stress states and infinitesimal mechanisms; analysis of the equilibrium matrix reveals no infinitesimal mechanisms and six independent states of self-stress. Each tensegrity ring module is a class-two tensegrity structure (Skelton et al. 2001). The empty space in the tensegrity ring module is used for footbridge pedestrian traffic. Figure 1: Deployment illustration of the four-module tensegrity footbridge. This paper studies the deployment of one half of a 1/4-scale model of the footbridge (adapted from Rhode-Barbarigos et al. (2010) and Rhode-Barbarigos et al. (2012b)). Rhode-Barbarigos et al. (2010) and Bel Hadj Ali et al. (2010) studied the serviceability of a 16 m span, 6.2 m diameter tensegrity footbridge without considering clustered cables and deployability. Rhode-Barbarigos et al. (2012b) then studied several deployment methods and showed that utilizing springs, inspired by Schenk et al. (2007), and clustered cables reduces the number of actuators required for controlled deployment. Bel Hadj Ali et al. (2011) proposed an analysis method for clustered tensegrity structures through a modified dynamic relaxation algorithm. Clustered cable elements run continuously through nodes; they influence the mechanics of tensegrity structures by reducing the number of kinematic constraints and changing the internal-force distribution of the elements (Moored and Bart-Smith 2009). A 1/4-scale model has been designed, manufactured and assembled in order to study deployment behavior (Figure 2). Each half is composed of 15 springs, 5 continuous cables, 30 struts and 20 cables, and is controlled by five continuous active cables. Each active continuous cable starts from a node connected to the support and ends at the front nodes. Figure 2 shows the continuous actuated cables (active cables) and spring elements. Active cables are connected to the front nodes of the half bridge (Figure 1). The length of each active cable is changed by winding or unwinding the cable on a drum fixed on a moving support (Figure 3). An actuation step, or control command, is the set of length changes of the five active cables applied during one step of deployment. Figure 2: (a) side view and (b) front view, with node numbering, of the near-full-scale tensegrity footbridge. The structural weight of each half is approximately 100 kg. Both ends move in rail-support systems (Figure 3). Struts are made of steel hollow-tube sections with a length of 1.35 m, a diameter of 28 mm and a wall thickness of 1.5 mm. The steel grade of the struts is S355, with a modulus of elasticity of 210 GPa. Cables have a diameter of 4 mm and are made of stainless steel with a modulus of elasticity of 120 GPa. The spring stiffness is 2 kN/m at the support and 2.9 kN/m for the other springs.
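The counting of infinitesimal mechanisms and self-stress states attributed above to Pellegrino et al. (1986) follows from the rank of the equilibrium matrix, obtainable via SVD. A minimal sketch, assuming the equilibrium matrix is assembled with one row per free degree of freedom and one column per element (matrix assembly itself is not shown):

```python
import numpy as np

def count_mechanisms_and_selfstress(A):
    """Given the equilibrium matrix A (free DOFs x elements), return the
    number of infinitesimal mechanisms m and independent self-stress
    states s, following the rank-based count:
        m = n_dof - rank(A),  s = n_elements - rank(A)."""
    n_dof, n_elements = A.shape
    r = np.linalg.matrix_rank(A)  # rank computed internally via SVD
    return n_dof - r, n_elements - r
```

As a sanity check, a single bar pinned at one end with one free node in 3-D gives A of size 3x1 and rank 1, hence two mechanisms and no self-stress state; for the ring module described above, the same count yields zero mechanisms and six self-stress states.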
Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics | Deep learning has greatly improved visual recognition in recent years. However, recent research has shown that there exist many adversarial examples that can negatively impact the performance of such architectures. This paper focuses on detecting those adversarial examples by analyzing whether they come from the same distribution as the normal examples. Instead of directly training a deep neural network to detect adversarial examples, a much simpler approach was proposed based on statistics of the outputs from convolutional layers. A cascade classifier was designed to efficiently detect adversarial examples. Furthermore, although trained on one particular adversarial generating mechanism, the resulting classifier can successfully detect adversarial examples from a completely different mechanism as well. The resulting classifier is non-subdifferentiable, which makes it difficult for adversaries to attack by using the gradient of the classifier. After detecting adversarial examples, we show that many of them can be recovered by simply applying a small average filter to the image. These findings should lead to more insights about the classification mechanisms in deep convolutional neural networks.
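The recovery step mentioned above applies a small average filter to the image. A minimal sketch of such a box filter in plain NumPy (the kernel size k is illustrative; the paper's exact filter parameters are not reproduced here):

```python
import numpy as np

def average_filter(img, k=3):
    """Apply a k x k mean (box) filter to a 2-D image with edge
    padding -- the kind of light smoothing described as recovering
    many detected adversarial examples."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # average the k x k neighborhood centered at (i, j)
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out
```

The intuition is that adversarial perturbations are high-frequency and low-amplitude, so mild low-pass filtering can remove them while leaving the image class-recognizable.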
Inferring Structural Models of Travel Behavior: An Inverse Reinforcement Learning Approach |