title: string (7–239 characters)
abstract: string (7–2.76k characters)
cs, phy, math, stat, quantitative biology, quantitative finance: int64 (0 or 1)

Each record below lists a title, an abstract, and the six subject labels in the order above.
Coherent control of flexural vibrations in dual-nanoweb fibers using phase-modulated two-frequency light
Coherent control of the resonant response in spatially extended optomechanical structures is complicated by the fact that the optical drive is affected by back-action from the generated phonons. Here we report a new approach to coherent control based on stimulated Raman-like scattering, in which the optical pressure can remain unaffected by the induced vibrations even in the regime of strong optomechanical interactions. We experimentally demonstrate coherent control of flexural vibrations simultaneously along the whole length of a dual-nanoweb fiber by imprinting steps in the relative phase between the components of a two-frequency pump signal, the beat frequency being chosen to match a flexural resonance. Furthermore, sequential switching of the relative phase at time intervals shorter than the lifetime of the vibrations reduces their amplitude to a constant value that is fully adjustable by tuning the phase-modulation depth and switching rate. The results may trigger new developments in silicon photonics, since such coherent control uniquely decouples the amplitude of optomechanical oscillations from power-dependent thermal effects and nonlinear optical loss.
0
1
0
0
0
0
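As a rough illustration of the switching mechanism described in this abstract: a toy model of a resonantly driven, damped mode whose drive phase is stepped faster than the mode lifetime. All parameters here are invented for illustration; this is not the authors' optomechanical model.

```python
import cmath
import math

def drive_with_phase_switching(gamma=1.0, F=1.0, dphi=0.0,
                               t_switch=0.2, steps=20000, dt=1e-3):
    """Slowly varying amplitude a(t) of a resonantly driven mode,
    da/dt = -gamma*a + F*exp(i*phi(t)).  Stepping the drive phase by
    dphi at intervals shorter than the lifetime 1/gamma pins |a| well
    below the steady-state value F/gamma."""
    a, phi, t_next = 0j, 0.0, t_switch
    for n in range(steps):
        if n * dt >= t_next:
            phi += dphi
            t_next += t_switch
        a += dt * (-gamma * a + F * cmath.exp(1j * phi))
    return abs(a)

steady = drive_with_phase_switching(dphi=0.0)            # no switching
switched = drive_with_phase_switching(dphi=math.pi / 2)  # phase steps
print(round(steady, 2), switched < steady)  # 1.0 True
```

With no switching the amplitude relaxes to F/gamma; with phase steps every 0.2 lifetimes it settles at a much smaller constant value, mimicking the adjustable amplitude reduction reported in the abstract.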
Static and Dynamic Magnetic Properties of FeMn/Pt Multilayers
Recently we demonstrated the presence of spin-orbit torque in FeMn/Pt multilayers which, in combination with the anisotropy field, is able to rotate the magnetization consecutively from 0° to 360° without any external field. Here, we report an investigation of the static and dynamic magnetic properties of FeMn/Pt multilayers using the combined techniques of magnetometry, ferromagnetic resonance, inverse spin Hall effect and spin Hall magnetoresistance measurements. The FeMn/Pt multilayers were found to exhibit ferromagnetic properties, and the temperature dependence of their saturation magnetization can be fitted well using a phenomenological model that includes a finite distribution of Curie temperatures due to subtle thickness variations across the multilayer samples. The non-uniformity in static magnetic properties is also manifested in the ferromagnetic resonance spectra, which typically exhibit a broad resonance peak. A damping parameter of around 0.106 is derived from the frequency dependence of the ferromagnetic resonance linewidth, which is comparable to values reported for other types of Pt-based multilayers. Clear inverse spin Hall signals and spin Hall magnetoresistance have been observed in all samples below the Curie temperature, corroborating the strong spin-orbit torque effect observed previously.
0
1
0
0
0
0
Ultraproducts of crossed product von Neumann algebras
We study a relationship between the ultraproduct of a crossed product von Neumann algebra and the crossed product of an ultraproduct von Neumann algebra. As an application, the continuous core of an ultraproduct von Neumann algebra is described.
0
0
1
0
0
0
Concept Drift Detection and Adaptation with Hierarchical Hypothesis Testing
A fundamental issue for statistical classification models in a streaming environment is that the joint distribution between predictor and response variables changes over time (a phenomenon known as concept drift), causing their classification performance to deteriorate dramatically. In this paper, we first present a hierarchical hypothesis testing (HHT) framework that can detect and adapt to various types of concept drift (e.g., recurrent or irregular, gradual or abrupt), even in the presence of imbalanced data labels. A novel concept drift detector, Hierarchical Linear Four Rates (HLFR), is then implemented under the HHT framework. By substituting a widely acknowledged retraining scheme with an adaptive training strategy, we further demonstrate that the concept drift adaptation capability of HLFR can be significantly boosted. We also analyze the Type-I and Type-II errors of HLFR theoretically. Experiments on both simulated and real-world datasets illustrate that our methods outperform state-of-the-art methods in terms of detection precision, detection delay, and adaptability across different concept drift types.
1
0
0
1
0
0
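The hypothesis-testing viewpoint of this abstract can be illustrated with a minimal two-window test on streaming error rates. This is a generic sketch, not the authors' HLFR detector; the threshold and window contents are arbitrary.

```python
import math

def drift_detected(errors_ref, errors_recent, alpha_z=3.0):
    """Flag concept drift when the recent error rate is significantly
    above the reference error rate (one-sided two-proportion z-test)."""
    n1, n2 = len(errors_ref), len(errors_recent)
    p1 = sum(errors_ref) / n1
    p2 = sum(errors_recent) / n2
    p = (sum(errors_ref) + sum(errors_recent)) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    if se == 0:
        return False
    return (p2 - p1) / se > alpha_z

# Stable stream: ~10% errors in both windows -> no drift flagged.
ref = [1 if i % 10 == 0 else 0 for i in range(200)]
recent = [1 if i % 10 == 0 else 0 for i in range(100)]
print(drift_detected(ref, recent))    # False

# After drift the error rate jumps to ~50% -> drift flagged.
shifted = [1 if i % 2 == 0 else 0 for i in range(100)]
print(drift_detected(ref, shifted))   # True
```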
Interval-based Prediction Uncertainty Bound Computation in Learning with Missing Values
The problem of machine learning with missing values is common in many areas. A simple approach is to first construct a dataset without missing values simply by discarding instances with missing entries or by imputing a fixed value for each missing entry, and then train a prediction model with the new dataset. A drawback of this naive approach is that the uncertainty in the missing entries is not properly incorporated into the prediction. In order to evaluate prediction uncertainty, the multiple imputation (MI) approach has been studied, but the performance of MI is sensitive to the choice of the probabilistic model of the true values in the missing entries, and the computational cost of MI is high because multiple models must be trained. In this paper, we propose an alternative approach called the Interval-based Prediction Uncertainty Bounding (IPUB) method. The IPUB method represents the uncertainties due to missing entries as intervals, and efficiently computes the lower and upper bounds of the prediction results when all possible training sets constructed by imputing arbitrary values in the intervals are considered. The IPUB method can be applied to a wide class of convex learning algorithms, including penalized least-squares regression, support vector machine (SVM), and logistic regression. We demonstrate the advantages of the IPUB method by comparing it with an existing method in numerical experiments with benchmark datasets.
0
0
0
1
0
0
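The interval idea at the heart of this abstract can be illustrated for a fixed linear model, where simple endpoint arithmetic bounds the prediction. This is a toy sketch only: the actual IPUB method bounds predictions over all training sets imputable from the intervals, which is substantially harder.

```python
def interval_predict(w, b, x_intervals):
    """Lower/upper bound of w.x + b when each feature x_i lies in an
    interval [lo_i, hi_i]: a positive weight takes lo for the lower
    bound and hi for the upper bound, and vice versa."""
    lo = hi = b
    for wi, (xlo, xhi) in zip(w, x_intervals):
        if wi >= 0:
            lo += wi * xlo
            hi += wi * xhi
        else:
            lo += wi * xhi
            hi += wi * xlo
    return lo, hi

# Feature 2 is missing and is imputed as the interval [0, 10].
w, b = [2.0, -1.0], 0.5
bounds = interval_predict(w, b, [(3.0, 3.0), (0.0, 10.0)])
print(bounds)  # (-3.5, 6.5)
```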
Methods for Interpreting and Understanding Deep Neural Networks
This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. It introduces some recently proposed techniques of interpretation, along with theory, tricks and recommendations, to make the most efficient use of these techniques on real data. It also discusses a number of practical applications.
1
0
0
1
0
0
Passive Classification of Source Printer using Text-line-level Geometric Distortion Signatures from Scanned Images of Printed Documents
In this digital era, one thing that still holds to convention is the printed document. Printed documents find use in many critical domains such as contract papers, legal tenders and proof-of-identity documents. As more advanced printing, scanning and image editing techniques become available, forgeries of these legal tenders pose a serious threat. The ability to easily and reliably identify the source printer of a printed document can greatly help in reducing this menace. During the printing process, the printer hardware introduces certain distortions in printed characters' locations and shapes which are invisible to the naked eye. These distortions are referred to as geometric distortions; their profile (or signature) is generally unique to each printer and can be used for printer classification. This paper proposes a set of features for characterizing text-line-level geometric distortions, referred to as geometric distortion signatures, and presents a novel system that uses them to identify the origin of a printed document. Detailed experiments performed on a set of thirteen printers demonstrate that the proposed system achieves state-of-the-art performance and gives much higher accuracy under a small training-size constraint. For four training and six test pages of three different fonts, the proposed method gives 99\% classification accuracy.
1
0
0
0
0
0
The duration of load effect in lumber as stochastic degradation
This paper proposes a gamma process for modelling the damage that accumulates over time in the lumber used in structural engineering applications when stress is applied. The model separates the stochastic processes representing features internal to the piece of lumber from those representing external forces due to applied dead and live loads. The model applies those external forces through a population-level function designed for time-varying loads. The application of this type of model, which is standard in reliability analysis, is novel in this context, which has been dominated by accumulated damage models (ADMs) for more than half a century. The proposed model is compared with one of the traditional ADMs. Our statistical results, based on a Bayesian analysis of experimental data, highlight the limitations of using accelerated testing data to assess long-term reliability, as seen in the wide posterior intervals. This suggests the need for more comprehensive testing in future applications, or for encoding appropriate expert knowledge in the priors used for Bayesian analysis.
0
0
0
1
0
0
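A gamma process like the one proposed in this abstract is straightforward to simulate: accumulated damage is a sum of independent, non-negative gamma-distributed increments, hence non-decreasing. The parameters below are illustrative, not fitted values from the paper.

```python
import numpy as np

def gamma_damage_paths(t_grid, shape_rate=0.5, scale=0.2,
                       n_paths=3, seed=0):
    """Simulate a stationary gamma process: over an interval of length
    dt the damage increment is Gamma(shape_rate*dt, scale), and the
    cumulative sum gives a monotone damage path."""
    rng = np.random.default_rng(seed)
    dt = np.diff(t_grid)
    inc = rng.gamma(shape_rate * dt, scale, size=(n_paths, len(dt)))
    return np.concatenate([np.zeros((n_paths, 1)),
                           inc.cumsum(axis=1)], axis=1)

t = np.linspace(0.0, 10.0, 101)
paths = gamma_damage_paths(t)
print(paths.shape)  # (3, 101)
```

Monotonicity is what distinguishes this degradation view from a generic noise model: damage never heals.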
Partial Bridging of Vaccine Efficacy to New Populations
Suppose one has data from one or more completed vaccine efficacy trials and wishes to estimate the efficacy in a new setting. Often logistical or ethical considerations make running another efficacy trial impossible. Fortunately, if there is a biomarker that is the primary modifier of efficacy, then the biomarker-conditional efficacy may be identical in the completed trials and the new setting, or at least informative enough to meaningfully bound this quantity. Given a sample of this biomarker from the new population, we might hope we can bridge the results of the completed trials to estimate the vaccine efficacy in this new population. Unfortunately, even knowing the true conditional efficacy in the new population fails to identify the marginal efficacy due to the unknown conditional unvaccinated risk. We define a curve that partially identifies (lower bounds) the marginal efficacy in the new population as a function of the population's marginal unvaccinated risk, under the assumption that one can identify bounds on the conditional unvaccinated risk in the new population. Interpreting the curve only requires identifying plausible regions of the marginal unvaccinated risk in the new population. We present a nonparametric estimator of this curve and develop valid lower confidence bounds that concentrate at a parametric rate. We use vaccine terminology throughout, but the results apply to general binary interventions and bounded outcomes.
0
0
0
1
0
0
An induced map between rationalized classifying spaces for fibrations
Let $Baut_1X$ be the Dold-Lashof classifying space of orientable fibrations with fiber $X$. For a rationally weakly trivial map $f:X\to Y$, our strictly induced map $a_f: (Baut_1X)_0\to (Baut_1Y)_0$ induces a natural map from an $X_0$-fibration to a $Y_0$-fibration. It is given by a map between the differential graded Lie algebras of derivations of Sullivan models. We note some conditions under which the map $a_f$ admits a section, and note some relations with the Halperin conjecture. Furthermore, we give the obstruction class for a lifting of a classifying map $h: B\to (Baut_1Y)_0$ and apply it to liftings of $G$-actions on $Y$ for a compact connected Lie group $G$ in the case $B=BG$, and to estimates of rational toral ranks of the form $r_0(Y)\leq r_0(X)$.
0
0
1
0
0
0
The Statistical Recurrent Unit
Sophisticated gated recurrent neural network architectures like LSTMs and GRUs have been shown to be highly effective in a myriad of applications. We develop an un-gated unit, the statistical recurrent unit (SRU), that is able to learn long-term dependencies in data by keeping only moving averages of statistics. The SRU's architecture is simple, un-gated, and contains a comparable number of parameters to LSTMs; yet, SRUs perform favorably against more sophisticated LSTM and GRU alternatives, often outperforming one or both in various tasks. We show the efficacy of SRUs compared to LSTMs and GRUs in an unbiased manner by optimizing each architecture's hyperparameters in a Bayesian optimization scheme for both synthetic and real-world tasks.
1
0
0
1
0
0
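The core SRU idea, summarizing a sequence with exponential moving averages of statistics at several decay rates, can be sketched as follows. This is a simplification: the real unit wraps learned projections and nonlinearities around these averages.

```python
import numpy as np

def sru_summary(xs, alphas=(0.0, 0.5, 0.9, 0.99)):
    """Maintain one exponential moving average of the inputs per decay
    rate alpha; concatenating them yields a multi-scale summary of the
    sequence (the un-gated memory of the SRU).  alpha = 0 remembers
    only the last step; alpha near 1 averages over the whole past."""
    d = xs.shape[1]
    mu = np.zeros((len(alphas), d))
    for x in xs:                      # one update per time step
        for k, a in enumerate(alphas):
            mu[k] = a * mu[k] + (1 - a) * x
    return mu.ravel()                 # multi-scale feature vector

rng = np.random.default_rng(0)
xs = rng.normal(size=(100, 3))
feats = sru_summary(xs)
print(feats.shape)  # (12,)
```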
Many-body localization in the droplet spectrum of the random XXZ quantum spin chain
We study many-body localization properties of the disordered XXZ spin chain in the Ising phase. Disorder is introduced via a random magnetic field in the $z$-direction. We prove a strong form of dynamical exponential clustering for eigenstates in the droplet spectrum: For any pair of local observables separated by a distance $\ell$, the sum of the associated correlators over these states decays exponentially in $\ell$, in expectation. This exponential clustering persists under the time evolution in the droplet spectrum. Our result applies to the large disorder regime as well as to the strong Ising phase at fixed disorder, with bounds independent of the support of the observables.
0
0
1
0
0
0
FastDeepIoT: Towards Understanding and Optimizing Neural Network Execution Time on Mobile and Embedded Devices
Deep neural networks show great potential as solutions to many sensing application problems, but their excessive resource demand slows down execution time, posing a serious impediment to deployment on low-end devices. To address this challenge, recent literature has focused on compressing neural network size to improve performance. We show that changing neural network size does not proportionally affect performance attributes of interest, such as execution time. Rather, extreme run-time nonlinearities exist over the network configuration space. Hence, we propose a novel framework, called FastDeepIoT, that uncovers the non-linear relation between neural network structure and execution time, then exploits that understanding to find network configurations that significantly improve the trade-off between execution time and accuracy on mobile and embedded devices. FastDeepIoT makes two key contributions. First, FastDeepIoT automatically learns an accurate and highly interpretable execution time model for deep neural networks on the target device. This is done without prior knowledge of either the hardware specifications or the detailed implementation of the used deep learning library. Second, FastDeepIoT informs a compression algorithm how to minimize execution time on the profiled device without impacting accuracy. We evaluate FastDeepIoT using three different sensing-related tasks on two mobile devices: Nexus 5 and Galaxy Nexus. FastDeepIoT further reduces the neural network execution time by $48\%$ to $78\%$ and energy consumption by $37\%$ to $69\%$ compared with the state-of-the-art compression algorithms.
1
0
0
0
0
0
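An "interpretable execution-time model" of the general kind described here can be sketched as an ordinary least-squares fit from hand-picked network features to measured latency. The profiling data below is synthetic and the feature set invented; FastDeepIoT learns its model from real on-device measurements.

```python
import numpy as np

# Synthetic "profiling" data: latency is a linear function of
# multiply-adds and depth plus noise (coefficients chosen arbitrarily).
rng = np.random.default_rng(0)
macs = rng.uniform(1, 100, 40)          # millions of multiply-adds
layers = rng.integers(2, 20, 40)        # network depth
latency = 0.8 * macs + 3.0 * layers + 5.0 + rng.normal(0, 0.5, 40)

# Fit latency ~ a*macs + b*layers + c by least squares; the recovered
# coefficients are directly readable, which is the sense in which such
# a model is "interpretable".
X = np.column_stack([macs, layers, np.ones_like(macs)])
coef, *_ = np.linalg.lstsq(X, latency, rcond=None)
print(np.round(coef, 1))  # approximately [0.8, 3.0, 5.0]
```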
Robust MPC for tracking of nonholonomic robots with additive disturbances
In this paper, two robust model predictive control (MPC) schemes are proposed for tracking control of nonholonomic systems with bounded disturbances: tube-MPC and nominal robust MPC (NRMPC). In tube-MPC, the control signal consists of a control action and a nonlinear feedback law based on the deviation of the actual states from the states of a nominal system. This keeps the actual trajectory within a tube centered along the optimal trajectory of the nominal system. Recursive feasibility and input-to-state stability are established, and the constraints are ensured by tightening the input domain and the terminal region. In NRMPC, an optimal control sequence is obtained by solving an optimization problem based on the current state, and the first portion of this sequence is applied to the real system in an open-loop manner during each sampling period. The state of the nominal system model is updated by the actual state at each step, which provides additional feedback. By introducing a robust state constraint and tightening the terminal region, recursive feasibility and input-to-state stability are guaranteed. Simulation results demonstrate the effectiveness of both proposed strategies.
1
0
0
0
0
0
Investigation of the use of Hidden Markov Models in automatic transcription of music
Hidden Markov Models (HMMs) are a ubiquitous tool for modelling time series data, and have been widely used in two main tasks of Automatic Music Transcription (AMT): note segmentation, i.e. identifying the played notes after a multi-pitch estimation, and sequential post-processing, i.e. correcting note segmentation using training data. In this paper, we employ the multi-pitch estimation method called Probabilistic Latent Component Analysis (PLCA), and develop AMT systems by integrating different HMM-based modules in this framework. For note segmentation, we use two different two-state on/off HMMs, including a higher-order one for duration modeling. For sequential post-processing, we focus on a musicological modeling of polyphonic harmonic transitions, using first- and second-order HMMs whose states are defined through candidate note mixtures. These different PLCA-plus-HMM systems have been evaluated comparatively on two different instrument repertoires, namely the piano (using the MAPS database) and the marovany zither. Our results show that the use of HMMs can bring noticeable improvements to transcription results, depending on the instrument repertoire.
1
0
0
1
0
0
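A two-state on/off HMM of the kind used for note segmentation here can be decoded with the standard Viterbi algorithm. In this sketch (all probabilities invented), sticky transitions smooth noisy frame-wise pitch evidence into one contiguous note segment.

```python
import math

def viterbi(obs_loglik, log_trans, log_init):
    """Most probable HMM state sequence given per-frame observation
    log-likelihoods (standard Viterbi decoding)."""
    n_states = len(log_init)
    T = len(obs_loglik)
    dp = [[log_init[s] + obs_loglik[0][s] for s in range(n_states)]]
    back = [[0] * n_states]
    for t in range(1, T):
        dp.append([0.0] * n_states)
        back.append([0] * n_states)
        for s in range(n_states):
            prev = max(range(n_states),
                       key=lambda p: dp[t - 1][p] + log_trans[p][s])
            back[t][s] = prev
            dp[t][s] = dp[t - 1][prev] + log_trans[prev][s] + obs_loglik[t][s]
    state = max(range(n_states), key=lambda s: dp[T - 1][s])
    path = [state]
    for t in range(T - 1, 0, -1):
        state = back[t][state]
        path.append(state)
    return path[::-1]

# States: 0 = note off, 1 = note on.  Sticky transitions (0.9 stay
# probability) turn noisy frame evidence into a contiguous note.
lg = math.log
loglik = [[lg(0.9), lg(0.1)], [lg(0.2), lg(0.8)], [lg(0.1), lg(0.9)],
          [lg(0.2), lg(0.8)], [lg(0.95), lg(0.05)]]
trans = [[lg(0.9), lg(0.1)], [lg(0.1), lg(0.9)]]
init = [lg(0.8), lg(0.2)]
print(viterbi(loglik, trans, init))  # [0, 1, 1, 1, 0]
```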
Network archaeology: phase transition in the recoverability of network history
Network growth processes can be understood as generative models of the structure and history of complex networks. This point of view naturally leads to the problem of network archaeology: Reconstructing all the past states of a network from its structure---a difficult permutation inference problem. In this paper, we introduce a Bayesian formulation of network archaeology, with a generalization of preferential attachment as our generative mechanism. We develop a sequential importance sampling algorithm to evaluate the posterior averages of this model, as well as an efficient heuristic that uncovers the history of a network in linear time. We use these methods to identify and characterize a phase transition in the quality of the reconstructed history, when they are applied to artificial networks generated by the model itself. Despite the existence of a no-recovery phase, we find that non-trivial inference is possible in a large portion of the parameter space as well as on empirical data.
0
0
0
1
0
0
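The generative half of this problem is easy to reproduce: a minimal linear preferential-attachment process whose recorded arrival order is exactly the hidden history that network archaeology tries to recover. This is illustrative only; the paper's model generalizes the attachment mechanism.

```python
import random

def grow_pa_network(n, seed=42):
    """Grow a tree by linear preferential attachment: each new node
    attaches to an existing node chosen with probability proportional
    to its degree.  Returns edges in arrival order (the hidden
    history an archaeology method would try to infer from the final
    unlabeled structure)."""
    rng = random.Random(seed)
    edges = [(0, 1)]
    stubs = [0, 1]   # each endpoint appearance counts one degree unit
    for new in range(2, n):
        target = rng.choice(stubs)   # degree-proportional choice
        edges.append((target, new))
        stubs += [target, new]
    return edges

edges = grow_pa_network(50)
print(len(edges))  # 49
```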
Big Data, Data Science, and Civil Rights
Advances in data analytics bring with them civil rights implications. Data-driven and algorithmic decision making increasingly determine how businesses target advertisements to consumers, how police departments monitor individuals or groups, how banks decide who gets a loan and who does not, how employers hire, how colleges and universities make admissions and financial aid decisions, and much more. As data-driven decisions increasingly affect every corner of our lives, there is an urgent need to ensure they do not become instruments of discrimination, barriers to equality, threats to social justice, and sources of unfairness. In this paper, we argue for a concrete research agenda aimed at addressing these concerns, comprising five areas of emphasis: (i) Determining if models and modeling procedures exhibit objectionable bias; (ii) Building awareness of fairness into machine learning methods; (iii) Improving the transparency and control of data- and model-driven decision making; (iv) Looking beyond the algorithm(s) for sources of bias and unfairness, in the myriad human decisions made during the problem formulation and modeling process; and (v) Supporting the cross-disciplinary scholarship necessary to do all of that well.
1
0
0
0
0
0
Global well-posedness for 2-D Boussinesq system with the temperature-dependent viscosity and supercritical dissipation
The present paper is dedicated to the global well-posedness issue for the Boussinesq system with temperature-dependent viscosity in $\mathbb{R}^2$. We aim at extending the work by Abidi and Zhang (Adv. Math. 305 (2017), 1202--1249) to a supercritical dissipation for the temperature.
0
0
1
0
0
0
CO~($J = 1-0$) Observations of a Filamentary Molecular Cloud in the Galactic Region Centered at $l = 150\arcdeg, b = 3.5\arcdeg$
We present large-field (4.25~$\times$~3.75 deg$^2$) mapping observations toward the Galactic region centered at $l = 150\arcdeg, b = 3.5\arcdeg$ in the $J = 1-0$ emission line of CO isotopologues ($^{12}$CO, $^{13}$CO, and C$^{18}$O), using the 13.7 m millimeter-wavelength telescope of the Purple Mountain Observatory. Based on the $^{13}$CO observations, we reveal a filamentary cloud in the Local Arm in the velocity range of $-$0.5 to 6.5~km~s$^{-1}$. This molecular cloud contains one main filament and 11 sub-filaments, showing the so-called "ridge-nest" structure. The main filament and three sub-filaments are also detected in the C$^{18}$O line. The velocity structures of most identified filaments display continuous distributions with slight velocity gradients. The measured median excitation temperature, line width, length, width, and linear mass of the filaments are $\sim$9.28~K, 0.85~km~s$^{-1}$, 7.30~pc, 0.79~pc, and 17.92~$M_\sun$~pc$^{-1}$, respectively, assuming a distance of 400~pc. We find that the four filaments detected in the C$^{18}$O line are thermally supercritical, and that two of them are in the virialized state and thus tend to be gravitationally bound. We identify in total 146 $^{13}$CO clumps in the cloud; about 77$\%$ of the clumps are distributed along the filaments. About 56$\%$ of the virialized clumps are found to be associated with the supercritical filaments. Three young stellar object (YSO) candidates are also identified in the supercritical filaments, based on complementary infrared (IR) data. These results indicate that the supercritical filaments, especially the virialized ones, may host star-forming activity.
0
1
0
0
0
0
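As a quick consistency check on the quoted numbers: the thermal critical line mass $M_{\rm crit} = 2 c_s^2/G$ at the median excitation temperature of 9.28 K (assuming a mean molecular weight of 2.33, a standard value not stated in the abstract) comes out near the quoted median line mass of 17.92 $M_\sun$ pc$^{-1}$, consistent with some filaments being thermally supercritical.

```python
# Thermal critical line mass M_crit = 2 c_s^2 / G in SI units,
# converted to solar masses per parsec.  mu = 2.33 is an assumed
# mean molecular weight per free particle.
k_B, m_H, G = 1.380649e-23, 1.6735575e-27, 6.67430e-11
M_sun, pc = 1.98892e30, 3.0857e16

T = 9.28                          # median excitation temperature [K]
cs2 = k_B * T / (2.33 * m_H)      # isothermal sound speed squared
M_crit = 2 * cs2 / G              # kg per metre
M_crit_sunpc = M_crit * pc / M_sun
print(round(M_crit_sunpc, 1))     # 15.3
```

A median observed line mass of ~17.9 $M_\sun$ pc$^{-1}$ exceeds this ~15 $M_\sun$ pc$^{-1}$ threshold, in line with the abstract's supercriticality claim.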
Leveraging Deep Neural Network Activation Entropy to cope with Unseen Data in Speech Recognition
Unseen data conditions can inflict serious performance degradation on systems relying on supervised machine learning algorithms. Because data can often be unseen, and because traditional machine learning algorithms are trained in a supervised manner, unsupervised adaptation techniques must be used to adapt the model to the unseen data conditions. However, unsupervised adaptation is often challenging, as one must generate some hypothesis given a model and then use that hypothesis to bootstrap the model to the unseen data conditions. Unfortunately, the reliability of such hypotheses is often poor, given the mismatch between the training and testing datasets. In such cases, a model hypothesis confidence measure enables data selection for model adaptation. Underlying this approach is the fact that under unseen data conditions, data variability is introduced to the model, which the model propagates to its output decision, impacting decision reliability. In a fully connected network, this data variability is propagated as distortions from one layer to the next. This work aims to estimate the propagation of such distortion in the form of network activation entropy, which is measured over a short-time running window on the activations from each neuron of a given hidden layer; these measurements are then used to compute a summary entropy. This work demonstrates that such an entropy measure can help to select data for unsupervised model adaptation, resulting in performance gains in speech recognition tasks. Results from standard benchmark speech recognition tasks show that the proposed approach can alleviate the performance degradation experienced under unseen data conditions by iteratively adapting the model to the unseen data's acoustic conditions.
1
0
0
1
0
0
Asymmetric Preheating
We study the generation of the matter-antimatter asymmetry during bosonic preheating, focusing on the sources of the asymmetry. If the asymmetry appears in the multiplication factor of the resonant particle production, the matter-antimatter ratio will grow during preheating. On the other hand, if the asymmetry does not grow during preheating, one must look for another source. We consider several scenarios of asymmetric preheating to distinguish the sources of the asymmetry. We also discuss a new baryogenesis scenario in which the asymmetry is generated without introducing either loop corrections or rotation of a field.
0
1
0
0
0
0
A multi-device dataset for urban acoustic scene classification
This paper introduces the acoustic scene classification task of the DCASE 2018 Challenge and the TUT Urban Acoustic Scenes 2018 dataset provided for the task, and evaluates the performance of a baseline system on the task. As in previous years of the challenge, the task is defined as classification of short audio samples into one of the predefined acoustic scene classes, using a supervised, closed-set classification setup. The newly recorded TUT Urban Acoustic Scenes 2018 dataset consists of ten different acoustic scenes and was recorded in six large European cities; it therefore has higher acoustic variability than the previous datasets used for this task, and in addition to high-quality binaural recordings, it also includes data recorded with mobile devices. We also present the baseline system, consisting of a convolutional neural network, and its performance in the subtasks using the recommended cross-validation setup.
1
0
0
0
0
0
Asymptotically preserving particle-in-cell methods for inhomogeneous strongly magnetized plasmas
We propose a class of Particle-In-Cell (PIC) methods for the Vlasov-Poisson system with a strong and inhomogeneous external magnetic field with fixed direction, where we focus on the motion of particles in the plane orthogonal to the magnetic field (so-called poloidal directions). In this regime, the time step can be subject to stability constraints related to the smallness of Larmor radius and plasma frequency. To avoid this limitation, our approach is based on first and higher-order semi-implicit numerical schemes already validated on dissipative systems [3] and for homogeneous magnetic fields [10]. Thus, when the magnitude of the external magnetic field becomes large, this method provides a consistent PIC discretization of the guiding-center system taking into account variations of the magnetic field. We carry out some theoretical proofs and perform several numerical experiments that establish a solid validation of the method and its underlying concepts.
0
0
1
0
0
0
Cluster Failure Revisited: Impact of First Level Design and Data Quality on Cluster False Positive Rates
Methodological research rarely generates broad interest, yet our work on the validity of cluster inference methods for functional magnetic resonance imaging (fMRI) created intense discussion on both the minutiae of our approach and its implications for the discipline. In the present work, we take on various critiques of our work and further explore its limitations. We address issues about the particular event-related designs we used, considering multiple event types and randomisation of events between subjects. We consider the lack of validity found with one-sample permutation (sign flipping) tests, investigating a number of approaches to improve the false positive control of this widely used procedure. We found that the combination of a two-sided test and cleaning the data using ICA FIX resulted in nominal false positive rates for all datasets, meaning that data cleaning is not only important for resting state fMRI, but also for task fMRI. Finally, we discuss the implications of our work for the fMRI literature as a whole, estimating that at least 10% of fMRI studies have used the most problematic cluster inference method (P = 0.01 cluster defining threshold), and how individual studies can be interpreted in light of our findings. These additional results underscore our original conclusions on the importance of data sharing and thorough evaluation of statistical methods on realistic null data.
0
0
0
1
0
0
An Integrated Simulator and Dataset that Combines Grasping and Vision for Deep Learning
Deep learning is an established framework for learning hierarchical data representations. While compute power is in abundance, one of the main challenges in applying this framework to robotic grasping has been obtaining the amount of data needed to learn these representations, and structuring the data to the task at hand. Among contemporary approaches in the literature, we highlight key properties that have encouraged the use of deep learning techniques, and in this paper, detail our experience in developing a simulator for collecting cylindrical precision grasps of a multi-fingered dexterous robotic hand.
1
0
0
1
0
0
A Recursive Bayesian Approach To Describe Retinal Vasculature Geometry
Demographic studies suggest that changes in the retinal vasculature geometry, especially in vessel width, are associated with the incidence or progression of eye-related or systemic diseases. To date, the main information source for width estimation from fundus images has been the intensity profile between vessel edges. However, there are many factors affecting the intensity profile: pathologies, the central light reflex and local illumination levels, to name a few. In this study, we introduce three information sources for width estimation. These are the probability profiles of vessel interior, centreline and edge locations generated by a deep network. The probability profiles provide direct access to vessel geometry and are used in the likelihood calculation for a Bayesian method, particle filtering. We also introduce a geometric model which can handle non-ideal conditions of the probability profiles. Our experiments conducted on the REVIEW dataset yielded consistent estimates of vessel width, even in cases when one of the vessel edges is difficult to identify. Moreover, our results suggest that the method is better than human observers at locating edges of low contrast vessels.
1
0
0
0
0
0
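Particle filtering of the kind used in this abstract can be illustrated with a minimal bootstrap filter tracking a nearly constant width from noisy 1-D measurements. This is a toy stand-in: the paper's likelihoods come from deep-network probability profiles of vessel interior, centreline and edges, not from a simple Gaussian.

```python
import numpy as np

def particle_filter_width(measurements, n_particles=2000, seed=0):
    """Bootstrap particle filter: random-walk motion model on the
    width, Gaussian measurement likelihood, multinomial resampling."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(1.0, 20.0, n_particles)  # prior width [px]
    for z in measurements:
        particles += rng.normal(0.0, 0.1, n_particles)   # predict
        w = np.exp(-0.5 * (z - particles) ** 2)          # weight
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)  # resample
        particles = particles[idx]
    return particles.mean()

true_width = 7.5
rng = np.random.default_rng(1)
zs = true_width + rng.normal(0.0, 1.0, 50)   # noisy edge measurements
est = particle_filter_width(zs)
print(round(est, 1))   # close to 7.5
```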
Quantum Harmonic Analysis of the Density Matrix: Basics
In this Review we will study rigorously the notion of mixed states and their density matrices. We mostly give complete proofs. We will also discuss the quantum-mechanical consequences of possible variations of Planck's constant h. This Review has been written with two readerships in mind: mathematical physicists and quantum physicists. The mathematical rigor is maximal, but the language and notation we use throughout should be familiar to physicists.
0
0
1
0
0
0
The bromodomain-containing protein Ibd1 links multiple chromatin related protein complexes to highly expressed genes in Tetrahymena thermophila
Background: The chromatin remodelers of the SWI/SNF family are critical transcriptional regulators. Recognition of lysine acetylation through a bromodomain (BRD) component is key to SWI/SNF function; in most eukaryotes, this function is attributed to SNF2/Brg1. Results: Using affinity purification coupled to mass spectrometry (AP-MS) we identified members of a SWI/SNF complex (SWI/SNFTt) in Tetrahymena thermophila. SWI/SNFTt is composed of 11 proteins: Snf5Tt, Swi1Tt, Swi3Tt, Snf12Tt, Brg1Tt, two proteins with potential chromatin-interacting domains and four proteins without orthologs to SWI/SNF proteins in yeast or mammals. SWI/SNFTt subunits localize exclusively to the transcriptionally active macronucleus (MAC) during growth and development, consistent with a role in transcription. While Tetrahymena Brg1 does not contain a BRD, our AP-MS results identified a BRD-containing SWI/SNFTt component, Ibd1, that associates with SWI/SNFTt during growth but not development. AP-MS analysis of epitope-tagged Ibd1 revealed it to be a subunit of several additional protein complexes, including putative SWRTt and SAGATt complexes as well as a putative H3K4-specific histone methyltransferase complex. Recombinant Ibd1 recognizes acetyl-lysine marks on histones correlated with active transcription. Consistent with our AP-MS and histone array data suggesting a role in regulation of gene expression, ChIP-Seq analysis of Ibd1 indicated that it primarily binds near promoters and within gene bodies of highly expressed genes during growth. Conclusions: Our results suggest that, through recognizing specific histone marks, Ibd1 targets active chromatin regions of highly expressed genes in Tetrahymena, where it might subsequently coordinate the recruitment of several chromatin remodeling complexes to regulate the transcriptional landscape of vegetatively growing Tetrahymena cells.
0
0
0
0
1
0
A Bayesian Perspective on Generalization and Stochastic Gradient Descent
We consider two questions at the heart of machine learning: how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work responds to Zhang et al. (2016), who showed deep neural networks can easily memorize randomly labeled training data, despite generalizing well on real labels of the same inputs. We show that the same phenomenon occurs in small linear models. These observations are explained by the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization. We also demonstrate that, when one holds the learning rate fixed, there is an optimum batch size which maximizes the test set accuracy. We propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large. Interpreting stochastic gradient descent as a stochastic differential equation, we identify the "noise scale" $g = \epsilon (\frac{N}{B} - 1) \approx \epsilon N/B$, where $\epsilon$ is the learning rate, $N$ the training set size and $B$ the batch size. Consequently the optimum batch size is proportional to both the learning rate and the size of the training set, $B_{opt} \propto \epsilon N$. We verify these predictions empirically.
1
0
0
1
0
0
Higher Derivative Field Theories: Degeneracy Conditions and Classes
We provide a full analysis of ghost free higher derivative field theories with coupled degrees of freedom. Assuming the absence of gauge symmetries, we derive the degeneracy conditions in order to evade the Ostrogradsky ghosts, and analyze which (non)trivial classes of solutions this allows for. It is shown explicitly how Lorentz invariance avoids the propagation of "half" degrees of freedom. Moreover, for a large class of theories, we construct the field redefinitions and/or (extended) contact transformations that put the theory in a manifestly first order form. Finally, we identify which class of theories cannot be brought to first order form by such transformations.
0
1
0
0
0
0
3D Morphology Prediction of Progressive Spinal Deformities from Probabilistic Modeling of Discriminant Manifolds
We introduce a novel approach for predicting the progression of adolescent idiopathic scoliosis from 3D spine models reconstructed from biplanar X-ray images. Recent progress in machine learning has improved classification and prognosis rates, but lacks a probabilistic framework to measure uncertainty in the data. We propose a discriminative probabilistic manifold embedding where locally linear mappings transform data points from high-dimensional space to corresponding low-dimensional coordinates. A discriminant adjacency matrix is constructed to maximize the separation between progressive and non-progressive groups of patients diagnosed with scoliosis, while minimizing the distance in latent variables belonging to the same class. To predict the evolution of deformation, a baseline reconstruction is projected onto the manifold, from which a spatiotemporal regression model is built from parallel transport curves inferred from neighboring exemplars. Rate of progression is modulated from the spine flexibility and curve magnitude of the 3D spine deformation. The method was tested on 745 reconstructions from 133 subjects using longitudinal 3D reconstructions of the spine, with results demonstrating that the discriminative framework can distinguish between progressive and non-progressive scoliotic patients with a classification rate of 81% and prediction differences of 2.1$^{o}$ in main curve angulation, outperforming other manifold learning methods. Our method achieved a higher prediction accuracy and improved the modeling of spatiotemporal morphological changes in highly deformed spines compared to other learning methods.
1
0
0
1
0
0
The Galaxy's Veil of Excited Hydrogen
Many of the baryons in our Galaxy probably lie outside the well known disk and bulge components. Despite a wealth of evidence for the presence of some gas in galactic halos, including absorption line systems in the spectra of quasars, high velocity neutral hydrogen clouds in our Galaxy's halo, line emitting ionised hydrogen originating from galactic winds in nearby starburst galaxies, and the X-ray coronas surrounding the most massive galaxies, accounting for the gas in the halo of any galaxy has been observationally challenging primarily because of its low density in the expansive halo. The most sensitive measurements come from detecting absorption by the intervening gas in the spectra of distant objects such as quasars or distant halo stars, but these have typically been limited to a few lines of sight to sufficiently bright objects. Massive spectroscopic surveys of millions of objects provide an alternative approach to the problem. Here, we present the first evidence for a widely distributed, neutral, excited hydrogen component of the Galaxy's halo. It is observed as the slight, (0.779 $\pm$ 0.006)\%, absorption of flux near the rest wavelength of H$\alpha$ in the combined spectra of hundreds of thousands of galaxies and is ubiquitous in high latitude lines of sight. This observation provides an avenue to tracing, both spatially and kinematically, the majority of the gas in the halo of our Galaxy.
0
1
0
0
0
0
Photonic-chip supercontinuum with tailored spectra for precision frequency metrology
Supercontinuum generation using chip-integrated photonic waveguides is a powerful approach for spectrally broadening pulsed laser sources with very low pulse energies and compact form factors. When pumped with a mode-locked laser frequency comb, these waveguides can coherently expand the comb spectrum to more than an octave in bandwidth to enable self-referenced stabilization. However, for applications in frequency metrology and precision spectroscopy, it is desirable to not only support self-referencing, but also to generate low-noise combs with customizable broadband spectra. In this work, we demonstrate dispersion-engineered waveguides based on silicon nitride that are designed to meet these goals and enable precision optical metrology experiments across large wavelength spans. We perform a clock comparison measurement and report a clock-limited relative frequency instability of $3.8\times10^{-15}$ at $\tau = 2$ seconds between a 1550 nm cavity-stabilized reference laser and NIST's calcium atomic clock laser at 657 nm using a two-octave waveguide-supercontinuum comb.
0
1
0
0
0
0
Endogenizing Epistemic Actions
Through a series of examples, we illustrate some important drawbacks that the action logic framework suffers from in its ability to represent the dynamics of information updates. We argue that these problems stem from the fact that the action model, a central construct designed to encode agents' uncertainty about actions, is itself effectively common knowledge amongst the agents. In response to these difficulties, we motivate and propose an alternative semantics that avoids them by (roughly speaking) endogenizing the action model. We discuss the relationship to action logic, and provide a sound and complete axiomatization.
1
0
0
0
0
0
The Meaning of Memory Safety
We give a rigorous characterization of what it means for a programming language to be memory safe, capturing the intuition that memory safety supports local reasoning about state. We formalize this principle in two ways. First, we show how a small memory-safe language validates a noninterference property: a program can neither affect nor be affected by unreachable parts of the state. Second, we extend separation logic, a proof system for heap-manipulating programs, with a memory-safe variant of its frame rule. The new rule is stronger because it applies even when parts of the program are buggy or malicious, but also weaker because it demands a stricter form of separation between parts of the program state. We also consider a number of pragmatically motivated variations on memory safety and the reasoning principles they support. As an application of our characterization, we evaluate the security of a previously proposed dynamic monitor for memory safety of heap-allocated data.
1
0
0
0
0
0
The Flexible Group Spatial Keyword Query
We present a new class of service for location based social networks, called the Flexible Group Spatial Keyword Query, which enables a group of users to collectively find a point of interest (POI) that optimizes an aggregate cost function combining both spatial distances and keyword similarities. In addition, our query service allows users to consider the tradeoffs between obtaining a sub-optimal solution for the entire group and obtaining an optimized solution but only for a subgroup. We propose algorithms to process three variants of the query: (i) the group nearest neighbor with keywords query, which finds a POI that optimizes the aggregate cost function for the whole group of size n, (ii) the subgroup nearest neighbor with keywords query, which finds the optimal subgroup and a POI that optimizes the aggregate cost function for a given subgroup size m (m <= n), and (iii) the multiple subgroup nearest neighbor with keywords query, which finds optimal subgroups and corresponding POIs for each of the subgroup sizes in the range [m, n]. We design query processing algorithms based on branch-and-bound and best-first paradigms. Finally, we provide theoretical bounds and conduct extensive experiments with two real datasets which verify the effectiveness and efficiency of the proposed algorithms.
1
0
0
0
0
0
Undersampled windowed exponentials and their applications
We characterize the completeness and frame/basis property of a union of under-sampled windowed exponentials of the form $$ {\mathcal F}(g): =\{e^{2\pi i n x}: n\ge 0\}\cup \{g(x)e^{2\pi i nx}: n<0\} $$ for $L^2[-1/2,1/2]$ by the spectra of the Toeplitz operators with symbol $g$. Using this characterization, we classify all real-valued functions $g$ such that ${\mathcal F}(g)$ is complete or forms a frame/basis. Conversely, we use the classical Kadec-1/4-theorem in non-harmonic Fourier series to determine all $\xi$ such that the Toeplitz operator with symbol $e^{2\pi i \xi x}$ is injective or invertible. These results demonstrate an elegant interaction between the frame theory of windowed exponentials and Toeplitz operators. Finally, as an application, we use our results to answer some open questions in dynamical sampling, phase retrieval and derivative sampling on $\ell^2({\mathbb Z})$ and Paley-Wiener spaces of bandlimited functions.
0
0
1
0
0
0
Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs
Deep learning models are often successfully trained using gradient descent, despite the worst case hardness of the underlying non-convex optimization problem. The key question is then under what conditions can one prove that optimization will succeed. Here we provide a strong result of this kind. We consider a neural net with one hidden layer and a convolutional structure with no overlap and a ReLU activation function. For this architecture we show that learning is NP-complete in the general case, but that when the input distribution is Gaussian, gradient descent converges to the global optimum in polynomial time. To the best of our knowledge, this is the first global optimality guarantee of gradient descent on a convolutional neural network with ReLU activations.
0
0
1
1
0
0
Detection of irregular QRS complexes using Hermite Transform and Support Vector Machine
Computer-based recognition and detection of abnormalities in ECG signals is proposed. For this purpose, Support Vector Machines (SVM) are combined with the advantages of the Hermite transform representation. SVMs represent a special type of classification technique commonly used in medical applications. Automatic classification of ECG could make the work of cardiologic departments faster and more efficient. It would also reduce the number of false diagnoses and, as a result, save lives. The working principle of the SVM is based on translating the data into a high dimensional feature space and separating it using a linear classifier. In order to provide an optimal representation for SVM application, the Hermite transform domain is used. This domain has proved suitable because of the similarity of the QRS complex with Hermite basis functions. The maximal signal information is obtained using a small set of features that are used for detection of irregular QRS complexes. The aim of the paper is to show that these features can be employed for automatic ECG signal analysis.
1
0
0
0
0
0
On the Lipschitz equivalence of self-affine sets
Let $A$ be an expanding $d\times d$ matrix with integer entries and ${\mathcal D}\subset {\mathbb Z}^d$ be a finite digit set. Then the pair $(A, {\mathcal D})$ defines a unique integral self-affine set $K=A^{-1}(K+{\mathcal D})$. In this paper, by replacing the Euclidean norm with a pseudo-norm $w$ in terms of $A$, we construct a hyperbolic graph on $(A, {\mathcal D})$ and show that $K$ can be identified with the hyperbolic boundary. Moreover, if $(A, {\mathcal D})$ satisfies the open set condition, we also prove that two totally disconnected integral self-affine sets are Lipschitz equivalent if and only if they have the same $w$-Hausdorff dimension, that is, their digit sets have equal cardinality. We extend some well-known results on self-similar sets to self-affine sets.
0
0
1
0
0
0
The Gross-Pitaevskii equations of a static and spherically symmetric condensate of gravitons
In this paper we consider the Dvali and Gómez assumption that the end state of a gravitational collapse is a Bose-Einstein condensate of gravitons. We then construct the two Gross-Pitaevskii equations for a static and spherically symmetric configuration of the condensate. These two equations correspond to the constrained minimisation of the gravitational Hamiltonian with respect to the redshift and the Newtonian potential, per given number of gravitons. We find that the effective geometry of the condensate is the one of a gravastar (a de Sitter star) with a sub-Planckian cosmological constant, for masses larger than the Planck scale. Thus, a condensate corresponding to a semiclassical black hole is always quantum and weakly coupled. Finally, we obtain that the boundary of our gravastar, although it is not the location of a horizon, corresponds to the Schwarzschild radius.
0
1
0
0
0
0
Stability and instability in saddle point dynamics - Part I
We consider the problem of convergence to a saddle point of a concave-convex function via gradient dynamics. Since they were first introduced by Arrow, Hurwicz and Uzawa in [1], such dynamics have been extensively used in diverse areas; there are, however, features that render their analysis nontrivial. These include the lack of convergence guarantees when the function considered is not strictly concave-convex and also the non-smoothness of subgradient dynamics. Our aim in this two part paper is to provide an explicit characterization of the asymptotic behaviour of general gradient and subgradient dynamics applied to a general concave-convex function. We show that despite the nonlinearity and non-smoothness of these dynamics their $\omega$-limit set is comprised of trajectories that solve only explicit linear ODEs that are characterized within the paper. More precisely, in Part I an exact characterization is provided of the asymptotic behaviour of unconstrained gradient dynamics. We also show that when convergence to a saddle point is not guaranteed then the system behaviour can be problematic, with arbitrarily small noise leading to an unbounded variance. In Part II we consider a general class of subgradient dynamics that restrict trajectories in an arbitrary convex domain, and show that their limiting trajectories are solutions of subgradient dynamics on only affine subspaces. The latter is a smooth class of dynamics with an asymptotic behaviour exactly characterized in Part I, as solutions to explicit linear ODEs. These results are used to formulate corresponding convergence criteria and are demonstrated with several examples and applications presented in Part II.
1
0
1
0
0
0
Continual Prediction of Notification Attendance with Classical and Deep Network Approaches
We investigate to what extent mobile use patterns can predict -- at the moment it is posted -- whether a notification will be clicked within the next 10 minutes. We use a data set containing the detailed mobile phone usage logs of 279 users, who over the course of 5 weeks received 446,268 notifications from a variety of apps. Besides using classical gradient-boosted trees, we demonstrate how to make continual predictions using a recurrent neural network (RNN). The two approaches achieve a similar AUC of ca. 0.7 on unseen users, with a possible operation point of 50% sensitivity and 80% specificity considering all notification types (an increase of 40% with respect to a probabilistic baseline). These results enable automatic, intelligent handling of mobile phone notifications without the need for user feedback or personalization. Furthermore, they showcase how to forego feature extraction by using RNNs for continual predictions directly on mobile usage logs. To the best of our knowledge, this is the first work that leverages mobile sensor data for continual, context-aware predictions of interruptibility using deep neural networks.
1
0
0
0
0
0
Extensions of Operators, Liftings of Monads and Distributive Laws
In a previous study, the algebraic formulation of the First Fundamental Theorem of Calculus (FFTC) is shown to allow extensions of differential and Rota-Baxter operators on the one hand, and to give rise to liftings of monads and comonads, and mixed distributive laws on the other. Generalizing the FFTC, we consider in this paper a class of constraints between a differential operator and a Rota-Baxter operator. For a given constraint, we show that the existences of extensions of differential and Rota-Baxter operators, of liftings of monads and comonads, and of mixed distributive laws are equivalent. We further give a classification of the constraints satisfying these equivalent conditions.
0
0
1
0
0
0
Cascaded Incremental Nonlinear Dynamic Inversion Control for MAV Disturbance Rejection
Micro Aerial Vehicles (MAVs) are limited in their operation outdoors near obstacles by their ability to withstand wind gusts. Currently widespread position control methods such as Proportional Integral Derivative control do not perform well under the influence of gusts. Incremental Nonlinear Dynamic Inversion (INDI) is a sensor-based control technique that can control nonlinear systems subject to disturbances. It was developed for the attitude control of manned aircraft or MAVs. In this paper we generalize this method to the outer loop control of MAVs under severe gust loads. Significant improvements over a traditional Proportional Integral Derivative (PID) controller are demonstrated in an experiment where the quadrotor flies in and out of a wind tunnel exhaust at 10 m/s. The control method does not rely on frequent position updates, as is demonstrated in an outside experiment using a standard GPS module. Finally, we investigate the effect of using a linearization to calculate thrust vector increments, compared to a nonlinear calculation. The method requires little modeling and is computationally efficient.
1
0
0
0
0
0
Numerical Evaluation of Elliptic Functions, Elliptic Integrals and Modular Forms
We describe algorithms to compute elliptic functions and their relatives (Jacobi theta functions, modular forms, elliptic integrals, and the arithmetic-geometric mean) numerically to arbitrary precision with rigorous error bounds for arbitrary complex variables. Implementations in ball arithmetic are available in the open source Arb library. We discuss the algorithms from a concrete implementation point of view, with focus on performance at tens to thousands of digits of precision.
1
0
0
0
0
0
In situ Electric Field Skyrmion Creation in Magnetoelectric Cu$_2$OSeO$_3$
Magnetic skyrmions are localized nanometric spin textures with quantized winding numbers as the topological invariant. Rapidly increasing attention has been paid to the investigation of skyrmions since their experimental discovery in 2009, due both to their fundamental properties and to their promising potential in spintronics based applications. However, controlled creation of skyrmions remains a pivotal challenge towards technological applications. Here, we report that skyrmions can be created locally by an electric field in the magnetoelectric helimagnet Cu$\mathsf{_2}$OSeO$\mathsf{_3}$. Using Lorentz transmission electron microscopy, we successfully write skyrmions in situ from a helical spin background. This capability is highly coveted since it implies that skyrmionics can be integrated into contemporary field effect transistor based electronic technology, where very low energy dissipation can be achieved, and hence represents a large step toward practical applications.
0
1
0
0
0
0
An online sequence-to-sequence model for noisy speech recognition
Generative models have long been the dominant approach for speech recognition. The success of these models however relies on the use of sophisticated recipes and complicated machinery that is not easily accessible to non-practitioners. Recent innovations in Deep Learning have given rise to an alternative - discriminative models called Sequence-to-Sequence models, that can almost match the accuracy of state of the art generative models. While these models are easy to train as they can be trained end-to-end in a single step, they have a practical limitation that they can only be used for offline recognition. This is because the models require that the entirety of the input sequence be available at the beginning of inference, an assumption that is not valid for instantaneous speech recognition. To address this problem, online sequence-to-sequence models were recently introduced. These models are able to start producing outputs as data arrives, once the model feels confident enough to output partial transcripts. These models, like sequence-to-sequence models, are causal - the output produced by the model until any time, $t$, affects the features that are computed subsequently. This makes the model inherently more powerful than generative models that are unable to change features that are computed from the data. This paper highlights two main contributions - an improvement to online sequence-to-sequence model training, and its application to noisy settings with mixed speech from two speakers.
1
0
0
1
0
0
Existence of infinite Viterbi path for pairwise Markov models
For hidden Markov models one of the most popular estimates of the hidden chain is the Viterbi path -- the path maximising the posterior probability. We consider a more general setting, called the pairwise Markov model, where the joint process consisting of a finite-state hidden regime and an observation process is assumed to be a Markov chain. We prove that under some conditions it is possible to extend the Viterbi path to infinity for almost every observation sequence, which in turn enables us to define an infinite Viterbi decoding of the observation process, called the Viterbi process. This is done by constructing a block of observations, called a barrier, which ensures that the Viterbi path goes through a given state whenever this block occurs in the observation sequence.
0
0
1
1
0
0
Coding for Segmented Edit Channels
This paper considers insertion and deletion channels with the additional assumption that the channel input sequence is implicitly divided into segments such that at most one edit can occur within a segment. No segment markers are available in the received sequence. We propose code constructions for the segmented deletion, segmented insertion, and segmented insertion-deletion channels based on subsets of Varshamov-Tenengolts codes chosen with pre-determined prefixes and/or suffixes. The proposed codes, constructed for any finite alphabet, are zero-error and can be decoded segment-by-segment. We also derive an upper bound on the rate of any zero-error code for the segmented edit channel, in terms of the segment length. This upper bound shows that the rate scaling of the proposed codes as the segment length increases is the same as that of the maximal code.
1
0
0
0
0
0
Charge compensation at the interface between the polar NaCl(111) surface and a NaCl aqueous solution
Periodic supercell models of electric double layers formed at the interface between a charged surface and an electrolyte are subject to serious finite size errors and require certain adjustments in the treatment of the long-range electrostatic interactions. In a previous publication (C. Zhang, M. Sprik, Phys. Rev. B 94, 245309 (2016)) we have shown how this can be achieved using finite field methods. The test system was the familiar simple point charge model of a NaCl aqueous solution confined between two oppositely charged walls. Here this method is extended to the interface between the (111) polar surface of a NaCl crystal and a high concentration NaCl aqueous solution. The crystal is kept completely rigid and the compensating charge screening the polarization can only be provided by the electrolyte. We verify that the excess electrolyte ionic charge at the interface conforms to the Tasker 1/2 rule for compensating charge in the theory of polar rocksalt (111) surfaces. The interface can be viewed as an electric double layer with a net charge. We define a generalized Helmholtz capacitance $C_\text{H}$ which can be computed by varying the applied electric field. We find $C_\text{H} = 8.23 \, \mu \mathrm{Fcm}^{-2}$, which should be compared to the $4.23 \, \mu \mathrm{Fcm}^{-2}$ for the (100) non-polar surface of the same NaCl crystal. This is rationalized by the observation that compensating ions shed their first solvation shell adsorbing as contact ions pairs on the polar surface.
0
1
0
0
0
0
CrowdTone: Crowd-powered tone feedback and improvement system for emails
In this paper, we present CrowdTone, a system designed to help people set the appropriate tone in their email communication. CrowdTone utilizes the context and content of an email message to identify and set the appropriate tone through a consensus-building process executed by crowd workers. We evaluated CrowdTone with 22 participants, who provided a total of 29 emails that they had received in the past, and ran them through CrowdTone. Participants and professional writers assessed the quality of improvements finding a substantial increase in the percentage of emails deemed "appropriate" or "very appropriate" - from 25% to more than 90% by recipients, and from 45% to 90% by professional writers. Additionally, the recipients' feedback indicated that more than 90% of the CrowdTone processed emails showed improvement.
1
0
0
0
0
0
A Deterministic and Generalized Framework for Unsupervised Learning with Restricted Boltzmann Machines
Restricted Boltzmann machines (RBMs) are energy-based neural networks which are commonly used as the building blocks for deep neural architectures. In this work, we derive a deterministic framework for the training, evaluation, and use of RBMs based upon the Thouless-Anderson-Palmer (TAP) mean-field approximation of widely-connected systems with weak interactions coming from spin-glass theory. While the TAP approach has been extensively studied for fully-visible binary spin systems, our construction is generalized to latent-variable models, as well as to arbitrarily distributed real-valued spin systems with bounded support. In our numerical experiments, we demonstrate the effective deterministic training of our proposed models and are able to show interesting features of unsupervised learning which could not be directly observed with sampling. Additionally, we demonstrate how to utilize our TAP-based framework for leveraging trained RBMs as joint priors in denoising problems.
1
1
0
1
0
0
Integrability conditions for Compound Random Measures
Compound random measures (CoRM's) are a flexible and tractable framework for vectors of completely random measures. In this paper, we provide conditions to guarantee the existence of a CoRM. Furthermore, we prove some interesting properties of CoRM's when exponential scores and regularly varying Lévy intensities are considered.
0
0
1
1
0
0
Tunable low energy Ps beam for the anti-hydrogen free fall and for testing gravity with a Mach-Zehnder interferometer
The test of the gravitational force on antimatter in the gravitational field produced by the Earth can be done by a free fall experiment, which involves only General Relativity, and with a Mach-Zehnder interferometer, which involves Quantum Mechanics. This article presents a new method to produce a tunable low energy positronium (Ps) beam suitable for trapping the (Hbar+) ion in a free fall experiment, and suitable for a gravity Mach-Zehnder interferometer with Ps. The low energy Ps beam is tunable in the [10 eV, 100 eV] range.
0
1
0
0
0
0
An analysis of incorporating an external language model into a sequence-to-sequence model
Attention-based sequence-to-sequence models for automatic speech recognition jointly train an acoustic model, language model, and alignment mechanism. Thus, the language model component is only trained on transcribed audio-text pairs. This leads to the use of shallow fusion with an external language model at inference time. Shallow fusion refers to log-linear interpolation with a separately trained language model at each step of the beam search. In this work, we investigate the behavior of shallow fusion across a range of conditions: different types of language models, different decoding units, and different tasks. On Google Voice Search, we demonstrate that the use of shallow fusion with a neural LM with wordpieces yields a 9.1% relative word error rate reduction (WERR) over our competitive attention-based sequence-to-sequence model, obviating the need for second-pass rescoring.
1
0
0
0
0
0
Multi-Generator Generative Adversarial Nets
We propose a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapse and delivering state-of-the-art results. A minimax formulation is established among a classifier, a discriminator, and a set of generators in a similar spirit to GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output, similar to the mechanism of a probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators' distributions and the empirical data distribution is minimal, whilst the JSD among generators' distributions is maximal, hence effectively avoiding the mode collapse. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by generators.
1
0
0
1
0
0
Neural Style Transfer: A Review
The seminal work of Gatys et al. demonstrated the power of Convolutional Neural Networks (CNNs) in creating artistic imagery by separating and recombining image content and style. This process of using CNNs to render a content image in different styles is referred to as Neural Style Transfer (NST). Since then, NST has become a trending topic both in academic literature and industrial applications. It is receiving increasing attention and a variety of approaches are proposed to either improve or extend the original NST algorithm. In this paper, we aim to provide a comprehensive overview of the current progress towards NST. We first propose a taxonomy of current algorithms in the field of NST. Then, we present several evaluation methods and compare different NST algorithms both qualitatively and quantitatively. The review concludes with a discussion of various applications of NST and open problems for future research. A list of papers discussed in this review, corresponding codes, pre-trained models and more comparison results are publicly available at this https URL.
1
0
0
1
0
0
Distribution of water in the G327.3-0.6 massive star-forming region
We aim at characterizing the large-scale distribution of H2O in G327.3-0.6, a massive star-forming region made of individual objects in different evolutionary phases. We investigate variations of H2O abundance as a function of evolution. We present Herschel continuum maps at 89 and 179 $\mu$m of the whole region and an APEX map at 350 {\mu}m of the IRDC. New spectral HIFI maps toward the IRDC region covering low-energy H2O lines at 987 and 1113 GHz are also presented and combined with HIFI pointed observations of the G327 hot core. We infer the physical properties of the gas through optical depth analysis and radiative transfer modeling. The continuum emission at 89 and 179 {\mu}m follows the thermal continuum emission at longer wavelengths, with a peak at the position of the hot core, a secondary peak in the Hii region, and an arch-like layer of hot gas west of the Hii region. The same morphology is observed in the 1113 GHz line, in absorption toward all dust condensations. Optical depths of ~80 and 15 are estimated and correspond to column densities of 10^15 and 2 10^14 cm-2, for the hot core and IRDC position. These values indicate an H2O to H2 ratio of 3 10^-8 toward the hot core; the abundance of H2O does not change along the IRDC with values of some 10^-8. Infall (over ~ 20") is detected toward the hot core position with a rate of 1-1.3 10^-2 M_sun /yr, high enough to overcome the radiation pressure due to the stellar luminosity. The source structure of the hot core region is complex, with a cold outer gas envelope in expansion, situated between the outflow and the observer, extending over 0.32 pc. The outflow is seen face-on and centered away from the hot core. The distribution of H2O along the IRDC is roughly constant with an abundance peak in the more evolved object. These water abundances are in agreement with previous studies in other massive objects and chemical models.
0
1
0
0
0
0
Straightening rule for an $m'$-truncated polynomial ring
We consider a certain quotient of a polynomial ring categorified by both the isomorphic Green rings of the symmetric groups and Schur algebras generated by the signed Young permutation modules and mixed powers respectively. They have bases parametrised by pairs of partitions whose second partitions are multiples of the odd prime $p$, the characteristic of the underlying field. We provide an explicit formula rewriting a signed Young permutation module (respectively, mixed power) in terms of signed Young permutation modules (respectively, mixed powers) labelled by those pairs of partitions. As a result, for each partition $\lambda$, we determine the number of compositions $\delta$ that can be rearranged to $\lambda$ and whose partial sums are not divisible by $p$.
0
0
1
0
0
0
Effective Blog Pages Extractor for Better UGC Accessing
Blog is becoming an increasingly popular medium for information publishing. Besides the main content, most blog pages nowadays also contain noisy information such as advertisements. Removing these unrelated elements not only improves the user experience, but also better adapts the content to various devices such as mobile phones. Though template-based extractors are highly accurate, they may incur high costs in that a large number of templates need to be developed, and they will fail once a template is updated. To address these issues, we present a novel template-independent content extractor for blog pages. First, we convert a blog page into a DOM-Tree, where all elements including the title and body blocks in a page correspond to subtrees. Then we construct subtree candidate sets for the title and the body blocks respectively, and extract both spatial and content features for elements contained in the subtrees. SVM classifiers for the title and the body blocks are trained using these features. Finally, the classifiers are used to extract the main content from blog pages. We test our extractor on 2,250 blog pages crawled from nine blog sites with obviously different styles and templates. Experimental results verify the effectiveness of our extractor.
1
0
0
0
0
0
The Principle of Similitude in Biology: From Allometry to the Formulation of Dimensionally Homogenous `Laws'
Meaningful laws of nature must be independent of the units employed to measure the variables. The principle of similitude (Rayleigh 1915), or dimensional homogeneity, states that only commensurable quantities (ones having the same dimension) may be compared; therefore, meaningful laws of nature must be homogeneous equations in their various units of measurement, a result which was formalized in the $\rm \Pi$ theorem (Vaschy 1892; Buckingham 1914). However, most relations in allometry do not satisfy this basic requirement, including the `3/4 Law' (Kleiber 1932) that relates the basal metabolic rate and body mass, which is sometimes claimed to be the most fundamental biological rate (Brown et al. 2004) and the closest to a law in life sciences (West \& Brown 2004). Using the $\rm \Pi$ theorem, here we show that it is possible to construct a unique homogeneous equation for the metabolic rates, in agreement with data in the literature. We find that the variations in the dependence of the metabolic rates on body mass are secondary, coming from variations in the allometric dependence of the heart frequencies. This includes not only different classes of animals (mammals, birds, invertebrates) but also different exercise conditions (basal and maximal). Our results demonstrate that most of the differences found in the allometric exponents (White et al. 2007) are due to comparing incommensurable quantities and that our dimensionally homogeneous formula unifies these differences into a single formulation. We discuss the ecological implications of this new formulation in the context of the Malthusian and Fenchel relations and of the total energy consumed in a lifespan.
0
1
0
0
0
0
New insight into the dynamics of rhodopsin photoisomerization from one-dimensional quantum-classical modeling
Characterization of the primary events involved in the $cis-trans$ photoisomerization of the rhodopsin retinal chromophore was approximated by a minimum one-dimensional quantum-classical model. The developed mathematical model is identical to that obtained using conventional quantum-classical approaches, and multiparametric quantum-chemical or molecular dynamics (MD) computations were not required. The quantum subsystem of the model includes three electronic states for rhodopsin: (i) the ground state, (ii) the excited state, and (iii) the primary photoproduct in the ground state. The resultant model is in perfect agreement with experimental data in terms of the quantum yield, the time required to reach the conical intersection and to complete the quantum evolution, the range of the characteristic low frequencies active within the primary events of the $11-cis$ retinal isomerization, and the coherent character of the photoreaction. An effective redistribution of excess energy between the vibration modes of rhodopsin was revealed by analysis of the dissipation process. The results confirm the validity of the minimal model, despite its one-dimensional character. The fundamental nature of the photoreaction was therefore demonstrated using a minimum mathematical model for the first time.
0
1
0
0
0
0
High-order schemes for the Euler equations in cylindrical/spherical coordinates
We consider implementations of high-order finite difference Weighted Essentially Non-Oscillatory (WENO) schemes for the Euler equations in cylindrical and spherical coordinate systems with radial dependence only. The main concern of this work lies in ensuring both high-order accuracy and conservation. Three different spatial discretizations are assessed: one that is shown to be high-order accurate but not conservative, one conservative but not high-order accurate, and a new approach that is both high-order accurate and conservative. For cylindrical and spherical coordinates, we present convergence results for the advection equation and the Euler equations with an acoustics problem; we then use the Sod shock tube and the Sedov point-blast problems in cylindrical coordinates to verify our analysis and implementations.
0
1
1
0
0
0
Suppressing correlations in massively parallel simulations of lattice models
For lattice Monte Carlo simulations, parallelization is crucial to make studies of large systems and long simulation times feasible, while sequential simulations remain the gold standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for $2+1$ dimensional Kardar--Parisi--Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about $30\times$ over a parallel CPU implementation on a single socket and at least $180\times$ with respect to the sequential reference.
0
1
0
0
0
0
New bounds on the strength of some restrictions of Hindman's Theorem
We prove upper and lower bounds on the effective content and logical strength for a variety of natural restrictions of Hindman's Finite Sums Theorem. For example, we show that Hindman's Theorem for sums of length at most 2 and 4 colors implies $\mathsf{ACA}_0$. An emerging {\em leitmotiv} is that the known lower bounds for Hindman's Theorem and for its restriction to sums of at most 2 elements are already valid for a number of restricted versions which have simple proofs and better computability- and proof-theoretic upper bounds than the known upper bound for the full version of the theorem. We highlight the role of a sparsity-like condition on the solution set, which we call apartness.
0
0
1
0
0
0
3D Face Morphable Models "In-the-Wild"
3D Morphable Models (3DMMs) are powerful statistical models of 3D facial shape and texture, and among the state-of-the-art methods for reconstructing facial shape from single images. With the advent of new 3D sensors, many 3D facial datasets have been collected containing both neutral as well as expressive faces. However, all datasets are captured under controlled conditions. Thus, even though powerful 3D facial shape models can be learnt from such data, it is difficult to build statistical texture models that are sufficient to reconstruct faces captured in unconstrained conditions ("in-the-wild"). In this paper, we propose the first, to the best of our knowledge, "in-the-wild" 3DMM by combining a powerful statistical model of facial shape, which describes both identity and expression, with an "in-the-wild" texture model. We show that the employment of such an "in-the-wild" texture model greatly simplifies the fitting procedure, because there is no need to optimize with regards to the illumination parameters. Furthermore, we propose a new fast algorithm for fitting the 3DMM in arbitrary images. Finally, we have captured the first 3D facial database with relatively unconstrained conditions and report quantitative evaluations with state-of-the-art performance. Complementary qualitative reconstruction results are demonstrated on standard "in-the-wild" facial databases. An open source implementation of our technique is released as part of the Menpo Project.
1
0
0
0
0
0
Image Segmentation to Distinguish Between Overlapping Human Chromosomes
In medicine, visualizing chromosomes is important for medical diagnostics, drug development, and biomedical research. Unfortunately, chromosomes often overlap and it is necessary to identify and distinguish between the overlapping chromosomes. A segmentation solution that is fast and automated will enable scaling of cost effective medicine and biomedical research. We apply neural network-based image segmentation to the problem of distinguishing between partially overlapping DNA chromosomes. A convolutional neural network is customized for this problem. The results achieved intersection over union (IOU) scores of 94.7% for the overlapping region and 88-94% on the non-overlapping chromosome regions.
1
0
0
1
0
0
Implementing Large-Scale Agile Frameworks: Challenges and Recommendations
Based on 13 agile transformation cases over 15 years, this article identifies nine challenges associated with implementing SAFe, Scrum-at-Scale, Spotify, LeSS, Nexus, and other mixed or customised large-scale agile frameworks. These challenges should be considered by organizations aspiring to pursue a large-scale agile strategy. This article also provides recommendations for practitioners and agile researchers.
1
0
0
0
0
0
On The Limiting Distributions of the Total Height On Families of Trees
A symbolic-computational algorithm, fully implemented in Maple, is described, that computes explicit expressions for generating functions that enable the efficient computations of the expectation, variance, and higher moments, of the random variable `sum of distances to the root', defined on any given family of rooted ordered trees (defined by degree restrictions). Taking limits, we confirm, via elementary methods, the fact, due to David Aldous, and expanded by Svante Janson and others, that the limiting (scaled) distributions are all the same, and coincide with the limiting distribution of the same random variable, when it is defined on labeled rooted trees.
0
0
1
0
0
0
HESS J1826$-$130: A Very Hard $γ$-Ray Spectrum Source in the Galactic Plane
HESS J1826$-$130 is an unidentified hard spectrum source discovered by H.E.S.S. along the Galactic plane, the spectral index being $\Gamma$ = 1.6 with an exponential cut-off at about 12 TeV. While the source does not have a clear counterpart at longer wavelengths, the very hard spectrum emission at TeV energies implies that electrons or protons accelerated up to several hundreds of TeV are responsible for the emission. In the hadronic case, the VHE emission can be produced by runaway cosmic-rays colliding with the dense molecular clouds spatially coincident with the H.E.S.S. source.
0
1
0
0
0
0
Laser opacity in underdense preplasma of solid targets due to quantum electrodynamics effects
We investigate how next-generation laser pulses at 10 PW $-$ 200 PW interact with a solid target in the presence of a relativistically underdense preplasma produced by amplified spontaneous emission (ASE). Laser hole boring and relativistic transparency are strongly restrained due to the generation of electron-positron pairs and $\gamma$-ray photons via quantum electrodynamics (QED) processes. A pair plasma with a density above the initial preplasma density is formed, counteracting the electron-free channel produced by the hole boring. This pair-dominated plasma can block the laser transport and trigger an avalanche-like QED cascade, efficiently transferring the laser energy to photons. This renders a 1-$\rm\mu m$-scalelength, underdense preplasma completely opaque to laser pulses at this power level. The QED-induced opacity therefore sets much higher contrast requirements for such pulses in solid-target experiments than expected from classical plasma physics. Our simulations show, for example, that proton acceleration from the rear of a solid with a preplasma would be strongly impaired.
0
1
0
0
0
0
Consequentialist conditional cooperation in social dilemmas with imperfect information
Social dilemmas, where mutual cooperation can lead to high payoffs but participants face incentives to cheat, are ubiquitous in multi-agent interaction. We wish to construct agents that cooperate with pure cooperators, avoid exploitation by pure defectors, and incentivize cooperation from the rest. However, often the actions taken by a partner are (partially) unobserved or the consequences of individual actions are hard to predict. We show that in a large class of games good strategies can be constructed by conditioning one's behavior solely on outcomes (i.e., one's past rewards). We call this consequentialist conditional cooperation. We show how to construct such strategies using deep reinforcement learning techniques and demonstrate, both analytically and experimentally, that they are effective in social dilemmas beyond simple matrix games. We also show the limitations of relying purely on consequences and discuss the need for understanding both the consequences of and the intentions behind an action.
1
0
0
0
0
0
Casimir free energy of dielectric films: Classical limit, low-temperature behavior and control
The Casimir free energy of dielectric films, both free-standing in vacuum and deposited on metallic or dielectric plates, is investigated. It is shown that the values of the free energy depend considerably on whether the calculation approach used neglects or takes into account the dc conductivity of the film material. We demonstrate that there are material-dependent and universal classical limits in the former and latter cases, respectively. The analytic behavior of the Casimir free energy and entropy for a free-standing dielectric film at low temperature is found. According to our results, the Casimir entropy goes to zero when the temperature vanishes if the calculation approach with neglected dc conductivity of a film is employed. If the dc conductivity is taken into account, the Casimir entropy takes a positive value at zero temperature, depending on the parameters of a film, i.e., the Nernst heat theorem is violated. By considering the Casimir free energy of silica and sapphire films deposited on a Au plate in the framework of two calculation approaches, we argue that physically correct values are obtained by disregarding the role of dc conductivity. A comparison with the well known results for the configuration of two parallel plates is made. Finally, we compute the Casimir free energy of silica, sapphire and Ge films deposited on high-resistivity Si plates of different thicknesses and demonstrate that it can be positive, negative and equal to zero. Possible applications of the obtained results to thin films used in microelectronics are discussed.
0
1
0
0
0
0
Discriminant chronicles mining: Application to care pathways analytics
Pharmaco-epidemiology (PE) is the study of uses and effects of drugs in well-defined populations. As medico-administrative databases cover a large part of the population, they have become very interesting for carrying out PE studies. Such databases provide longitudinal care pathways in real conditions containing timestamped care events, especially drug deliveries. Temporal pattern mining becomes a strategic choice to gain valuable insights about drug uses. In this paper we propose DCM, a new discriminant temporal pattern mining algorithm. It extracts chronicle patterns that occur more in a studied population than in a control population. We present results on the identification of possible associations between hospitalizations for seizure and anti-epileptic drug switches in care pathways of epileptic patients.
1
0
0
0
0
0
Adversarial Symmetric Variational Autoencoder
A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: ($i$) from observed data fed through the encoder to yield codes, and ($ii$) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for the marginal log-likelihoods, fitting observed data and latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence of joint density functions from ($i$) and ($ii$), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets.
1
0
0
0
0
0
The singular locus of hypersurface sections containing a closed subscheme over finite fields
We prove that there exist hypersurfaces that contain a given closed subscheme $Z$ of the projective space over a finite field and intersect a given smooth scheme $X$ off of $Z$ smoothly, if the intersection $V = Z \cap X$ is smooth. Furthermore, we can give a bound on the dimension of the singular locus of the hypersurface section and prescribe finitely many local conditions on the hypersurface. This is an analogue of a Bertini theorem of Bloch over finite fields and is proved using Poonen's closed point sieve. We also show a similar theorem for the case where $V$ is not smooth.
0
0
1
0
0
0
A Statistical Perspective on Inverse and Inverse Regression Problems
Inverse problems, where in a broad sense the task is to learn from the noisy response about some unknown function, usually represented as the argument of some known functional form, have received wide attention in the general scientific disciplines. However, in mainstream statistics such an inverse problem paradigm does not seem to be as popular. In this article we provide a brief overview of such problems from a statistical, particularly Bayesian, perspective. We also compare and contrast the above class of problems with the perhaps more statistically familiar inverse regression problems, arguing that this class of problems contains the traditional class of inverse problems. In the course of our review we point out that the statistical literature is very scarce with respect to both inverse paradigms, and substantial research work is still necessary to develop the fields.
0
0
1
1
0
0
Model reduction for transport-dominated problems via online adaptive bases and adaptive sampling
This work presents a model reduction approach for problems with coherent structures that propagate over time such as convection-dominated flows and wave-type phenomena. Traditional model reduction methods have difficulties with these transport-dominated problems because propagating coherent structures typically introduce high-dimensional features that require high-dimensional approximation spaces. The approach proposed in this work exploits the locality in space and time of propagating coherent structures to derive efficient reduced models. First, full-model solutions are approximated locally in time via local reduced spaces that are adapted with basis updates during time stepping. The basis updates are derived from querying the full model at a few selected spatial coordinates. Second, the locality in space of the coherent structures is exploited via an adaptive sampling scheme that selects at which components to query the full model for computing the basis updates. Our analysis shows that, in probability, the more local the coherent structure is in space, the fewer full-model samples are required to adapt the reduced basis with the proposed adaptive sampling scheme. Numerical results on benchmark examples with interacting wave-type structures and time-varying transport speeds and on a model combustor of a single-element rocket engine demonstrate the wide applicability of our approach and the significant runtime speedups compared to full models and traditional reduced models.
1
0
0
0
0
0
Weighted parallel SGD for distributed unbalanced-workload training system
Stochastic gradient descent (SGD) is a popular stochastic optimization method in machine learning. Traditional parallel SGD algorithms, e.g., SimuParallel SGD, often require all nodes to have the same performance or to consume equal quantities of data. However, these requirements are difficult to satisfy when the parallel SGD algorithms run in a heterogeneous computing environment; low-performance nodes will exert a negative influence on the final result. In this paper, we propose an algorithm called weighted parallel SGD (WP-SGD). WP-SGD combines weighted model parameters from different nodes in the system to produce the final output. WP-SGD makes use of the reduction in standard deviation to compensate for the loss from the inconsistency in performance of nodes in the cluster, which means that WP-SGD does not require that all nodes consume equal quantities of data. We also analyze the theoretical feasibility of running two other parallel SGD algorithms combined with WP-SGD in a heterogeneous environment. The experimental results show that WP-SGD significantly outperforms the traditional parallel SGD algorithms on distributed training systems with an unbalanced workload.
1
0
0
1
0
0
Neural Lander: Stable Drone Landing Control using Learned Dynamics
Precise trajectory control near ground is difficult for multi-rotor drones, due to the complex ground effects caused by interactions between multi-rotor airflow and the environment. Conventional control methods often fail to properly account for these complex effects and fall short in accomplishing smooth landing. In this paper, we present a novel deep-learning-based robust nonlinear controller (Neural-Lander) that improves control performance of a quadrotor during landing. Our approach blends together a nominal dynamics model coupled with a Deep Neural Network (DNN) that learns the high-order interactions. We employ a novel application of spectral normalization to constrain the DNN to have bounded Lipschitz behavior. Leveraging this Lipschitz property, we design a nonlinear feedback linearization controller using the learned model and prove system stability with disturbance rejection. To the best of our knowledge, this is the first DNN-based nonlinear feedback controller with stability guarantees that can utilize arbitrarily large neural nets. Experimental results demonstrate that the proposed controller significantly outperforms a baseline linear proportional-derivative (PD) controller in both 1D and 3D landing cases. In particular, we show that compared to the PD controller, Neural-Lander can decrease error in z direction from 0.13m to zero, and mitigate average x and y drifts by 90% and 34% respectively, in 1D landing. Meanwhile, Neural-Lander can decrease z error from 0.12m to zero, in 3D landing. We also empirically show that the DNN generalizes well to new test inputs outside the training domain.
1
0
0
0
0
0
Finite-Time Stabilization of Longitudinal Control for Autonomous Vehicles via a Model-Free Approach
This communication presents a longitudinal model-free control approach for computing the wheel torque command to be applied on a vehicle. This setting enables us to overcome the problem of unknown vehicle parameters for generating a suitable control law. An important parameter in this control setting is made time-varying for ensuring finite-time stability. Several convincing computer simulations are displayed and discussed. Overshoots become therefore smaller. The driving comfort is increased and the robustness to time-delays is improved.
1
0
1
0
0
0
Review of Geraint F. Lewis and Luke A. Barnes, A Fortunate Universe: Life in a Finely Tuned Cosmos
This new book by cosmologists Geraint F. Lewis and Luke A. Barnes is another entry in the long list of cosmology-centered physics books intended for a large audience. While many such books aim at advancing a novel scientific theory, A Fortunate Universe has no such scientific pretense. Its goals are to assert that the universe is fine-tuned for life, to defend that this fact can reasonably motivate further scientific inquiry as to why it is so, and to show that the multiverse and intelligent design hypotheses are reasonable proposals to explain this fine-tuning. This book's potential contribution, therefore, lies in how convincingly and efficiently it can make that case.
0
1
0
0
0
0
Introducing SPAIN (SParse Audio INpainter)
A novel sparsity-based algorithm for audio inpainting is proposed by translating the SPADE algorithm by Kitić et al.---the state-of-the-art for audio declipping---into the task of audio inpainting. SPAIN (SParse Audio INpainter) comes in synthesis and analysis variants. Experiments show that both A-SPAIN and S-SPAIN outperform other sparsity-based inpainting algorithms and that A-SPAIN performs on a par with the state-of-the-art method based on linear prediction.
1
0
0
0
0
0
Palindromic Decompositions with Gaps and Errors
Identifying palindromes in sequences has been an interesting line of research in combinatorics on words and also in computational biology, after the discovery of the relation of palindromes in the DNA sequence with the HIV virus. Efficient algorithms for the factorization of sequences into palindromes and maximal palindromes have been devised in recent years. We extend these studies by allowing gaps in decompositions and errors in palindromes, and also imposing a lower bound to the length of acceptable palindromes. We first present an algorithm for obtaining a palindromic decomposition of a string of length n with the minimal total gap length in time O(n log n * g) and space O(n g), where g is the number of allowed gaps in the decomposition. We then consider a decomposition of the string in maximal \delta-palindromes (i.e. palindromes with \delta errors under the edit or Hamming distance) and g allowed gaps. We present an algorithm to obtain such a decomposition with the minimal total gap length in time O(n (g + \delta)) and space O(n g).
1
0
0
0
0
0
End-of-Use Core Triage in Extreme Scenarios Based on a Threshold Approach
Remanufacturing is a significant factor in securing sustainability through a circular economy. Sorting plays a significant role in remanufacturing pre-processing inspections. Its significance can increase when remanufacturing facilities encounter extreme situations, such as abnormally large core arrivals. Our main objectives in this work are to switch from a less efficient to a more efficient model, to characterize the extreme behavior of core arrivals in remanufacturing, and to apply the developed model to triage cores. Central tendency core flow models are not sufficient to handle extreme situations; however, complementary Extreme Value (EV) approaches have been shown to improve model efficiency. Extreme core flows to remanufacturing facilities are rare but still likely and can adversely affect remanufacturing business operations. In this investigation, extreme end-of-use core flow is modelled by a threshold approach using the Generalized Pareto Distribution (GPD). It is shown that GPD has better performance than its maxima-block GEV counterpart from practical and data efficiency perspectives. The model is validated on a synthesized big dataset, tested by the sophisticated statistical Anderson-Darling (AD) test, and is applied to a case of extreme flow to a valve shop in order to predict the probability of over-capacity arrivals, which is critical in remanufacturing business management. Finally, the GPD model combined with triage strategies is used to initiate investigations into the efficacy of different triage methods in remanufacturing operations.
0
0
0
1
0
0
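The peaks-over-threshold GPD model in the abstract above predicts the probability of over-capacity arrivals from a fitted threshold model. A minimal sketch of that tail-probability formula, assuming the standard parameterization with shape xi, scale sigma, threshold u, and empirical exceedance rate zeta_u (all names illustrative, fitted values would come from data):

```python
def gpd_exceedance_prob(x, u, sigma, xi, zeta_u):
    """P(X > x) for x > u under a peaks-over-threshold GPD model:
    P(X > x) = zeta_u * (1 + xi*(x - u)/sigma)**(-1/xi)
    for xi != 0, where zeta_u is the empirical P(X > u)."""
    if x <= u:
        raise ValueError("x must exceed the threshold u")
    z = 1.0 + xi * (x - u) / sigma
    if z <= 0:  # beyond the finite upper endpoint when xi < 0
        return 0.0
    return zeta_u * z ** (-1.0 / xi)
```

For instance, an over-capacity level could be plugged in as x to estimate how often arrivals exceed shop capacity.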
A temperature-dependent implicit-solvent model of polyethylene glycol in aqueous solution
A temperature (T)-dependent coarse-grained (CG) Hamiltonian of polyethylene glycol/oxide (PEG/PEO) in aqueous solution is reported for use in implicit-solvent material models over a wide temperature (i.e., solvent-quality) range. The T-dependent nonbonded CG interactions are derived from a combined "bottom-up" and "top-down" approach. The pair potentials calculated from atomistic replica-exchange molecular dynamics simulations in combination with the iterative Boltzmann inversion are post-refined by benchmarking against experimental data for the radius of gyration. For ease of handling and fully continuous transferability in T-space, the pair potentials are conveniently truncated and mapped to an analytic formula with three structural parameters expressed as explicit continuous functions of T. It is then demonstrated that this model, without further adjustments, successfully reproduces other experimentally known key thermodynamic properties of semi-dilute PEG solutions, such as the full equation of state (i.e., T-dependent osmotic pressure) for various chain lengths as well as their cloud-point (or collapse) temperature.
0
1
0
0
0
0
Directional Statistics and Filtering Using libDirectional
In this paper, we present libDirectional, a MATLAB library for directional statistics and directional estimation. It supports a variety of commonly used distributions on the unit circle, such as the von Mises, wrapped normal, and wrapped Cauchy distributions. Furthermore, various distributions on higher-dimensional manifolds such as the unit hypersphere and the hypertorus are available. Based on these distributions, several recursive filtering algorithms in libDirectional allow estimation on these manifolds. The functionality is implemented in a clear, well-documented, and object-oriented structure that is both easy to use and easy to extend.
1
0
0
1
0
0
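The libDirectional abstract above mentions the von Mises distribution as a basic circular distribution. As a language-neutral illustration of its density (a pure-Python sketch, not part of the MATLAB library itself; the Bessel series truncation is an assumption adequate for moderate concentrations):

```python
import math

def bessel_i0(x, terms=30):
    # modified Bessel function of the first kind, order 0, via its power series
    return sum((x / 2.0) ** (2 * k) / math.factorial(k) ** 2 for k in range(terms))

def vonmises_pdf(theta, mu, kappa):
    """Density of the von Mises distribution on the unit circle:
    f(theta) = exp(kappa * cos(theta - mu)) / (2*pi*I0(kappa)),
    with mean direction mu and concentration kappa."""
    return math.exp(kappa * math.cos(theta - mu)) / (2.0 * math.pi * bessel_i0(kappa))
```

At kappa = 0 the density reduces to the uniform distribution 1/(2*pi), and it is symmetric about the mean direction mu.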
A solution of the dark energy and its coincidence problem based on local antigravity sources without fine-tuning or new scales
A novel idea is proposed for a natural solution of the dark energy and its cosmic coincidence problem. The existence of local antigravity sources, associated with astrophysical matter configurations distributed throughout the universe, can lead to a recent cosmic acceleration effect. Various physical theories can be compatible with this idea, but here, in order to test our proposal, we focus on quantum-originated spherically symmetric metrics matched with the cosmological evolution through the simplest Swiss cheese model. In the context of asymptotically safe gravity, we have explained the observed amount of dark energy using Newton's constant, the galaxy or cluster length scales, and dimensionless order-one parameters predicted by the theory, without fine-tuning or extra unproven energy scales. The interior modified Schwarzschild-de Sitter metric allows us to interpret this result, approximately, as saying that the standard cosmological constant is a composite quantity made of the above parameters, instead of a fundamental one.
0
1
0
0
0
0
Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples
Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of mini-batch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation.
0
0
0
1
0
0
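The Active Bias abstract above re-weights training samples by the variance of their predicted correct-class probability across SGD iterations. A minimal bookkeeping sketch of that idea (the class name, the smoothing constant, and the normalized-standard-deviation weighting are illustrative assumptions, not the paper's exact formula):

```python
from collections import defaultdict

class VarianceEmphasis:
    """Tracks each sample's predicted probability of its correct class across
    mini-batch SGD epochs and emphasizes high-variance (uncertain) samples.
    Illustrative sketch: the weight here is a smoothed sample standard
    deviation, not the exact estimator used in the paper."""

    def __init__(self, eps=0.05):
        self.history = defaultdict(list)
        self.eps = eps  # smoothing so no sample gets exactly zero weight

    def record(self, sample_id, prob_correct):
        # call once per epoch with the model's probability for the true class
        self.history[sample_id].append(prob_correct)

    def weight(self, sample_id):
        h = self.history[sample_id]
        if len(h) < 2:
            return 1.0  # no variance estimate yet: neutral weight
        mean = sum(h) / len(h)
        var = sum((p - mean) ** 2 for p in h) / (len(h) - 1)
        return self.eps + var ** 0.5
```

A sample the network consistently classifies (low variance) is de-emphasized, while a sample whose prediction fluctuates near the decision boundary receives a larger weight in the loss.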
Optical signature of Weyl electronic structures in tantalum pnictides Ta$Pn$ ($Pn=$ P, As)
To investigate the electronic structure of Weyl semimetals Ta$Pn$ ($Pn=$P, As), optical conductivity [$\sigma(\omega)$] spectra are measured over a wide range of photon energies and temperatures, and these measured values are compared with band calculations. Two significant structures can be observed: a bending structure at $\hbar\omega\sim$85 meV in TaAs, and peaks at $\hbar\omega\sim$ 50 meV (TaP) and $\sim$30 meV (TaAs). The bending structure can be explained by the interband transition between saddle points connecting a set of $W_2$ Weyl points. The temperature dependence of the peak intensity can be fitted by assuming the interband transition between saddle points connecting a set of $W_1$ Weyl points. Owing to the different temperature dependence of the Drude weight in both materials, it is found that the Weyl points of TaAs are located near the Fermi level, whereas those of TaP are further away.
0
1
0
0
0
0
A study of cyber security in hospitality industry- threats and countermeasures: case study in Reno, Nevada
The purpose of this study is to analyze the cyber security and security practices of electronic information and network systems, network threats, and techniques to prevent cyber attacks in hotels. The aim is to help information technology directors and chief information officers (CIOs) advance policy for the security of electronic information in hotels, and to suggest techniques and tools for securing computer networks. This research is entirely qualitative; the case study and interviews were conducted in 5 randomly selected hotels in Reno, Nevada, United States of America. Interviews were conducted with 50 hotel guests, 10 front desk employees, 3 IT managers, and 2 assistant general managers. The results show that the hotels' cyber security is very weak and that hotels are highly vulnerable in this regard; finally, the implications and contributions of the study are discussed.
1
0
0
0
0
0
Weak Fraisse categories
We develop the theory of weak Fraisse categories, where the crucial concept is the weak amalgamation property, discovered relatively recently in model theory. We show that, in a suitable framework, every weak Fraisse category has a unique limit, a special object in a bigger category, characterized by a certain variant of injectivity. This significantly extends the known theory of Fraisse limits.
0
0
1
0
0
0
Multiple Stakeholders in Music Recommender Systems
Music recommendation services collectively spin billions of songs for millions of listeners on a daily basis. Users can typically listen to a variety of songs tailored to their personal tastes and preferences. Music is not the only type of content encountered in these services, however. Advertisements are generally interspersed throughout the music stream to generate revenue for the business. Additional content may include artist messaging, ticketing, sports, news and weather. In this paper, we discuss issues that arise when multiple content providers are stakeholders in the recommendation process. These stakeholders each have their own objectives and must work in concert to sustain a healthy music recommendation service.
1
0
0
0
0
0
Large time behavior of solution to nonlinear Dirac equation in $1+1$ dimensions
This paper studies the large time behavior of solutions to a class of nonlinear massless Dirac equations in $R^{1+1}$. It is shown that the solution tends to a travelling wave solution as time tends to infinity.
0
0
1
0
0
0
Compositions of Functions and Permutations Specified by Minimal Reaction Systems
This paper studies mathematical properties of reaction systems, which were introduced by Ehrenfeucht and Rozenberg as computational models inspired by biochemical reactions in living cells. In particular, we continue the study, initiated by Salomaa, of the generative power of functions specified by minimal reaction systems under composition. Allowing degenerate reaction systems, the functions specified by minimal reaction systems over a quaternary alphabet that are permutations generate the alternating group on the power set of the background set.
1
0
0
0
0
0
The center problem for the Lotka reactions with generalized mass-action kinetics
Chemical reaction networks with generalized mass-action kinetics lead to power-law dynamical systems. As a simple example, we consider the Lotka reactions and the resulting planar ODE. We characterize the parameters (positive coefficients and real exponents) for which the unique positive equilibrium is a center.
0
0
1
0
0
0
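The Lotka abstract above concerns a planar ODE with power-law (generalized mass-action) rates. A minimal sketch of that system, assuming the three Lotka reactions with rates v_i = k_i * x**a_i * y**b_i; the default exponents shown recover the classical Lotka-Volterra case, whose positive equilibrium at (1, 1) is a center:

```python
def lotka_rhs(x, y, k=(1.0, 1.0, 1.0), e=((1, 0), (1, 1), (0, 1))):
    """Planar ODE from the Lotka reactions with power-law rates
    v_i = k[i] * x**e[i][0] * y**e[i][1] (real exponents allowed).
    Returns (dx/dt, dy/dt) = (v0 - v1, v1 - v2)."""
    v = [k[i] * x ** e[i][0] * y ** e[i][1] for i in range(3)]
    return v[0] - v[1], v[1] - v[2]

def rk4_orbit(x, y, h=0.01, steps=2000):
    # fourth-order Runge-Kutta integration of the planar system
    for _ in range(steps):
        k1 = lotka_rhs(x, y)
        k2 = lotka_rhs(x + h * k1[0] / 2, y + h * k1[1] / 2)
        k3 = lotka_rhs(x + h * k2[0] / 2, y + h * k2[1] / 2)
        k4 = lotka_rhs(x + h * k3[0], y + h * k3[1])
        x += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        y += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return x, y
```

With the classical exponents, trajectories started off the equilibrium stay on closed orbits in the positive quadrant; the paper characterizes for which generalized exponents this center behavior persists.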
Recovery of Missing Samples Using Sparse Approximation via a Convex Similarity Measure
In this paper, we study the missing sample recovery problem using methods based on sparse approximation. In this regard, we investigate the algorithms used for solving the inverse problem associated with the restoration of missing samples of an image signal. This problem is also known as inpainting in the context of image processing, and for this purpose we suggest an iterative sparse recovery algorithm based on constrained $l_1$-norm minimization with a new fidelity metric. The proposed metric, called the Convex SIMilarity (CSIM) index, is a simplified version of the Structural SIMilarity (SSIM) index that is convex and error-sensitive. The optimization problem incorporating this criterion is then solved via the Alternating Direction Method of Multipliers (ADMM). Simulation results show the efficiency of the proposed method for missing sample recovery of 1D patch vectors and inpainting of 2D image signals.
1
0
0
1
0
0
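The abstract above solves an $l_1$-regularized missing-sample problem with an ADMM-type splitting. As a stripped-down sketch of the mechanics (plain ISTA with a squared-error fidelity term rather than the paper's CSIM index, and the $l_1$ penalty in the canonical basis instead of a sparsifying transform, purely to keep the example short):

```python
import math

def soft_threshold(v, t):
    # proximal operator of t*||.||_1 (shrinkage), the core step of ISTA/ADMM
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in v]

def ista_inpaint(y, mask, lam=0.1, step=1.0, iters=200):
    """Recover x from partial observations y (mask[i] = 1 where observed) by
    minimizing 0.5*||mask*(x - y)||^2 + lam*||x||_1 with ISTA iterations.
    Illustrative only: real image inpainting would penalize coefficients in
    a sparsifying transform and use a convex similarity fidelity term."""
    n = len(y)
    x = [0.0] * n
    for _ in range(iters):
        grad = [mask[i] * (x[i] - y[i]) for i in range(n)]  # gradient of fidelity
        x = soft_threshold([x[i] - step * grad[i] for i in range(n)], step * lam)
    return x
```

Observed entries converge to their shrunken observations y_i - lam*sign(y_i), while unobserved entries take the sparsest (zero) value.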
Skin cancer reorganization and classification with deep neural network
As one kind of skin cancer, melanoma is very dangerous. A dermoscopy-based early detection and recognition strategy is critical for melanoma therapy. However, diagnostic accuracy depends heavily on well-trained dermatologists. In order to solve this problem, many efforts focus on developing automatic image analysis systems. Here we report a novel strategy based on deep learning techniques that achieves very high skin lesion segmentation and melanoma diagnosis accuracy: 1) we build a segmentation neural network (skin_segnn), which achieves very high lesion boundary detection accuracy; 2) we build another very deep neural network based on the Google Inception v3 network (skin_recnn) and its well-trained weights. The newly designed transfer-learning-based deep neural network skin_inceptions_v3_nn helps to achieve a high prediction accuracy.
1
0
0
0
0
0
Rescaled extrapolation for vector-valued functions
We extend Rubio de Francia's extrapolation theorem for functions valued in UMD Banach function spaces, leading to short proofs of some new and known results. In particular we prove Littlewood-Paley-Rubio de Francia-type estimates and boundedness of variational Carleson operators for Banach function spaces with UMD concavifications.
0
0
1
0
0
0