Columns: text (string lengths 57 to 2.88k), labels (sequence lengths 6 to 6)
Title: Holomorphic differentials, thermostats and Anosov flows, Abstract: We introduce a new family of thermostat flows on the unit tangent bundle of an oriented Riemannian $2$-manifold. Suitably reparametrised, these flows include the geodesic flow of metrics of negative Gauss curvature and the geodesic flow induced by the Hilbert metric on the quotient surface of divisible convex sets. We show that the family of flows can be parametrised in terms of certain weighted holomorphic differentials and investigate their properties. In particular, we prove that they admit a dominated splitting and we identify special cases in which the flows are Anosov. In the latter case, we study when they admit an invariant measure in the Lebesgue class and the regularity of the weak foliations.
[ 0, 0, 1, 0, 0, 0 ]
Title: Near-field coupling of gold plasmonic antennas for sub-100 nm magneto-thermal microscopy, Abstract: The development of spintronic technology with increasingly dense, high-speed, and complex devices will be accelerated by accessible microscopy techniques capable of probing magnetic phenomena on picosecond time scales and at deeply sub-micron length scales. A recently developed time-resolved magneto-thermal microscope provides a path towards this goal if it is augmented with a picosecond, nanoscale heat source. We theoretically study adiabatic nanofocusing and near-field heat induction using conical gold plasmonic antennas to generate sub-100 nm thermal gradients for time-resolved magneto-thermal imaging. Finite element calculations of antenna-sample interactions reveal focused electromagnetic loss profiles that are either peaked directly under the antenna or are annular, depending on the sample's conductivity, the antenna's apex radius, and the tip-sample separation. We find that the thermal gradient is confined to 40 nm to 60 nm full width at half maximum for realistic ranges of sample conductivity and apex radius. To mitigate this variation, which is undesirable for microscopy, we investigate the use of a platinum capping layer on top of the sample as a thermal transduction layer to produce heat uniformly across different sample materials. After determining the optimal capping layer thickness, we simulate the evolution of the thermal gradient in the underlying sample layer, and find that the temporal width is below 10 ps. These results lay a theoretical foundation for nanoscale, time-resolved magneto-thermal imaging.
[ 0, 1, 0, 0, 0, 0 ]
Title: A reproducible effect size is more useful than an irreproducible hypothesis test to analyze high throughput sequencing datasets, Abstract: Motivation: P values derived from the null hypothesis significance testing framework are strongly affected by sample size, and are known to be irreproducible in underpowered studies, yet no suitable replacement has been proposed. Results: Here we present implementations of non-parametric standardized median effect size estimates, dNEF, for high-throughput sequencing datasets. Case studies are shown for transcriptome and tag-sequencing datasets. The dNEF measure is shown to be more reproducible and robust than P values and requires sample sizes as small as 3 to reproducibly identify differentially abundant features. Availability: Source code and binaries freely available at: this https URL, omicplotR, and this https URL.
[ 0, 0, 0, 0, 1, 0 ]
Title: Laplace Beltrami operator in the Baran metric and pluripotential equilibrium measure: the ball, the simplex and the sphere, Abstract: The Baran metric $\delta_E$ is a Finsler metric on the interior of $E\subset \R^n$ arising from Pluripotential Theory. We consider the few instances, namely $E$ being the ball, the simplex, or the sphere, where $\delta_E$ is known to be Riemannian and we prove that the eigenfunctions of the associated Laplace Beltrami operator (with no boundary conditions) are the orthogonal polynomials with respect to the pluripotential equilibrium measure $\mu_E$ of $E.$ We conjecture that this may hold in a wider generality. The considered differential operators have been already introduced in the framework of orthogonal polynomials and studied in connection with certain symmetry groups. In this work instead we highlight the relationships between orthogonal polynomials with respect to $\mu_E$ and the Riemannian structure naturally arising from Pluripotential Theory.
[ 0, 0, 1, 0, 0, 0 ]
Title: Magnetic polarons in a nonequilibrium polariton condensate, Abstract: We consider a condensate of exciton-polaritons in a diluted magnetic semiconductor microcavity. Such a system may exhibit magnetic self-trapping in the case of sufficiently strong coupling between polaritons and magnetic ions embedded in the semiconductor. We investigate the effect of the nonequilibrium nature of exciton-polaritons on the physics of the resulting self-trapped magnetic polarons. We find that multiple polarons can exist at the same time, and derive a critical condition for self-trapping which is different from the one predicted previously in the equilibrium case. Using the Bogoliubov-de Gennes approximation, we calculate the excitation spectrum and provide a physical explanation in terms of the effective magnetic attraction between polaritons, mediated by the ion subsystem.
[ 0, 1, 0, 0, 0, 0 ]
Title: Oracle Importance Sampling for Stochastic Simulation Models, Abstract: We consider the problem of estimating an expected outcome from a stochastic simulation model using importance sampling. We propose a two-stage procedure that involves a regression stage and a sampling stage to construct our estimator. We introduce a parametric and a nonparametric regression estimator in the first stage and study how the allocation between the two stages affects the performance of the final estimator. We derive the oracle property for both approaches. We analyze the empirical performances of our approaches using two simulated datasets and a case study on wind turbine reliability evaluation.
[ 0, 0, 0, 1, 0, 0 ]
Title: The Generalized Cross Validation Filter, Abstract: Generalized cross validation (GCV) is one of the most important approaches used to estimate parameters in the context of inverse problems and regularization techniques. A notable example is the determination of the smoothness parameter in splines. When the data are generated by a state space model, like in the spline case, efficient algorithms are available to evaluate the GCV score with complexity that scales linearly in the data set size. However, these methods are not amenable to on-line applications since they rely on forward and backward recursions. Hence, if the objective has been evaluated at time $t-1$ and new data arrive at time $t$, then $O(t)$ operations are needed to update the GCV score. In this paper we instead show that the update cost is $O(1)$, thus paving the way to the on-line use of GCV. This result is obtained by deriving the novel GCV filter which extends the classical Kalman filter equations to efficiently propagate the GCV score over time. We also illustrate applications of the new filter in the context of state estimation and on-line regularized linear system identification.
[ 1, 0, 0, 1, 0, 0 ]
Title: Of the People: Voting Is More Effective with Representative Candidates, Abstract: In light of the classic impossibility results of Arrow and Gibbard and Satterthwaite regarding voting with ordinal rules, there has been recent interest in characterizing how well common voting rules approximate the social optimum. In order to quantify the quality of approximation, it is natural to consider the candidates and voters as embedded within a common metric space, and to ask how much further the chosen candidate is from the population as compared to the socially optimal one. We use this metric preference model to explore a fundamental and timely question: does the social welfare of a population improve when candidates are representative of the population? If so, then by how much, and how does the answer depend on the complexity of the metric space? We restrict attention to the most fundamental and common social choice setting: a population of voters, two independently drawn candidates, and a majority rule election. When candidates are not representative of the population, it is known that the candidate selected by the majority rule can be thrice as far from the population as the socially optimal one. We examine how this ratio improves when candidates are drawn independently from the population of voters. Our results are two-fold: When the metric is a line, the ratio improves from $3$ to $4-2\sqrt{2}$, roughly $1.1716$; this bound is tight. When the metric is arbitrary, we show a lower bound of $1.5$ and a constant upper bound strictly better than $2$ on the approximation ratio of the majority rule. The positive result depends in part on the assumption that candidates are independent and identically distributed. However, we show that independence alone is not enough to achieve the upper bound: even when candidates are drawn independently, if the population of candidates can be different from the voters, then an upper bound of $2$ on the approximation is tight.
[ 1, 0, 0, 0, 0, 0 ]
Title: Hidden Community Detection in Social Networks, Abstract: We introduce a new paradigm that is important for community detection in the realm of network analysis. Networks contain a set of strong, dominant communities, which interfere with the detection of weak, natural community structure. When most of the members of the weak communities also belong to stronger communities, they are extremely hard to uncover. We call the weak communities the hidden community structure. We present a novel approach called HICODE (HIdden COmmunity DEtection) that identifies the hidden community structure as well as the dominant community structure. By weakening the strength of the dominant structure, one can uncover the hidden structure beneath. Likewise, by reducing the strength of the hidden structure, one can more accurately identify the dominant structure. In this way, HICODE tackles both tasks simultaneously. Extensive experiments on real-world networks demonstrate that HICODE outperforms several state-of-the-art community detection methods in uncovering both the dominant and the hidden structure. In the Facebook university social networks, we find multiple non-redundant sets of communities that are strongly associated with residential hall, year of registration or career position of the faculties or students, while the state-of-the-art algorithms mainly locate the dominant ground truth category. Due to the difficulty of labeling all ground truth communities in real-world datasets, HICODE provides a promising approach to pinpoint the existing latent communities and uncover communities for which there is no ground truth. Finding this unknown structure is an extremely important community detection problem.
[ 1, 1, 0, 1, 0, 0 ]
Title: Two-photon exchange correction to the hyperfine splitting in muonic hydrogen, Abstract: We reevaluate the Zemach, recoil and polarizability corrections to the hyperfine splitting in muonic hydrogen, expressing them through the low-energy proton structure constants, and obtain precise values of the Zemach radius and the two-photon exchange (TPE) contribution. The 105 ppm uncertainty of the TPE correction to S energy levels in muonic hydrogen exceeds the ppm accuracy level of the forthcoming 1S hyperfine splitting measurements at PSI, J-PARC and RIKEN-RAL.
[ 0, 1, 0, 0, 0, 0 ]
Title: Generation and analysis of lamplighter programs, Abstract: We consider a programming language based on the lamplighter group that uses only composition and iteration as control structures. We derive generating functions and counting formulas for this language and special subsets of it, establishing lower and upper bounds on the growth rate of semantically distinct programs. Finally, we show how to sample random programs and analyze the distribution of runtimes induced by such sampling.
[ 1, 0, 1, 0, 0, 0 ]
Title: Preduals for spaces of operators involving Hilbert spaces and trace-class operators, Abstract: Continuing the study of preduals of spaces $\mathcal{L}(H,Y)$ of bounded, linear maps, we consider the situation that $H$ is a Hilbert space. We establish a natural correspondence between isometric preduals of $\mathcal{L}(H,Y)$ and isometric preduals of $Y$. The main ingredient is a Tomiyama-type result which shows that every contractive projection that complements $\mathcal{L}(H,Y)$ in its bidual is automatically a right $\mathcal{L}(H)$-module map. As an application, we show that isometric preduals of $\mathcal{L}(\mathcal{S}_1)$, the algebra of operators on the space of trace-class operators, correspond to isometric preduals of $\mathcal{S}_1$ itself (and there is an abundance of them). On the other hand, the compact operators are the unique predual of $\mathcal{S}_1$ making its multiplication separately weak* continuous.
[ 0, 0, 1, 0, 0, 0 ]
Title: Interactions between Health Searchers and Search Engines, Abstract: The Web is an important resource for understanding and diagnosing medical conditions. Based on exposure to online content, people may develop undue health concerns, believing that common and benign symptoms are explained by serious illnesses. In this paper, we investigate potential strategies to mine queries and searcher histories for clues that could help search engines choose the most appropriate information to present in response to exploratory medical queries. To do this, we performed a longitudinal study of health search behavior using the logs of a popular search engine. We found that query variations which might appear innocuous (e.g. "bad headache" vs "severe headache") may hold valuable information about the searcher which could be used by search engines to improve performance. Furthermore, we investigated how medically concerned users respond differently to search engine result pages (SERPs) and found that their disposition for clicking on concerning pages is pronounced, potentially leading to a self-reinforcement of concern. Finally, we studied to what degree variations in the SERP impact future search and real-world health-seeking behavior and obtained some surprising results (e.g., viewing concerning pages may lead to a short-term reduction of real-world health seeking).
[ 1, 0, 0, 0, 0, 0 ]
Title: Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices, Abstract: In a previous work we have detailed the requirements to obtain a maximal performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices for deep learning. Here we extend this concept of Resistive Processing Unit (RPU) devices towards convolutional neural networks (CNNs). We show how to map the convolutional layers to RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that the combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures, and therefore enable the applicability of the RPU approach to a large class of neural network architectures.
[ 1, 0, 0, 1, 0, 0 ]
Title: Absolute versus convective helical magnetorotational instabilities in Taylor-Couette flows, Abstract: We study magnetic Taylor-Couette flow in a system having nondimensional radii $r_i=1$ and $r_o=2$, and periodic in the axial direction with wavelengths $h\ge100$. The rotation ratio of the inner and outer cylinders is adjusted to be slightly in the Rayleigh-stable regime, where magnetic fields are required to destabilize the flow, in this case triggering the axisymmetric helical magnetorotational instability (HMRI). Two choices of imposed magnetic field are considered, both having the same azimuthal component $B_\phi=r^{-1}$, but differing axial components. The first choice has $B_z=0.1$, and yields the familiar HMRI, consisting of unidirectionally traveling waves. The second choice has $B_z\approx0.1\sin(2\pi z/h)$, and yields HMRI waves that travel in opposite directions depending on the sign of $B_z$. The first configuration corresponds to a convective instability, the second to an absolute instability. The two variants behave very similarly regarding both linear onset and nonlinear equilibration.
[ 0, 1, 0, 0, 0, 0 ]
Title: Symmetries and multipeakon solutions for the modified two-component Camassa-Holm system, Abstract: Compared with the two-component Camassa-Holm system, the modified two-component Camassa-Holm system introduces a regularized density which makes possible the existence of solutions of lower regularity, and in particular of multipeakon solutions. In this paper, we derive a new pointwise invariant for the modified two-component Camassa-Holm system. The derivation of the invariant uses directly the symmetry of the system, following the classical argument of Noether's theorem. The existence of the multipeakon solutions can be directly inferred from this pointwise invariant. This derivation shows the strong connection between symmetries and the existence of special solutions. The observation also holds for the scalar Camassa-Holm equation and, for comparison, we have also included the corresponding derivation. Finally, we compute explicitly the solutions obtained for the peakon-antipeakon case. We observe the existence of a periodic solution which has not been reported in the literature previously. This case shows the attractive effect that the introduction of an elastic potential can have on the solutions.
[ 0, 0, 1, 0, 0, 0 ]
Title: A pliable lasso for the Cox model, Abstract: We introduce a pliable lasso method for estimation of interaction effects in the Cox proportional hazards model framework. The pliable lasso is a linear model that includes interactions between covariates X and a set of modifying variables Z and assumes sparsity of the main effects and interaction effects. The hierarchical penalty excludes interaction effects when the corresponding main effects are zero: this avoids overfitting and an explosion of model complexity. We extend this method to the Cox model for survival data, incorporating modifiers that are either fixed or varying in time into the partial likelihood. For example, this allows modeling of survival times that differ based on interactions of genes with age, gender, or other demographic information. The optimization is done by blockwise coordinate descent on a second order approximation of the objective.
[ 0, 0, 0, 1, 0, 0 ]
Title: Khintchine's Theorem with random fractions, Abstract: We prove versions of Khintchine's Theorem (1924) for approximations by rational numbers whose numerators lie in randomly chosen sets of integers, and we explore the extent to which the monotonicity assumption can be removed. Roughly speaking, we show that if the number of available fractions for each denominator grows too fast, then the monotonicity assumption cannot be removed. There are questions in this random setting which may be seen as cognates of the Duffin-Schaeffer Conjecture (1941), and are likely to be more accessible. We point out that the direct random analogue of the Duffin-Schaeffer Conjecture, like the Duffin-Schaeffer Conjecture itself, implies Catlin's Conjecture (1976). It is not obvious whether the Duffin-Schaeffer Conjecture and its random version imply one another, and it is not known whether Catlin's Conjecture implies either of them. The question of whether Catlin implies Duffin-Schaeffer has been unsettled for decades.
[ 0, 0, 1, 0, 0, 0 ]
Title: A Method of Generating Random Weights and Biases in Feedforward Neural Networks with Random Hidden Nodes, Abstract: Neural networks with random hidden nodes have gained increasing interest from researchers and in practical applications. This is due to their unique features such as very fast training and the universal approximation property. In these networks the weights and biases of hidden nodes determining the nonlinear feature mapping are set randomly and are not learned. Appropriate selection of the intervals from which weights and biases are selected is extremely important. This topic has not yet been sufficiently explored in the literature. In this work a method of generating random weights and biases is proposed. This method generates the parameters of the hidden nodes in such a way that nonlinear fragments of the activation functions are located in the input space regions with data and can be used to construct the surface approximating a nonlinear target function. The weights and biases are dependent on the input data range and the activation function type. The proposed method allows us to control the generalization degree of the model. These all lead to an improvement in the approximation performance of the network. Several experiments show very promising results.
[ 1, 0, 0, 1, 0, 0 ]
Title: Representation of big data by dimension reduction, Abstract: Suppose the data consist of a set $S$ of points $x_j, 1 \leq j \leq J$, distributed in a bounded domain $D \subset \mathbb{R}^N$, where $N$ and $J$ are large numbers. In this paper an algorithm is proposed for checking whether there exists a manifold $\mathbb{M}$ of low dimension near which many of the points of $S$ lie and finding such $\mathbb{M}$ if it exists. There are many dimension reduction algorithms, both linear and non-linear. Our algorithm is simple to implement and has some advantages compared with the known algorithms. If there is a manifold of low dimension near which most of the data points lie, the proposed algorithm will find it. Some numerical results are presented illustrating the algorithm and analyzing its performance compared to the classical PCA (principal component analysis) and Isomap.
[ 1, 0, 0, 1, 0, 0 ]
Title: Introduction to Plasma Physics, Abstract: These notes are intended to provide a brief primer in plasma physics, introducing common definitions, basic properties, and typical processes found in plasmas. These concepts are inherent in contemporary plasma-based accelerator schemes, and thus provide a foundation for the more advanced expositions that follow in this volume. No prior knowledge of plasma physics is required, but the reader is assumed to be familiar with basic electrodynamics and fluid mechanics.
[ 0, 1, 0, 0, 0, 0 ]
Title: Theoretical calculation of the fine-structure constant and the permittivity of the vacuum, Abstract: Light traveling through the vacuum interacts with virtual particles similarly to the way that light traveling through a dielectric interacts with ordinary matter. And just as the permittivity of a dielectric can be calculated, the permittivity $\epsilon_0$ of the vacuum can be calculated, yielding an equation for the fine-structure constant $\alpha$. The most important contributions to the value of $\alpha$ arise from interactions in the vacuum of photons with virtual, bound states of charged lepton-antilepton pairs. Considering only these contributions, the fully screened $\alpha \cong 1/(8^2\sqrt{3\pi/2}) \cong 1/139$.
[ 0, 1, 0, 0, 0, 0 ]
Title: Calibrated Projection in MATLAB: Users' Manual, Abstract: We present the calibrated-projection MATLAB package implementing the method to construct confidence intervals proposed by Kaido, Molinari and Stoye (2017). This manual provides details on how to use the package for inference on projections of partially identified parameters. It also explains how to use the MATLAB functions we developed to compute confidence intervals on solutions of nonlinear optimization problems with estimated constraints.
[ 0, 0, 0, 1, 0, 0 ]
Title: Atomic Clock Measurements of Quantum Scattering Phase Shifts Spanning Feshbach Resonances at Ultralow Fields, Abstract: We use an atomic fountain clock to measure quantum scattering phase shifts precisely through a series of narrow, low-field Feshbach resonances at average collision energies below $1\,\mu$K. Our low spread in collision energy yields phase variations of order $\pm \pi/2$ for target atoms in several $F,m_F$ states. We compare them to a theoretical model and establish the accuracy of the measurements and the theoretical uncertainties from the fitted potential. We find overall excellent agreement, with small statistically significant differences that remain unexplained.
[ 0, 1, 0, 0, 0, 0 ]
Title: Spin Distribution of Primordial Black Holes, Abstract: We estimate the spin distribution of primordial black holes based on the recent study of the critical phenomena in the gravitational collapse of a rotating radiation fluid. We find that primordial black holes are mostly slowly rotating.
[ 0, 1, 0, 0, 0, 0 ]
Title: Automated flow for compressing convolution neural networks for efficient edge-computation with FPGA, Abstract: Deep convolutional neural network (CNN) based solutions are the current state-of-the-art for computer vision tasks. Due to the large size of these models, they are typically run on clusters of CPUs or GPUs. However, power requirements and cost budgets can be a major hindrance in adoption of CNN for IoT applications. Recent research highlights that CNN contain significant redundancy in their structure and can be quantized to lower bit-width parameters and activations, while maintaining acceptable accuracy. Low bit-width and especially single bit-width (binary) CNN are particularly suitable for mobile applications based on FPGA implementation, due to the bitwise logic operations involved in binarized CNN. Moreover, the transition to lower bit-widths opens new avenues for performance optimizations and model improvement. In this paper, we present an automatic flow from trained TensorFlow models to FPGA system on chip implementation of binarized CNN. This flow involves quantization of model parameters and activations, generation of network and model in embedded-C, followed by automatic generation of the FPGA accelerator for binary convolutions. The automated flow is demonstrated through implementation of binarized "YOLOV2" on the low cost, low power Cyclone-V FPGA device. Experiments on object detection using binarized YOLOV2 demonstrate significant performance benefit in terms of model size and inference speed on FPGA as compared to CPU and mobile CPU platforms. Furthermore, the entire automated flow from trained models to FPGA synthesis can be completed within one hour.
[ 1, 0, 0, 0, 0, 0 ]
Title: Foundation for a series of efficient simulation algorithms, Abstract: Computing the coarsest simulation preorder included in an initial preorder is used to reduce the resources needed to analyze a given transition system. This technique is applied to many models like Kripke structures, labeled graphs, labeled transition systems or even word and tree automata. Let (Q, $\rightarrow$) be a given transition system and Rinit be an initial preorder over Q. Until now, algorithms to compute Rsim, the coarsest simulation included in Rinit, are either memory efficient or time efficient but not both. In this paper we propose the foundation for a series of efficient simulation algorithms with the introduction of the notion of maximal transitions and the notion of stability of a preorder with respect to a coarser one. As an illustration we solve an open problem by providing the first algorithm with the best published time complexity, O(|Psim|.|$\rightarrow$|), and a bit space complexity in O(|Psim|^2.log(|Psim|) + |Q|.log(|Q|)), with Psim the partition induced by Rsim.
[ 1, 0, 0, 0, 0, 0 ]
Title: A Review of Macroscopic Motion in Thermodynamic Equilibrium, Abstract: A principle on the macroscopic motion of systems in thermodynamic equilibrium, rarely discussed in texts, is reviewed: very small but still macroscopic parts of a fully isolated system in thermal equilibrium move as if they were points of a rigid body, with macroscopic energy being dissipated to increase internal energy, and entropy increasing along with it. The principle appears particularly important in space physics, where dissipation involves the long-range fields of Electromagnetism and Gravitation rather than short-range contact forces. It is shown how new physics, Special Relativity as regards Electromagnetism, and first Newtonian theory then General Relativity as regards Gravitation, determines the different dissipative processes involved in the approach to that equilibrium.
[ 0, 1, 0, 0, 0, 0 ]
Title: Lord Kelvin's method of images approach to the Rotenberg model and its asymptotics, Abstract: We study a mathematical model of cell population dynamics proposed by M. Rotenberg and investigated by M. Boulanouar. Here, a cell is characterized by its maturity and speed of maturation. The growth of cell populations is described by a partial differential equation with a boundary condition. In the first part of the paper we exploit the semigroup theory approach and apply Lord Kelvin's method of images in order to give a new proof that the model is well posed. Next, we use a semi-explicit formula for the semigroup related to the model, obtained by the method of images, in order to give growth estimates for the semigroup. The main part of the paper is devoted to the asymptotic behaviour of the semigroup. We formulate conditions for the asymptotic stability of the semigroup in the case in which the average number of viable daughters per mitosis equals one. To this end we use methods developed by K. Pichór and R. Rudnicki.
[ 0, 0, 1, 0, 0, 0 ]
Title: Study of the Magnetizing Relationship of the Kickers for CSNS, Abstract: The extraction system of CSNS mainly consists of two kinds of magnets: eight kickers and one lambertson magnet. In this paper, firstly, the magnetic test results of the eight kickers are introduced, and then the field uniformity and magnetizing relationship of the kickers are given. Secondly, for the beam commissioning in the future, in order to obtain a more accurate magnetizing relationship, a new method to measure the magnetizing coefficients of the kickers with the real extraction beam is given, and the corresponding data analysis is also discussed.
[ 0, 1, 0, 0, 0, 0 ]
Title: Episodic memory for continual model learning, Abstract: Both the human brain and artificial learning agents operating in real-world or comparably complex environments are faced with the challenge of online model selection. In principle this challenge can be overcome: hierarchical Bayesian inference provides a principled method for model selection and it converges on the same posterior for both off-line (i.e. batch) and online learning. However, maintaining a parameter posterior for each model in parallel has in general an even higher memory cost than storing the entire data set and is consequently clearly unfeasible. Alternatively, maintaining only a limited set of models in memory could limit memory requirements. However, sufficient statistics for one model will usually be insufficient for fitting a different kind of model, meaning that the agent loses information with each model change. We propose that episodic memory can circumvent the challenge of limited memory-capacity online model selection by retaining a selected subset of data points. We design a method to compute the quantities necessary for model selection even when the data is discarded and only statistics of one (or few) learnt models are available. We demonstrate on a simple model that a limited-sized episodic memory buffer, when the content is optimised to retain data with statistics not matching the current representation, can resolve the fundamental challenge of online model selection.
[ 1, 0, 0, 1, 0, 0 ]
Title: Upper-Bounding the Regularization Constant for Convex Sparse Signal Reconstruction, Abstract: Consider reconstructing a signal $x$ by minimizing a weighted sum of a convex differentiable negative log-likelihood (NLL) (data-fidelity) term and a convex regularization term that imposes a convex-set constraint on $x$ and enforces its sparsity using $\ell_1$-norm analysis regularization. We compute upper bounds on the regularization tuning constant beyond which the regularization term overwhelmingly dominates the NLL term so that the set of minimum points of the objective function does not change. Necessary and sufficient conditions for irrelevance of sparse signal regularization and a condition for the existence of finite upper bounds are established. We formulate an optimization problem for finding these bounds when the regularization term can be globally minimized by a feasible $x$ and also develop an alternating direction method of multipliers (ADMM) type method for their computation. Simulation examples show that the derived and empirical bounds match.
[ 0, 0, 1, 1, 0, 0 ]
Title: On the Privacy of the Opal Data Release: A Response, Abstract: This document is a response to a report from the University of Melbourne on the privacy of the Opal dataset release. The Opal dataset was released by Data61 (CSIRO) in conjunction with the Transport for New South Wales (TfNSW). The data consists of two separate weeks of "tap-on/tap-off" data of individuals who used any of the four different modes of public transport from TfNSW: buses, light rail, train and ferries. These taps are recorded through the smart ticketing system, known as Opal, available in the state of New South Wales, Australia.
[ 1, 0, 0, 0, 0, 0 ]
Title: Long time behavior of Gross-Pitaevskii equation at positive temperature, Abstract: The stochastic Gross-Pitaevskii equation is used as a model to describe Bose-Einstein condensation at positive temperature. The equation is a complex Ginzburg Landau equation with a trapping potential and an additive space-time white noise. Two important questions for this system are the global existence of solutions in the support of the Gibbs measure, and the convergence of those solutions to the equilibrium for large time. In this paper, we give a proof of these two results in one space dimension. In order to prove the convergence to equilibrium, we use the associated purely dissipative equation as an auxiliary equation, for which the convergence may be obtained using standard techniques. Global existence is obtained for all initial data, and not almost surely with respect to the invariant measure.
[ 0, 0, 1, 0, 0, 0 ]
Title: On noncommutative geometry of the Standard Model: fermion multiplet as internal forms, Abstract: We unveil the geometric nature of the multiplet of fundamental fermions in the Standard Model of fundamental particles as a noncommutative analogue of de Rham forms on the internal finite quantum space.
[ 0, 0, 1, 0, 0, 0 ]
Title: A Review of Dynamic Network Models with Latent Variables, Abstract: We present a selective review of statistical modeling of dynamic networks. We focus on models with latent variables, specifically, the latent space models and the latent class models (or stochastic blockmodels), which investigate both the observed features and the unobserved structure of networks. We begin with an overview of the static models, and then we introduce the dynamic extensions. For each dynamic model, we also discuss its applications that have been studied in the literature, with the data source listed in the Appendix. Based on the review, we summarize a list of open problems and challenges in dynamic network modeling with latent variables.
[ 0, 0, 0, 1, 0, 0 ]
Title: LevelHeaded: Making Worst-Case Optimal Joins Work in the Common Case, Abstract: Pipelines combining SQL-style business intelligence (BI) queries and linear algebra (LA) are becoming increasingly common in industry. As a result, there is a growing need to unify these workloads in a single framework. Unfortunately, existing solutions either sacrifice the inherent benefits of exclusively using a relational database (e.g. logical and physical independence) or incur orders of magnitude performance gaps compared to specialized engines (or both). In this work we study applying a new type of query processing architecture to standard BI and LA benchmarks. To do this we present a new in-memory query processing engine called LevelHeaded. LevelHeaded uses worst-case optimal joins as its core execution mechanism for both BI and LA queries. With LevelHeaded, we show how crucial optimizations for BI and LA queries can be captured in a worst-case optimal query architecture. Using these optimizations, LevelHeaded outperforms other relational database engines (LogicBlox, MonetDB, and HyPer) by orders of magnitude on standard LA benchmarks, while performing on average within 31% of the best-of-breed BI (HyPer) and LA (Intel MKL) solutions on their own benchmarks. Our results show that such a single query processing architecture is capable of delivering competitive performance on both BI and LA queries.
[ 1, 0, 0, 0, 0, 0 ]
Title: Introduction to Delay Models and Their Wave Solutions, Abstract: In this paper, a brief review of delay population models and their applications in ecology is provided. The inclusion of diffusion and nonlocality terms in delay models has given more capabilities to these models enabling them to capture several ecological phenomena such as the Allee effect, waves of invasive species and spatio-temporal competitions of interacting species. Moreover, recent advances in the studies of traveling and stationary wave solutions of delay models are outlined. In particular, the existence of stationary and traveling wave solutions of delay models, stability of wave solutions, formation of wavefronts in the spatial domain, and possible outcomes of delay models are discussed.
[ 0, 0, 1, 0, 0, 0 ]
Title: From a normal insulator to a topological insulator in plumbene, Abstract: Plumbene, similar to silicene, has a buckled honeycomb structure with a large band gap ($\sim 400$ meV). All previous studies have shown that it is a normal insulator. Here, we perform first-principles calculations and employ a sixteen-band tight-binding model with nearest-neighbor and next-nearest-neighbor hopping terms to investigate electronic structures and topological properties of the plumbene monolayer. We find that it can become a topological insulator with a large bulk gap ($\sim 200$ meV) through electron doping, and the nontrivial state is very robust with respect to external strain. Plumbene can be an ideal candidate for realizing the quantum spin Hall effect at room temperature. By investigating effects of external electric and magnetic fields on electronic structures and transport properties of plumbene, we present two rich phase diagrams with and without electron doping, and propose a theoretical design for a four-state spin-valley filter.
[ 0, 1, 0, 0, 0, 0 ]
Title: Bounding the composition length of primitive permutation groups and completely reducible linear groups, Abstract: We obtain upper bounds on the composition length of a finite permutation group in terms of the degree and the number of orbits, and analogous bounds for primitive, quasiprimitive and semiprimitive groups. Similarly, we obtain upper bounds on the composition length of a finite completely reducible linear group in terms of some of its parameters. In almost all cases we show that the bounds are sharp, and describe the extremal examples.
[ 0, 0, 1, 0, 0, 0 ]
Title: Dispersive Regimes of the Dicke Model, Abstract: We study two dispersive regimes in the dynamics of $N$ two-level atoms interacting with a bosonic mode for long interaction times. Firstly, we analyze the dispersive multiqubit quantum Rabi model for the regime in which the qubit frequencies are equal and smaller than the mode frequency, and for values of the coupling strength similar or larger than the mode frequency, namely, the deep strong coupling regime. Secondly, we address an interaction that is dependent on the photon number, where the coupling strength is comparable to the geometric mean of the qubit and mode frequencies. We show that the associated dynamics is analytically tractable and provide useful frameworks with which to analyze the system behavior. In the deep strong coupling regime, we unveil the structure of unexpected resonances for specific values of the coupling, present for $N\ge2$, and in the photon-number-dependent regime we demonstrate that all the nontrivial dynamical behavior occurs in the atomic degrees of freedom for a given Fock state. We verify these assertions with numerical simulations of the qubit population and photon-statistic dynamics.
[ 0, 1, 0, 0, 0, 0 ]
Title: ZebraLancer: Crowdsource Knowledge atop Open Blockchain, Privately and Anonymously, Abstract: We design and implement the first private and anonymous decentralized crowdsourcing system, ZebraLancer. It realizes fair exchange (i.e. security against malicious workers and dishonest requesters) without using any third-party arbiter. More importantly, it overcomes two fundamental challenges of decentralization, i.e. data leakage and identity breach. First, our outsource-then-prove methodology resolves the critical tension between blockchain transparency and data confidentiality without sacrificing the fairness of exchange. ZebraLancer ensures: a requester will not pay more than what the data deserve, according to a policy announced when her task is published through the blockchain; each worker indeed gets a payment based on the policy if she submits data to the blockchain; and the above properties are realized not only without a central arbiter, but also without leaking the data to the blockchain network. Furthermore, blockchain transparency might allow one to infer private information about workers/requesters through their participation history. ZebraLancer solves this problem by allowing anonymous participation without surrendering user accountability. Specifically, workers cannot misuse anonymity to submit multiple times to reap rewards, and an anonymous requester cannot maliciously submit colluded answers to herself to repudiate payments. The idea behind this is a subtle linkability: if one authenticates twice in a task, everybody can tell; otherwise she stays anonymous. To realize such delicate linkability, we put forth a novel cryptographic notion, the common-prefix-linkable anonymous authentication. Finally, we implement our protocol for a common image annotation task and deploy it on an Ethereum test net. The experimental results show the applicability of our protocol and highlight subtleties of tailoring the protocol to be compatible with the existing real-world open blockchain.
[ 1, 0, 0, 0, 0, 0 ]
Title: Fast, Better Training Trick -- Random Gradient, Abstract: In this paper, we will show an unprecedented method to accelerate training and improve performance, which is called random gradient (RG). This method can easily be applied to the training of any model without extra computational cost; we use image classification, semantic segmentation, and GANs to confirm that it improves the speed of model training in computer vision. The central idea is to multiply the loss by a random number so as to randomly reduce the back-propagated gradient. We use this method to produce better results on the Pascal VOC, Cifar, and Cityscapes datasets.
[ 0, 0, 0, 1, 0, 0 ]
Title: Multiple VLAD encoding of CNNs for image classification, Abstract: Despite the effectiveness of convolutional neural networks (CNNs), especially in image classification tasks, the effect of convolution features on learned representations is still limited. They mostly focus on the salient objects of the images but ignore the variation information from clutter and local regions. In this paper, we propose a special framework for image classification: the multiple VLAD encoding method applied to CNN features. Furthermore, in order to improve the performance of the VLAD coding method, we explore the multiplicity of VLAD encoding by extending three kinds of encoding algorithms, namely the VLAD-SA, VLAD-LSA and VLAD-LLC methods. Finally, we equip VLAD encoding with the spatial pyramid patch (SPM) to add the spatial information of the CNN features. In particular, the power of SPM leads our framework to yield better performance compared to existing methods.
[ 1, 0, 0, 0, 0, 0 ]
Title: Centroid vetting of transiting planet candidates from the Next Generation Transit Survey, Abstract: The Next Generation Transit Survey (NGTS), operating in Paranal since 2016, is a wide-field survey to detect Neptunes and super-Earths transiting bright stars, which are suitable for precise radial velocity follow-up and characterisation. Thereby, its sub-mmag photometric precision and ability to identify false positives are crucial. Particularly, variable background objects blended in the photometric aperture frequently mimic Neptune-sized transits and are costly in follow-up time. These objects can best be identified with the centroiding technique: if the photometric flux is lost off-centre during an eclipse, the flux centroid shifts towards the centre of the target star. Although this method has successfully been employed by the Kepler mission, it has previously not been implemented from the ground. We present a fully-automated centroid vetting algorithm developed for NGTS, enabled by our high-precision auto-guiding. Our method allows detecting centroid shifts with an average precision of 0.75 milli-pixel, and down to 0.25 milli-pixel for specific targets, for a pixel size of 4.97 arcsec. The algorithm is now part of the NGTS candidate vetting pipeline and automatically employed for all detected signals. Further, we develop a joint Bayesian fitting model for all photometric and centroid data, allowing us to disentangle which object (target or background) is causing the signal, and what its astrophysical parameters are. We demonstrate our method on two NGTS objects of interest. These achievements make NGTS the first ground-based wide-field transit survey ever to successfully apply the centroiding technique for automated candidate vetting, enabling the production of a robust candidate list before follow-up.
[ 0, 1, 0, 0, 0, 0 ]
Title: Large sums of Hecke eigenvalues of holomorphic cusp forms, Abstract: Let $f$ be a Hecke cusp form of weight $k$ for the full modular group, and let $\{\lambda_f(n)\}_{n\geq 1}$ be the sequence of its normalized Fourier coefficients. Motivated by the problem of the first sign change of $\lambda_f(n)$, we investigate the range of $x$ (in terms of $k$) for which there are cancellations in the sum $S_f(x)=\sum_{n\leq x} \lambda_f(n)$. We first show that $S_f(x)=o(x\log x)$ implies that $\lambda_f(n)<0$ for some $n\leq x$. We also prove that $S_f(x)=o(x\log x)$ in the range $\log x/\log\log k\to \infty$ assuming the Riemann hypothesis for $L(s, f)$, and furthermore that this range is best possible unconditionally. More precisely, we establish the existence of many Hecke cusp forms $f$ of large weight $k$, for which $S_f(x)\gg_A x\log x$, when $x=(\log k)^A.$ Our results are $GL_2$ analogues of work of Granville and Soundararajan for character sums, and could also be generalized to other families of automorphic forms.
[ 0, 0, 1, 0, 0, 0 ]
Title: Playtime Measurement with Survival Analysis, Abstract: Maximizing product use is a central goal of many businesses, which makes retention and monetization two central analytics metrics in games. Player retention may refer to various duration variables quantifying product use: total playtime or session playtime are popular research targets, and active playtime is well-suited for subscription games. Such research often has the goal of increasing player retention or conversely decreasing player churn. Survival analysis is a framework of powerful tools well suited for retention type data. This paper contributes new methods to game analytics on how playtime can be analyzed using survival analysis without covariates. Survival and hazard estimates provide both a visual and an analytic interpretation of the playtime phenomena as a funnel type nonparametric estimate. Metrics based on the survival curve can be used to aggregate this playtime information into a single statistic. Comparison of survival curves between cohorts provides a scientific AB-test. All these methods work on censored data and enable computation of confidence intervals. This is especially important in time and sample limited data which occurs during game development. Throughout this paper, we illustrate the application of these methods to real world game development problems on the Hipster Sheep mobile game.
[ 1, 0, 0, 1, 0, 0 ]
Title: Invariant-based inverse engineering of crane control parameters, Abstract: By applying invariant-based inverse engineering in the small-oscillations regime, we design the time dependence of the control parameters of an overhead crane (trolley displacement and rope length), to transport a load between two positions at different heights with minimal final energy excitation for a microcanonical ensemble of initial conditions. The analogies between ion transport in multisegmented traps or neutral atom transport in moving optical lattices and load manipulation by cranes opens a route for a useful transfer of techniques among very different fields.
[ 0, 1, 0, 0, 0, 0 ]
Title: Leaf Space Isometries of Singular Riemannian Foliations and Their Spectral Properties, Abstract: In this paper, the authors consider leaf spaces of singular Riemannian foliations $\mathcal{F}$ on compact manifolds $M$ and the associated $\mathcal{F}$-basic spectrum on $M$, $spec_B(M, \mathcal{F}),$ counted with multiplicities. Recently, a notion of smooth isometry $\varphi: M_1/\mathcal{F}_1\rightarrow M_2/\mathcal{F}_2$ between the leaf spaces of such singular Riemannian foliations $(M_1,\mathcal{F}_1)$ and $(M_2,\mathcal{F}_2)$ has appeared in the literature. In this paper, the authors provide an example to show that the existence of a smooth isometry of leaf spaces as above is not sufficient to guarantee the equality of $spec_B(M_1,\mathcal{F}_1)$ and $spec_B(M_2,\mathcal{F}_2).$ The authors then prove that if some additional conditions involving the geometry of the leaves are satisfied, then the equality of $spec_B(M_1,\mathcal{F}_1)$ and $spec_B(M_2,\mathcal{F}_2)$ is guaranteed. Consequences and applications to orbifold spectral theory, isometric group actions, and their reductions are also explored.
[ 0, 0, 1, 0, 0, 0 ]
Title: Functional importance of noise in neuronal information processing, Abstract: Noise is an inherent part of neuronal dynamics, and thus of the brain. It can be observed in neuronal activity at different spatiotemporal scales, including in neuronal membrane potentials, local field potentials, electroencephalography, and magnetoencephalography. A central research topic in contemporary neuroscience is to elucidate the functional role of noise in neuronal information processing. Experimental studies have shown that a suitable level of noise may enhance the detection of weak neuronal signals by means of stochastic resonance. In response, theoretical research, based on the theory of stochastic processes, nonlinear dynamics, and statistical physics, has made great strides in elucidating the mechanism and the many benefits of stochastic resonance in neuronal systems. In this perspective, we review recent research dedicated to neuronal stochastic resonance in biophysical mathematical models. We also explore the regulation of neuronal stochastic resonance, and we outline important open questions and directions for future research. A deeper understanding of neuronal stochastic resonance may afford us new insights into the highly impressive information processing in the brain.
[ 0, 0, 0, 0, 1, 0 ]
Title: Self-consistent dynamical model of the Broad Line Region, Abstract: We develop a self-consistent description of the Broad Line Region based on the concept of a failed wind powered by the radiation pressure acting on the dusty accretion disk atmosphere in Keplerian motion. The material raised high above the disk is illuminated, the dust evaporates, and the matter falls back towards the disk. This material is the source of the emission lines. The model predicts the inner and outer radius of the region and the cloud dynamics under the dust radiation pressure and, subsequently, under just the gravitational field of the central black hole, which results in an asymmetry between the rise and fall. Knowledge of the dynamics allows us to predict the shapes of the emission lines as functions of the basic parameters of an active nucleus: black hole mass, accretion rate, black hole spin (or accretion efficiency) and the viewing angle with respect to the symmetry axis. Here we show preliminary results based on analytical approximations to the cloud motion.
[ 0, 1, 0, 0, 0, 0 ]
Title: Pixelwise Instance Segmentation with a Dynamically Instantiated Network, Abstract: Semantic segmentation and object detection research have recently achieved rapid progress. However, the former task has no notion of different instances of the same object, and the latter operates at a coarse, bounding-box level. We propose an Instance Segmentation system that produces a segmentation map where each pixel is assigned an object class and instance identity label. Most approaches adapt object detectors to produce segments instead of boxes. In contrast, our method is based on an initial semantic segmentation module, which feeds into an instance subnetwork. This subnetwork uses the initial category-level segmentation, along with cues from the output of an object detector, within an end-to-end CRF to predict instances. This part of our model is dynamically instantiated to produce a variable number of instances per image. Our end-to-end approach requires no post-processing and considers the image holistically, instead of processing independent proposals. Therefore, unlike some related work, a pixel cannot belong to multiple instances. Furthermore, far more precise segmentations are achieved, as shown by our state-of-the-art results (particularly at high IoU thresholds) on the Pascal VOC and Cityscapes datasets.
[ 1, 0, 0, 0, 0, 0 ]
Title: Strong homotopy types of acyclic categories and $Δ$-complexes, Abstract: We extend the homotopy theories based on point reduction for finite spaces and simplicial complexes to finite acyclic categories and $\Delta$-complexes, respectively. The functors of classifying spaces and face posets are compatible with these homotopy theories. In contrast with the classical settings of finite spaces and simplicial complexes, the universality of morphisms and simplices plays a central role in this paper.
[ 0, 0, 1, 0, 0, 0 ]
Title: Bohm's approach to quantum mechanics: Alternative theory or practical picture?, Abstract: Since its inception Bohmian mechanics has been generally regarded as a hidden-variable theory aimed at providing an objective description of quantum phenomena. To date, this rather narrow conception of Bohm's proposal has caused it more rejection than acceptance. Now, after 65 years of Bohmian mechanics, should such an interpretational aspect still be the prevailing appraisal? Why not favor a more pragmatic view, as a legitimate picture of quantum mechanics, on equal footing in all respects with any other more conventional quantum picture? These questions are used here to introduce a discussion on an alternative way to deal with Bohmian mechanics at present, enhancing its aspect as an efficient and useful picture or formulation to tackle, explore, describe and explain quantum phenomena where phase and correlation (entanglement) are key elements. This discussion is presented through two complementary blocks. The first block is aimed at briefly revisiting the historical context that gave rise to the appearance of Bohmian mechanics, and how this approach or analogous ones have been used in different physical contexts. This discussion is used to emphasize a more pragmatic view to the detriment of the more conventional hidden-variable (ontological) approach that has been a leitmotif within the quantum foundations. The second block focuses on some particular formal aspects of Bohmian mechanics supporting the view presented here, with special emphasis on the physical meaning of the local phase field and the associated velocity field encoded within the wave function. As an illustration, a simple model of Young's two-slit experiment is considered. The simplicity of this model allows one to understand in an easy manner how the information conveyed by the Bohmian formulation relates to other more conventional concepts in quantum mechanics. This sort of pedagogical application is also aimed at ...
[ 0, 1, 0, 0, 0, 0 ]
Title: Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs, Abstract: We study the problem of finding the cycle of minimum cost-to-time ratio in a directed graph with $ n $ nodes and $ m $ edges. This problem has a long history in combinatorial optimization and has recently seen interesting applications in the context of quantitative verification. We focus on strongly polynomial algorithms to cover the use-case where the weights are relatively large compared to the size of the graph. Our main result is an algorithm with running time $ \tilde O (m^{3/4} n^{3/2}) $, which gives the first improvement over Megiddo's $ \tilde O (n^3) $ algorithm [JACM'83] for sparse graphs. We further demonstrate how to obtain both an algorithm with running time $ n^3 / 2^{\Omega{(\sqrt{\log n})}} $ on general graphs and an algorithm with running time $ \tilde O (n) $ on constant treewidth graphs. To obtain our main result, we develop a parallel algorithm for negative cycle detection and single-source shortest paths that might be of independent interest.
[ 1, 0, 0, 0, 0, 0 ]
Title: Interplay of dust alignment, grain growth and magnetic fields in polarization: lessons from the emission-to-extinction ratio, Abstract: Polarized extinction and emission from dust in the interstellar medium (ISM) are hard to interpret, as they have a complex dependence on dust optical properties, grain alignment and magnetic field orientation. This is particularly true in molecular clouds. The data available today are not yet used to their full potential. The combination of emission and extinction, in particular, provides information not available from either of them alone. We combine data from the scientific literature on polarized dust extinction with Planck data on polarized emission and we use them to constrain the possible variations in dust and environmental conditions inside molecular clouds, and especially translucent lines of sight, taking into account magnetic field orientation. We focus on the dependence between $\lambda_{max}$ -- the wavelength of maximum polarization in extinction -- and other observables such as the extinction polarization, the emission polarization and the ratio of the two. We set out to reproduce these correlations using Monte-Carlo simulations where the relevant quantities in a dust model -- grain alignment, size distribution and magnetic field orientation -- vary to mimic the diverse conditions expected inside molecular clouds. None of the quantities chosen can explain the observational data on its own: the best results are obtained when all quantities vary significantly across and within clouds. However, some of the data -- most notably the stars with low emission-to-extinction polarization ratio -- are not reproduced by our simulation. Our results suggest not only that dust evolution is necessary to explain polarization in molecular clouds, but that a simple change in size distribution is not sufficient to explain the data, and point the way for future and more sophisticated models.
[ 0, 1, 0, 0, 0, 0 ]
Title: Asynchronous Distributed Variational Gaussian Processes for Regression, Abstract: Gaussian processes (GPs) are powerful non-parametric function estimators. However, their applications are largely limited by the expensive computational cost of the inference procedures. Existing stochastic or distributed synchronous variational inferences, although they have alleviated this issue by scaling up GPs to millions of samples, are still far from satisfactory for large real-world applications, where the data sizes are often orders of magnitude larger, say, billions. To solve this problem, we propose ADVGP, the first Asynchronous Distributed Variational Gaussian Process inference for regression, on the recent large-scale machine learning platform, PARAMETERSERVER. ADVGP uses a novel, flexible variational framework based on a weight space augmentation, and implements the highly efficient, asynchronous proximal gradient optimization. While maintaining comparable or better predictive performance, ADVGP greatly improves upon the efficiency of the existing variational methods. With ADVGP, we effortlessly scale up GP regression to a real-world application with billions of samples and demonstrate excellent prediction accuracy, superior to that of the popular linear models.
[ 0, 0, 0, 1, 0, 0 ]
Title: Stacco: Differentially Analyzing Side-Channel Traces for Detecting SSL/TLS Vulnerabilities in Secure Enclaves, Abstract: Intel Software Guard Extension (SGX) offers software applications enclaves to protect their confidentiality and integrity from malicious operating systems. The SSL/TLS protocol, which is the de facto standard for protecting transport-layer network communications, has been broadly deployed for a secure communication channel. However, in this paper, we show that the marriage between SGX and SSL may not be smooth sailing. Particularly, we consider a category of side-channel attacks against SSL/TLS implementations in secure enclaves, which we call the control-flow inference attacks. In these attacks, the malicious operating system kernel may perform a powerful man-in-the-kernel attack to collect execution traces of the enclave programs at page, cacheline, or branch level, while positioning itself in the middle of the two communicating parties. At the center of our work is a differential analysis framework, dubbed Stacco, to dynamically analyze the SSL/TLS implementations and detect vulnerabilities that can be exploited as decryption oracles. Surprisingly, we found exploitable vulnerabilities in the latest versions of all the SSL/TLS libraries we have examined. To validate the detected vulnerabilities, we developed a man-in-the-kernel adversary to demonstrate Bleichenbacher attacks against the latest OpenSSL library running in the SGX enclave (with the help of Graphene) and completely broke the PreMasterSecret encrypted by a 4096-bit RSA public key with only 57286 queries. We also conducted CBC padding oracle attacks against the latest GnuTLS running in Graphene-SGX and an open-source SGX-implementation of mbedTLS (i.e., mbedTLS-SGX) that runs directly inside the enclave, and showed that it only needs 48388 and 25717 queries, respectively, to break one block of AES ciphertext. Empirical evaluation suggests these man-in-the-kernel attacks can be completed within 1 or 2 hours.
[ 1, 0, 0, 0, 0, 0 ]
Title: Representations of Super $W(2,2)$ algebra $\mathfrak{L}$, Abstract: In this paper, we study the representation theory of the super $W(2,2)$ algebra ${\mathfrak{L}}$. We prove that ${\mathfrak{L}}$ has no mixed irreducible modules and give the classification of irreducible modules of intermediate series. We determine the conjugate-linear anti-involution of ${\mathfrak{L}}$ and give the unitary modules of intermediate series.
[ 0, 0, 1, 0, 0, 0 ]
Title: Effective Reformulation of Query for Code Search using Crowdsourced Knowledge and Extra-Large Data Analytics, Abstract: Software developers frequently issue generic natural language queries for code search while using code search engines (e.g., GitHub native search, Krugle). Such queries often do not lead to any relevant results due to vocabulary mismatch problems. In this paper, we propose a novel technique that automatically identifies relevant and specific API classes from Stack Overflow Q & A site for a programming task written as a natural language query, and then reformulates the query for improved code search. We first collect candidate API classes from Stack Overflow using pseudo-relevance feedback and two term weighting algorithms, and then rank the candidates using Borda count and semantic proximity between query keywords and the API classes. The semantic proximity has been determined by an analysis of 1.3 million questions and answers of Stack Overflow. Experiments using 310 code search queries report that our technique suggests relevant API classes with 48% precision and 58% recall which are 32% and 48% higher respectively than those of the state-of-the-art. Comparisons with two state-of-the-art studies and three popular search engines (e.g., Google, Stack Overflow, and GitHub native search) report that our reformulated queries (1) outperform the queries of the state-of-the-art, and (2) significantly improve the code search results provided by these contemporary search engines.
[ 1, 0, 0, 0, 0, 0 ]
Title: Superconductivity at 7.3 K in the 133-type Cr-based RbCr3As3 single crystals, Abstract: Here we report the preparation and superconductivity of the 133-type Cr-based quasi-one-dimensional (Q1D) RbCr3As3 single crystals. The samples were prepared by the deintercalation of Rb+ ions from the 233-type Rb2Cr3As3 crystals, which were grown by a high-temperature solution growth method. The RbCr3As3 compound crystallizes in a centrosymmetric structure with the space group P63/m (No. 176), in contrast to its non-centrosymmetric Rb2Cr3As3 superconducting precursor, and the refined lattice parameters are a = 9.373(3) {\AA} and c = 4.203(7) {\AA}. Electrical resistivity and magnetic susceptibility characterizations reveal the occurrence of superconductivity with an onset Tc of 7.3 K, interestingly higher than that of other Cr-based superconductors, and a high upper critical field Hc2(0) near 70 T in these 133-type RbCr3As3 crystals.
[ 0, 1, 0, 0, 0, 0 ]
Title: Neural-Network Quantum States, String-Bond States, and Chiral Topological States, Abstract: Neural-Network Quantum States have been recently introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between Neural-Network Quantum States in the form of Restricted Boltzmann Machines and some classes of Tensor-Network states in arbitrary dimensions. In particular we demonstrate that short-range Restricted Boltzmann Machines are Entangled Plaquette States, while fully connected Restricted Boltzmann Machines are String-Bond States with a nonlocal geometry and low bond dimension. These results shed light on the underlying architecture of Restricted Boltzmann Machines and their efficiency at representing many-body quantum states. String-Bond States also provide a generic way of enhancing the power of Neural-Network Quantum States and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of Tensor Networks and the efficiency of Neural-Network Quantum States into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional Tensor Networks, we show that Neural-Network Quantum States and their String-Bond States extension can describe a lattice Fractional Quantum Hall state exactly. In addition, we provide numerical evidence that Neural-Network Quantum States can approximate a chiral spin liquid with better accuracy than Entangled Plaquette States and local String-Bond States. Our results demonstrate the efficiency of neural networks to describe complex quantum wave functions and pave the way towards the use of String-Bond States as a tool in more traditional machine-learning applications.
[ 0, 1, 0, 1, 0, 0 ]
Title: End-to-End Information Extraction without Token-Level Supervision, Abstract: Most state-of-the-art information extraction approaches rely on token-level labels to find the areas of interest in text. Unfortunately, these labels are time-consuming and costly to create, and consequently, not available for many real-life IE tasks. To make matters worse, token-level labels are usually not the desired output, but just an intermediary step. End-to-end (E2E) models, which take raw text as input and produce the desired output directly, need not depend on token-level labels. We propose an E2E model based on pointer networks, which can be trained directly on pairs of raw input and output text. We evaluate our model on the ATIS data set, MIT restaurant corpus and the MIT movie corpus and compare to neural baselines that do use token-level labels. We achieve competitive results, within a few percentage points of the baselines, showing the feasibility of E2E information extraction without the need for token-level labels. This opens up new possibilities, as for many tasks currently addressed by human extractors, raw input and output data are available, but not token-level labels.
[ 1, 0, 0, 0, 0, 0 ]
Title: Lipschitz regularity of solutions to two-phase free boundary problems, Abstract: We prove Lipschitz continuity of viscosity solutions to a class of two-phase free boundary problems governed by fully nonlinear operators.
[ 0, 0, 1, 0, 0, 0 ]
Title: Efficient Localized Inference for Large Graphical Models, Abstract: We propose a new localized inference algorithm for answering marginalization queries in large graphical models with the correlation decay property. Given a query variable and a large graphical model, we define a much smaller model in a local region around the query variable in the target model so that the marginal distribution of the query variable can be accurately approximated. We introduce two approximation error bounds based on Dobrushin's comparison theorem and apply our bounds to derive a greedy expansion algorithm that efficiently guides the selection of neighbor nodes for localized inference. We verify our theoretical bounds on various datasets and demonstrate that our localized inference algorithm can provide fast and accurate approximation for large graphical models.
[ 1, 0, 0, 1, 0, 0 ]
Title: Multi-kink collisions in the $ϕ^6$ model, Abstract: We study simultaneous collisions of two, three, and four kinks and antikinks of the $\phi^6$ model at the same spatial point. Unlike the $\phi^4$ kinks, the $\phi^6$ kinks are asymmetric and this enriches the variety of the collision scenarios. In our numerical simulations we observe both reflection and bound state formation depending on the number of kinks and on their spatial ordering in the initial configuration. We also analyze the extreme values of the energy densities and the field gradient observed during the collisions. Our results suggest that very high energy densities can be produced in multi-kink collisions in a controllable manner. Appearance of high energy density spots in multi-kink collisions can be important in various physical applications of the Klein-Gordon model.
[ 0, 1, 0, 0, 0, 0 ]
Title: Deep Multimodal Subspace Clustering Networks, Abstract: We present convolutional neural network (CNN) based approaches for unsupervised multimodal subspace clustering. The proposed framework consists of three main stages - multimodal encoder, self-expressive layer, and multimodal decoder. The encoder takes multimodal data as input and fuses them to a latent space representation. The self-expressive layer is responsible for enforcing the self-expressiveness property and acquiring an affinity matrix corresponding to the data points. The decoder reconstructs the original input data. The network uses the distance between the decoder's reconstruction and the original input in its training. We investigate early, late and intermediate fusion techniques and propose three different encoders corresponding to them for spatial fusion. The self-expressive layers and multimodal decoders are essentially the same for different spatial fusion-based approaches. In addition to various spatial fusion-based methods, an affinity fusion-based network is also proposed in which the self-expressive layer corresponding to different modalities is enforced to be the same. Extensive experiments on three datasets show that the proposed methods significantly outperform the state-of-the-art multimodal subspace clustering methods.
[ 0, 0, 0, 1, 0, 0 ]
Title: Recurrent Autoregressive Networks for Online Multi-Object Tracking, Abstract: The main challenge of online multi-object tracking is to reliably associate object trajectories with detections in each video frame based on their tracking history. In this work, we propose the Recurrent Autoregressive Network (RAN), a temporal generative modeling framework to characterize the appearance and motion dynamics of multiple objects over time. The RAN couples an external memory and an internal memory. The external memory explicitly stores previous inputs of each trajectory in a time window, while the internal memory learns to summarize long-term tracking history and associate detections by processing the external memory. We conduct experiments on the MOT 2015 and 2016 datasets to demonstrate the robustness of our tracking method in highly crowded and occluded scenes. Our method achieves top-ranked results on the two benchmarks.
[ 1, 0, 0, 0, 0, 0 ]
Title: The occurrence of transverse and longitudinal electric currents in the classical plasma under the action of N transverse electromagnetic waves, Abstract: A classical plasma with an arbitrary degree of degeneracy of the electron gas is considered. N (N>2) collinear electromagnetic waves propagate in the plasma, and the response of the plasma to these waves is sought. The distribution function is obtained from the Vlasov equation in an approximation quadratic in two small parameters, and a formula for calculating the electric current is derived. It is demonstrated that accounting for the nonlinearity leads to the occurrence of a longitudinal electric current directed along the wave vector. This longitudinal current is orthogonal to the known transverse current obtained in the linear analysis. The case of small wave numbers is considered.
[ 0, 1, 0, 0, 0, 0 ]
Title: Large deviation theorem for random covariance matrices, Abstract: We establish a large deviation theorem for the empirical spectral distribution of random covariance matrices whose entries are independent random variables with mean 0, variance 1, and controlled fourth moments. Some new properties of Laguerre polynomials are also given.
[ 0, 0, 1, 0, 0, 0 ]
Title: Abundances in photoionized nebulae of the Local Group and nucleosynthesis of intermediate mass stars, Abstract: Photoionized nebulae, comprising HII regions and planetary nebulae, are excellent laboratories to investigate the nucleosynthesis and chemical evolution of several elements in the Galaxy and other galaxies of the Local Group. Our purpose in this investigation is threefold: (i) compare the abundances of HII regions and planetary nebulae in each system in order to investigate the differences derived from the age and origin of these objects, (ii) compare the chemical evolution in different systems, such as the Milky Way, the Magellanic Clouds, and other galaxies of the Local Group, and (iii) investigate to what extent the nucleosynthesis contributions from the progenitor stars affect the observed abundances in planetary nebulae, which constrains the nucleosynthesis of intermediate mass stars. We show that all objects in the samples present similar trends concerning distance-independent correlations, and some constraints can be defined on the production of He and N by the PN progenitor stars.
[ 0, 1, 0, 0, 0, 0 ]
Title: Multi-scale analysis of lead-lag relationships in high-frequency financial markets, Abstract: We propose a novel estimation procedure for scale-by-scale lead-lag relationships of financial assets observed at a high-frequency in a non-synchronous manner. The proposed estimation procedure does not require any interpolation processing of the original data and is applicable to quite fine resolution data. The validity of the proposed estimators is shown under the continuous-time framework developed in our previous work Hayashi and Koike (2016). An empirical application shows promising results of the proposed approach.
[ 0, 0, 0, 1, 0, 0 ]
Title: Anomaly Detection Using Optimally-Placed Micro-PMU Sensors in Distribution Grids, Abstract: As the distribution grid moves toward a tightly-monitored network, it is important to automate the analysis of the enormous amount of data produced by the sensors to increase the operators' situational awareness about the system. In this paper, focusing on Micro-Phasor Measurement Unit ($\mu$PMU) data, we propose a hierarchical architecture for monitoring the grid and establish a set of analytics and sensor fusion primitives for the detection of abnormal behavior in the control perimeter. Due to the key role of the $\mu$PMU devices in our architecture, a source-constrained optimal $\mu$PMU placement is also described that finds the best location of the devices with respect to our rules. The effectiveness of the proposed methods is tested on synthetic and real $\mu$PMU data.
[ 1, 0, 0, 0, 0, 0 ]
Title: Ultrahigh Magnetic Field Phases in Frustrated Triangular-lattice Magnet CuCrO$_2$, Abstract: The magnetic phases of a triangular-lattice antiferromagnet, CuCrO$_2$, were investigated in magnetic fields along the $c$ axis, $H$ // [001], up to 120 T. Faraday rotation and magneto-absorption spectroscopy were used to unveil the rich physics of magnetic phases. An up-up-down (UUD) magnetic structure phase was observed around 90--105 T at temperatures around 10 K. Additional distinct anomalies adjacent to the UUD phase were uncovered, and the Y-shaped and V-shaped phases are proposed to be viable candidates. These ordered phases emerge as a result of the interplay of geometrical spin frustration, single-ion anisotropy and thermal fluctuations in an environment of extremely high magnetic fields.
[ 0, 1, 0, 0, 0, 0 ]
Title: Preserving Differential Privacy in Convolutional Deep Belief Networks, Abstract: The remarkable development of deep learning in medicine and healthcare domain presents obvious privacy issues, when deep neural networks are built on users' personal and highly sensitive data, e.g., clinical records, user profiles, biomedical images, etc. However, only a few scientific studies on preserving privacy in deep learning have been conducted. In this paper, we focus on developing a private convolutional deep belief network (pCDBN), which essentially is a convolutional deep belief network (CDBN) under differential privacy. Our main idea of enforcing epsilon-differential privacy is to leverage the functional mechanism to perturb the energy-based objective functions of traditional CDBNs, rather than their results. One key contribution of this work is that we propose the use of Chebyshev expansion to derive the approximate polynomial representation of objective functions. Our theoretical analysis shows that we can further derive the sensitivity and error bounds of the approximate polynomial representation. As a result, preserving differential privacy in CDBNs is feasible. We applied our model in a health social network, i.e., YesiWell data, and in a handwriting digit dataset, i.e., MNIST data, for human behavior prediction, human behavior classification, and handwriting digit recognition tasks. Theoretical analysis and rigorous experimental evaluations show that the pCDBN is highly effective. It significantly outperforms existing solutions.
[ 1, 0, 0, 1, 0, 0 ]
Title: Injectivity almost everywhere and mappings with finite distortion in nonlinear elasticity, Abstract: We show that a sufficient condition for the weak limit of a sequence of $W^1_q$-homeomorphisms with finite distortion to be almost everywhere injective for $q \geq n-1$ can be stated by means of composition operators. Applying this result, we study nonlinear elasticity problems with respect to these new classes of mappings. Furthermore, we impose loose growth conditions on the stored-energy function for the class of $W^1_n$-homeomorphisms with finite distortion and integrable inner as well as outer distortion coefficients.
[ 0, 0, 1, 0, 0, 0 ]
Title: Achromatic super-oscillatory lenses with sub-wavelength focusing, Abstract: Lenses are crucial to light-enabled technologies. Conventional lenses have been perfected to achieve near-diffraction-limited resolution and minimal chromatic aberrations. However, such lenses are bulky and cannot focus light into a hotspot smaller than half wavelength of light. Pupil filters, initially suggested by Toraldo di Francia, can overcome the resolution constraints of conventional lenses, but are not intrinsically chromatically corrected. Here we report single-element planar lenses that not only deliver sub-wavelength focusing (beating the diffraction limit of conventional refractive lenses) but also focus light of different colors into the same hotspot. Using the principle of super-oscillations we designed and fabricated a range of binary dielectric and metallic lenses for visible and infrared parts of the spectrum that are manufactured on silicon wafers, silica substrates and optical fiber tips. Such low cost, compact lenses could be useful in mobile devices, data storage, surveillance, robotics, space applications, imaging, manufacturing with light, and spatially resolved nonlinear microscopies.
[ 0, 1, 0, 0, 0, 0 ]
Title: Wireless Power Transfer for Distributed Estimation in Sensor Networks, Abstract: This paper studies power allocation for distributed estimation of an unknown scalar random source in sensor networks with a multiple-antenna fusion center (FC), where wireless sensors are equipped with radio-frequency based energy harvesting technology. The sensors' observation is locally processed by using an uncoded amplify-and-forward scheme. The processed signals are then sent to the FC, and are coherently combined at the FC, at which the best linear unbiased estimator (BLUE) is adopted for reliable estimation. We aim to solve the following two power allocation problems: 1) minimizing distortion under various power constraints; and 2) minimizing total transmit power under distortion constraints, where the distortion is measured in terms of mean-squared error of the BLUE. Two iterative algorithms are developed to solve the non-convex problems, which converge at least to a local optimum. In particular, the above algorithms are designed to jointly optimize the amplification coefficients, energy beamforming, and receive filtering. For each problem, a suboptimal design, a single-antenna FC scenario, and a common harvester deployment for colocated sensors are also studied. Using the powerful semidefinite relaxation framework, our result is shown to be valid for any number of sensors, each with different noise power, and for an arbitrary number of antennas at the FC.
[ 1, 0, 1, 0, 0, 0 ]
Title: About a non-standard interpolation problem, Abstract: Using algebraic methods, and motivated by the one variable case, we study a multipoint interpolation problem in the setting of several complex variables. The duality realized by the residue generator associated with an underlying Gorenstein algebra, using the Lagrange interpolation polynomial, plays a key role in the arguments.
[ 0, 0, 1, 0, 0, 0 ]
Title: Quantum spin liquid signatures in Kitaev-like frustrated magnets, Abstract: Motivated by recent experiments on $\alpha$-RuCl$_3$, we investigate a possible quantum spin liquid ground state of the honeycomb-lattice spin model with bond-dependent interactions. We consider the $K-\Gamma$ model, where $K$ and $\Gamma$ represent the Kitaev and symmetric-anisotropic interactions between spin-1/2 moments on the honeycomb lattice. Using the infinite density matrix renormalization group (iDMRG), we provide compelling evidence for the existence of quantum spin liquid phases in an extended region of the phase diagram. In particular, we use transfer matrix spectra to show the evolution of two-particle excitations with well-defined two-dimensional dispersion, which is a strong signature of quantum spin liquid. These results are compared with predictions from Majorana mean-field theory and used to infer the quasiparticle excitation spectra. Further, we compute the dynamical structure factor using finite size cluster computations and show that the results resemble the scattering continuum seen in neutron scattering experiments on $\alpha$-RuCl$_3$. We discuss these results in light of recent and future experiments.
[ 0, 1, 0, 0, 0, 0 ]
Title: Charge polarization effects on the optical response of blue-emitting superlattices, Abstract: In the new approach to study the optical response of periodic structures, successfully applied to study the optical properties of blue-emitting InGaN/GaN superlattices, the spontaneous charge polarization was neglected. To assess the effect of this quantum-confined Stark phenomenon, we study the optical response, assuming parabolic band edge modulations in the conduction and valence bands. We discuss the consequences for the eigenfunction symmetries and the ensuing optical transition selection rules. Using the new approach in the WKB approximation of the finite periodic systems theory, we determine the energy eigenvalues, their corresponding eigenfunctions and the subband structures in the conduction and valence bands. We calculate the photoluminescence as a function of the charge localization strength, and compare with the experimental result. We show that for subbands close to the barrier edge, the optical response and the surface states are sensitive to the charge polarization strength.
[ 0, 1, 0, 0, 0, 0 ]
Title: System Description: Russell - A Logical Framework for Deductive Systems, Abstract: Russell is a logical framework for the specification and implementation of deductive systems. It is a high-level language with respect to the Metamath language, so it inherently uses the Metamath foundations, i.e. it doesn't rely on any particular formal calculus, but rather is a pure logical framework. The main difference from Metamath is in the proof language and approach to syntax: the proofs have a declarative form, i.e. consist of actual expressions, which are used in proofs, while syntactic grammar rules are separated from the meaningful rules of inference. Russell is implemented in C++14 and is distributed under the GPL v3 license. The repository contains translators from Metamath to Russell and back. The original Metamath theorem base (almost 30,000 theorems) can be translated to Russell, verified, translated back to Metamath and verified with the original Metamath verifier. Russell can be downloaded from the repository this https URL
[ 1, 0, 1, 0, 0, 0 ]
Title: Short-Time Nonlinear Effects in the Exciton-Polariton System, Abstract: In the exciton-polariton system, a linear dispersive photon field is coupled to a nonlinear exciton field. Short-time analysis of the lossless system shows that, when the photon field is excited, the time required for that field to exhibit nonlinear effects is longer than the time required for the nonlinear Schrödinger equation, in which the photon field itself is nonlinear. When the initial condition is scaled by $\epsilon^\alpha$, it is found that the relative error committed by omitting the nonlinear term in the exciton-polariton system remains within $\epsilon$ for all times up to $t=C\epsilon^\beta$, where $\beta=(1-\alpha(p-1))/(p+2)$. This is in contrast to $\beta=1-\alpha(p-1)$ for the nonlinear Schrödinger equation.
[ 0, 0, 1, 0, 0, 0 ]
Title: GTC Observations of an Overdense Region of LAEs at z=6.5, Abstract: We present the results of our search for the faint galaxies near the end of the Reionisation Epoch. This has been done using very deep OSIRIS images obtained at the Gran Telescopio Canarias (GTC). Our observations focus around two close, massive Lyman Alpha Emitters (LAEs) at redshift 6.5, discovered in the SXDS field within a large-scale overdense region (Ouchi et al. 2010). The total GTC observing time in three medium band filters (F883w35, F913w25 and F941w33) is over 34 hours covering $7.0\times8.5$ arcmin$^2$ (or $\sim30,000$ Mpc$^3$ at $z=6.5$). In addition to the two spectroscopically confirmed LAEs in the field, we have identified 45 other LAE candidates. The preliminary luminosity function derived from our observations, assuming a spectroscopic confirmation success rate of $\frac{2}{3}$ as in previous surveys, suggests this area is about 2 times denser than the general field galaxy population at $z=6.5$. If confirmed spectroscopically, our results will imply the discovery of one of the earliest protoclusters in the universe, which will evolve to resemble the most massive galaxy clusters today.
[ 0, 1, 0, 0, 0, 0 ]
Title: Temporal Action Localization by Structured Maximal Sums, Abstract: We address the problem of temporal action localization in videos. We pose action localization as a structured prediction over arbitrary-length temporal windows, where each window is scored as the sum of frame-wise classification scores. Additionally, our model classifies the start, middle, and end of each action as separate components, allowing our system to explicitly model each action's temporal evolution and take advantage of informative temporal dependencies present in this structure. In this framework, we localize actions by searching for the structured maximal sum, a problem for which we develop a novel, provably-efficient algorithmic solution. The frame-wise classification scores are computed using features from a deep Convolutional Neural Network (CNN), which are trained end-to-end to directly optimize for a novel structured objective. We evaluate our system on the THUMOS 14 action detection benchmark and achieve competitive performance.
[ 1, 0, 0, 0, 0, 0 ]
Title: Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments, Abstract: Ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence. In this paper, we cast the problem of continuous adaptation into the learning-to-learn framework. We develop a simple gradient-based meta-learning algorithm suitable for adaptation in dynamically changing and adversarial scenarios. Additionally, we design a new multi-agent competitive environment, RoboSumo, and define iterated adaptation games for testing various aspects of continuous adaptation strategies. We demonstrate that meta-learning enables significantly more efficient adaptation than reactive baselines in the few-shot regime. Our experiments with a population of agents that learn and compete suggest that meta-learners are the fittest.
[ 1, 0, 0, 0, 0, 0 ]
Title: Alpha-Divergences in Variational Dropout, Abstract: We investigate the use of alternative divergences to Kullback-Leibler (KL) in variational inference (VI), based on the Variational Dropout \cite{kingma2015}. Stochastic gradient variational Bayes (SGVB) \cite{aevb} is a general framework for estimating the evidence lower bound (ELBO) in Variational Bayes. In this work, we extend the SGVB estimator by using alpha-divergences, which are alternatives to VI's KL objective. The Gaussian dropout can be seen as a local reparametrization trick of the SGVB objective. We extend the Variational Dropout to use alpha divergences for variational inference. Our results compare $\alpha$-divergence variational dropout with standard variational dropout with correlated and uncorrelated weight noise. We show that the $\alpha$-divergence with $\alpha \rightarrow 1$ (or KL divergence) is still a good measure for use in variational inference, in spite of the efficient use of Alpha-divergences for Dropout VI \cite{Li17}. $\alpha \rightarrow 1$ can yield the lowest training error, and optimizes a good lower bound for the ELBO among all values of the parameter $\alpha \in [0,\infty)$.
[ 1, 0, 0, 1, 0, 0 ]
Title: Dehn invariant of flexible polyhedra, Abstract: We prove that the Dehn invariant of any flexible polyhedron in Euclidean space of dimension greater than or equal to 3 is constant during the flexion. In dimensions 3 and 4 this implies that any flexible polyhedron remains scissors congruent to itself during the flexion. This proves the Strong Bellows Conjecture posed by Connelly in 1979. It was believed that this conjecture was disproved by Alexandrov and Connelly in 2009. However, we find an error in their counterexample. Further, we show that the Dehn invariant of a flexible polyhedron in either sphere or Lobachevsky space of dimension greater than or equal to 3 is constant during the flexion if and only if this polyhedron satisfies the usual Bellows Conjecture, i.e., its volume is constant during every flexion of it. Using previous results due to the first listed author, we deduce that the Dehn invariant is constant during the flexion for every bounded flexible polyhedron in odd-dimensional Lobachevsky space and for every flexible polyhedron with sufficiently small edge lengths in any space of constant curvature of dimension greater than or equal to 3.
[ 0, 0, 1, 0, 0, 0 ]
Title: Single Magnetic Impurity in Tilted Dirac Surface States, Abstract: We utilize a variational method to investigate the Kondo screening of a spin-1/2 magnetic impurity in tilted Dirac surface states with the Dirac cone tilted along the $k_y$-axis. We mainly study the effect of the tilting term on the binding energy and the spin-spin correlation between the magnetic impurity and conduction electrons, and compare the results with their counterparts in a two-dimensional helical metal. The binding energy has a critical value when the Dirac cone is slightly tilted. However, as the tilting term increases, the density of states around the Fermi surface becomes significant, such that the impurity and the host material always favor a bound state. The diagonal and the off-diagonal terms of the spin-spin correlation between the magnetic impurity and conduction electrons are also studied. Due to the spin-orbit coupling and the tilting of the spectra, various components of the spin-spin correlation show very strong anisotropy in coordinate space, and exhibit power-law decay with respect to the spatial displacements.
[ 0, 1, 0, 0, 0, 0 ]
Title: Leveraging the Path Signature for Skeleton-based Human Action Recognition, Abstract: Human action recognition in videos is one of the most challenging tasks in computer vision. One important issue is how to design discriminative features for representing spatial context and temporal dynamics. Here, we introduce a path signature feature to encode information from intra-frame and inter-frame contexts. A key step towards leveraging this feature is to construct the proper trajectories (paths) for the data stream. In each frame, the correlated constraints of human joints are treated as small paths, and then the spatial path signature features are extracted from them. In video data, the evolution of these spatial features over time can also be regarded as paths from which the temporal path signature features are extracted. Eventually, all these features are concatenated to constitute the input vector of a fully connected neural network for action classification. Experimental results on four standard benchmark action datasets, J-HMDB, SBU Dataset, Berkeley MHAD, and NTURGB+D, demonstrate that the proposed approach achieves state-of-the-art accuracy even in comparison with recent deep learning based models.
[ 1, 0, 0, 0, 0, 0 ]
Title: How Many Subpopulations is Too Many? Exponential Lower Bounds for Inferring Population Histories, Abstract: Reconstruction of population histories is a central problem in population genetics. Existing coalescent-based methods, like the seminal work of Li and Durbin (Nature, 2011), attempt to solve this problem using sequence data but have no rigorous guarantees. Determining the amount of data needed to correctly reconstruct population histories is a major challenge. Using a variety of tools from information theory, the theory of extremal polynomials, and approximation theory, we prove new sharp information-theoretic lower bounds on the problem of reconstructing population structure -- the history of multiple subpopulations that merge, split and change sizes over time. Our lower bounds are exponential in the number of subpopulations, even when reconstructing recent histories. We demonstrate the sharpness of our lower bounds by providing algorithms for distinguishing and learning population histories with matching dependence on the number of subpopulations.
[ 0, 0, 0, 0, 1, 0 ]
Title: Source localization in an ocean waveguide using supervised machine learning, Abstract: Source localization in ocean acoustics is posed as a machine learning problem in which data-driven methods learn source ranges directly from observed acoustic data. The pressure received by a vertical linear array is preprocessed by constructing a normalized sample covariance matrix (SCM) and used as the input. Three machine learning methods (feed-forward neural networks (FNN), support vector machines (SVM) and random forests (RF)) are investigated in this paper, with a focus on the FNN. The range estimation problem is solved both as a classification problem and as a regression problem by these three machine learning algorithms. The results of range estimation for the Noise09 experiment are compared for FNN, SVM, RF and conventional matched-field processing and demonstrate the potential of machine learning for underwater source localization.
[ 1, 1, 0, 0, 0, 0 ]
Title: Mining Illegal Insider Trading of Stocks: A Proactive Approach, Abstract: Illegal insider trading of stocks is based on releasing non-public information (e.g., new product launch, quarterly financial report, acquisition or merger plan) before the information is made public. Detecting illegal insider trading is difficult due to the complex, nonlinear, and non-stationary nature of the stock market. In this work, we present an approach that detects and predicts illegal insider trading proactively from large heterogeneous sources of structured and unstructured data using a deep-learning based approach combined with discrete signal processing on the time series data. In addition, we use a tree-based approach that visualizes events and actions to aid analysts in their understanding of large amounts of unstructured data. Using existing data, we have discovered that our approach has a good success rate in detecting illegal insider trading patterns.
[ 0, 0, 0, 1, 0, 1 ]
Title: Predictive Simulations for Tuning Electronic and Optical Properties of SubPc Derivatives, Abstract: Boron subphthalocyanine chloride is an electron donor material used in small molecule organic photovoltaics with an unusually large molecular dipole moment. Using first-principles calculations, we investigate enhancing the electronic and optical properties of boron subphthalocyanine chloride by substituting the boron and chlorine atoms with other trivalent and halogen atoms in order to modify the molecular dipole moment. Gas phase molecular structures and properties are predicted with hybrid functionals. Using positions and orientations of the known compounds as the starting coordinates for these molecules, stable crystalline structures are derived following a procedure that involves perturbation and accurate total energy minimization. Electronic structure and photonic properties of the predicted crystals are computed using the GW method and the Bethe-Salpeter equation, respectively. Finally, a simple transport model is used to demonstrate the importance of molecular dipole moments on device performance.
[ 0, 1, 0, 0, 0, 0 ]
Title: Free energy of formation of a crystal nucleus in incongruent solidification: Implication for modeling the crystallization of aqueous nitric acid droplets in type 1 polar stratospheric clouds, Abstract: Using the formalism of the classical nucleation theory, we derive an expression for the reversible work $W_*$ of formation of a binary crystal nucleus in a liquid binary solution of non-stoichiometric composition (incongruent crystallization). Applied to the crystallization of aqueous nitric acid (NA) droplets, the new expression more adequately takes account of the effect of nitric acid vapor compared to the conventional expression of MacKenzie, Kulmala, Laaksonen, and Vesala (MKLV) [J.Geophys.Res. 102, 19729 (1997)]. The predictions of both the MKLV and modified expressions for the average liquid-solid interfacial tension $\sigma^{ls}$ of nitric acid dihydrate (NAD) crystals are compared by using existing experimental data on the incongruent crystallization of aqueous NA droplets of composition relevant to polar stratospheric clouds (PSCs). The predictions based on the MKLV expression are higher by about 5% compared to predictions based on our modified expression. This results in similar differences between the predictions of both expressions for the solid-vapor interfacial tension $\sigma^{sv}$ of NAD crystal nuclei. The latter can be obtained by analyzing experimental data on crystal nucleation rates in aqueous NA droplets and exploiting the dominance of the surface-stimulated mode of crystal nucleation in small droplets and its negligibility in large ones. Applying that method, our expression for $W_*$ provides an estimate for $\sigma^{sv}$ of NAD in the range from 92 dyn/cm to 100 dyn/cm, while the MKLV expression predicts it in the range from 95 dyn/cm to 105 dyn/cm. The predictions of both expressions for $W_*$ become identical in the case of congruent crystallization; this was also demonstrated by applying our method to the nucleation of nitric acid trihydrate (NAT) crystals in PSC droplets of stoichiometric composition.
[ 0, 1, 0, 0, 0, 0 ]
Title: Ensemble learning with Conformal Predictors: Targeting credible predictions of conversion from Mild Cognitive Impairment to Alzheimer's Disease, Abstract: Most machine learning classifiers give predictions for new examples accurately, yet without indicating how trustworthy predictions are. In the medical domain, this hampers their integration in decision support systems, which could be useful in the clinical practice. We use a supervised learning approach that combines Ensemble learning with Conformal Predictors to predict conversion from Mild Cognitive Impairment to Alzheimer's Disease. Our goal is to enhance the classification performance (Ensemble learning) and complement each prediction with a measure of credibility (Conformal Predictors). Our results showed the superiority of the proposed approach over a similar ensemble framework with standard classifiers.
[ 0, 0, 0, 1, 0, 0 ]
Title: Repair Strategies for Storage on Mobile Clouds, Abstract: We study the data reliability problem for a community of devices forming a mobile cloud storage system. We consider the application of regenerating codes for file maintenance within a geographically-limited area. Such codes require lower bandwidth to regenerate lost data fragments compared to file replication or reconstruction. We investigate threshold-based repair strategies where data repair is initiated after a threshold number of data fragments have been lost due to node mobility. We show that at a low departure-to-repair rate regime, a lazy repair strategy in which repairs are initiated after several nodes have left the system outperforms eager repair in which repairs are initiated after a single departure. This optimality is reversed when nodes are highly mobile. We further compare distributed and centralized repair strategies and derive the optimal repair threshold for minimizing the average repair cost per unit of time, as a function of underlying code parameters. In addition, we examine cooperative repair strategies and show performance improvements compared to non-cooperative codes. We investigate several models for the time needed for node repair including a simple fixed time model that allows for the computation of closed-form expressions and a more realistic model that takes into account the number of repaired nodes. We derive the conditions under which the former model approximates the latter. Finally, an extended model where additional failures are allowed during the repair process is investigated. Overall, our results establish the joint effect of code design and repair algorithms on the maintenance cost of distributed storage systems.
[ 1, 0, 0, 0, 0, 0 ]
Title: Learning from MOM's principles: Le Cam's approach, Abstract: We obtain estimation error rates for estimators obtained by aggregation of regularized median-of-means tests, following a construction of Le Cam. The results hold with exponentially large probability -- as in the Gaussian framework with independent noise -- under only weak moment assumptions on the data and without assuming independence between noise and design. Any norm may be used for regularization. When it has some sparsity-inducing power, we recover sparse rates of convergence. The procedure is robust since a large part of the data may be corrupted; these outliers have nothing to do with the oracle we want to reconstruct. Our general risk bound is of order \begin{equation*} \max\left(\mbox{minimax rate in the i.i.d. setup}, \frac{\text{number of outliers}}{\text{number of observations}}\right) \enspace. \end{equation*} In particular, the number of outliers may be as large as (number of data) $\times$ (minimax rate) without affecting this rate. The other data do not have to be identically distributed but should only have equivalent $L^1$ and $L^2$ moments. For example, the minimax rate $s \log(ed/s)/N$ of recovery of an $s$-sparse vector in $\mathbb{R}^d$ is achieved with exponentially large probability by a median-of-means version of the LASSO when the noise has $q_0$ moments for some $q_0>2$, the entries of the design matrix have $C_0\log(ed)$ moments, and the dataset is corrupted by up to $C_1 s \log(ed/s)$ outliers.
[ 0, 0, 1, 1, 0, 0 ]
Title: Generalized Log-sine integrals and Bell polynomials, Abstract: In this paper, we investigate the integral of $x^n\log^m(\sin(x))$ for natural numbers $m$ and $n$. In doing so, we recover some well-known results and remark on some relations to the log-sine integral $\operatorname{Ls}_{n+m+1}^{(n)}(\theta)$. Later, we use properties of Bell polynomials to find a closed expression for the derivative of the central binomial and shifted central binomial coefficients in terms of polygamma functions and harmonic numbers.
[ 0, 0, 1, 0, 0, 0 ]
Title: Towards a realistic NNLIF model: Analysis and numerical solver for excitatory-inhibitory networks with delay and refractory periods, Abstract: The Network of Noisy Leaky Integrate and Fire (NNLIF) model describes the behavior of a neural network at the mesoscopic level. It is one of the simplest self-contained mean-field models considered for that purpose. Even so, to study the mathematical properties of the model some simplifications were necessary [Cáceres-Carrillo-Perthame (2011), Cáceres-Perthame (2014), Cáceres-Schneider (2017)], which disregard crucial phenomena. In this work we deal with the general NNLIF model without simplifications. It involves a network with two populations (excitatory and inhibitory), with transmission delays between the neurons and where the neurons remain in a refractory state for a certain time. We have studied the number of steady states in terms of the model parameters, the long time behaviour via the entropy method and Poincaré's inequality, blow-up phenomena, and the importance of transmission delays between excitatory neurons to prevent blow-up and to give rise to synchronous solutions. Besides analytical results, we have presented a numerical solver for this model, based on high order flux-splitting WENO schemes and an explicit third order TVD Runge-Kutta method, in order to describe the wide range of phenomena exhibited by the network: blow-up, asynchronous/synchronous solutions and instability/stability of the steady states; the solver also allows us to observe the time evolution of the firing rates, refractory states and the probability distributions of the excitatory and inhibitory populations.
[ 0, 0, 1, 0, 0, 0 ]