Dataset schema:
- ID: int64 (1 to 21k)
- TITLE: string (lengths 7 to 239)
- ABSTRACT: string (lengths 7 to 2.76k)
- Computer Science: int64 (0 or 1)
- Physics: int64 (0 or 1)
- Mathematics: int64 (0 or 1)
- Statistics: int64 (0 or 1)
- Quantitative Biology: int64 (0 or 1)
- Quantitative Finance: int64 (0 or 1)
16,201
An Ensemble Quadratic Echo State Network for Nonlinear Spatio-Temporal Forecasting
Spatio-temporal data and processes are prevalent across a wide variety of scientific disciplines. These processes are often characterized by nonlinear time dynamics that include interactions across multiple scales of spatial and temporal variability. The data sets associated with many of these processes are increasing in size due to advances in automated data measurement, management, and numerical simulator output. Nonlinear spatio-temporal models have only recently seen interest in statistics, but there are many classes of such models in the engineering and geophysical sciences. Traditionally, these models are more heuristic than those that have been presented in the statistics literature, but are often intuitive and quite efficient computationally. We show here that with fairly simple, but important, enhancements, the echo state network (ESN) machine learning approach can be used to generate long-lead forecasts of nonlinear spatio-temporal processes, with reasonable uncertainty quantification, and at only a fraction of the computational expense of traditional parametric nonlinear spatio-temporal models.
Labels: Statistics
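For illustration, a minimal echo state network sketch in NumPy: a fixed random reservoir driven by the input, with only the linear readout trained by ridge regression. Dimensions and data are placeholders, and the paper's quadratic state augmentation, ensembling, and uncertainty quantification are omitted.

```python
# Minimal echo state network (ESN) sketch -- illustrative only, not the
# authors' ensemble quadratic ESN. A random recurrent "reservoir" is driven
# by the input; only the linear readout is trained (ridge regression).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 3, 200                                # illustrative sizes
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # spectral radius < 1 (echo state property)

def run_reservoir(U):
    """Collect reservoir states for an input sequence U of shape (T, n_in)."""
    x, states = np.zeros(n_res), []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train the readout for a one-step-ahead forecast on placeholder data.
U = rng.standard_normal((500, n_in))
X, Y = run_reservoir(U[:-1]), U[1:]
ridge = 1e-6
W_out = Y.T @ X @ np.linalg.inv(X.T @ X + ridge * np.eye(n_res))
Y_hat = X @ W_out.T                                  # forecasts for the training sequence
```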
16,202
Profit Maximization for Online Advertising Demand-Side Platforms
We develop an optimization model and corresponding algorithm for the management of a demand-side platform (DSP), whereby the DSP aims to maximize its own profit while acquiring valuable impressions for its advertiser clients. We formulate the problem of profit maximization for a DSP interacting with ad exchanges in a real-time bidding environment in a cost-per-click/cost-per-action pricing model. Our proposed formulation leads to a nonconvex optimization problem due to the joint optimization over both impression allocation and bid price decisions. We use Lagrangian relaxation to develop a tractable convex dual problem, which, due to the properties of second-price auctions, may be solved efficiently with subgradient methods. We propose a two-phase solution procedure, whereby in the first phase we solve the convex dual problem using a subgradient algorithm, and in the second phase we use the previously computed dual solution to set bid prices and then solve a linear optimization problem to obtain the allocation probability variables. On several synthetic examples, we demonstrate that our proposed solution approach leads to superior performance over a baseline method that is used in practice.
Labels: Computer Science, Mathematics
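A hedged sketch of the first phase described above: projected subgradient ascent on a concave Lagrangian dual. The dual function here is a toy placeholder, not the paper's DSP profit-maximization model.

```python
# Projected subgradient ascent on a concave dual, in the spirit of the
# paper's phase one. g(lam) and its gradient are placeholders; the paper's
# actual dual arises from its impression-allocation/bid-price model.
import numpy as np

def dual_and_subgradient(lam):
    # Placeholder concave dual: g(lam) = -0.5*||lam - c||^2, gradient c - lam.
    c = np.array([1.0, 2.0, 0.5])
    return -0.5 * np.sum((lam - c) ** 2), c - lam

lam = np.zeros(3)
for k in range(1, 201):
    _, sub = dual_and_subgradient(lam)
    lam = np.maximum(lam + (1.0 / k) * sub, 0.0)  # diminishing step, project onto lam >= 0
print(lam)  # approaches the dual optimum of the placeholder problem
```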
16,203
Traces of surfactants can severely limit the drag reduction of superhydrophobic surfaces
Superhydrophobic surfaces (SHSs) have the potential to achieve large drag reduction for internal and external flow applications. However, experiments have shown inconsistent results, with many studies reporting significantly reduced performance. Recently, it has been proposed that surfactants, ubiquitous in flow applications, could be responsible, by creating adverse Marangoni stresses. Yet, testing this hypothesis is challenging. Careful experiments with purified water show large interfacial stresses and, paradoxically, adding surfactants yields barely measurable drag increases. This suggests that other physical processes, such as thermal Marangoni stresses or interface deflection, could explain the lower performance. To test the surfactant hypothesis, we perform the first numerical simulations of flows over a SHS inclusive of surfactant kinetics. These simulations reveal that surfactant-induced stresses are significant at extremely low concentrations, potentially yielding a no-slip boundary condition on the air--water interface (the "plastron") for surfactant amounts below typical environmental values. These stresses decrease as the streamwise distance between plastron stagnation points increases. We perform microchannel experiments with thermally-controlled SHSs consisting of streamwise parallel gratings, which confirm this numerical prediction. We introduce a new, unsteady test of surfactant effects. When we rapidly remove the driving pressure following a loading phase, a backflow develops at the plastron, which can only be explained by surfactant gradients formed in the loading phase. This demonstrates the significance of surfactants in deteriorating drag reduction, and thus the importance of including surfactant stresses in SHS models. Our time-dependent protocol can assess the impact of surfactants in SHS testing and guide future mitigating designs.
Labels: Physics
16,204
Non-Convex Rank/Sparsity Regularization and Local Minima
This paper considers the problem of recovering either a low rank matrix or a sparse vector from observations of linear combinations of the vector or matrix elements. Recent methods replace the non-convex regularization with $\ell_1$ or nuclear norm relaxations. It is well known that this approach can be guaranteed to recover a near optimal solution if a so called restricted isometry property (RIP) holds. On the other hand, it is also known to perform soft thresholding, which results in a shrinking bias that can degrade the solution. In this paper we study an alternative non-convex regularization term. This formulation does not penalize elements that are larger than a certain threshold, making it much less prone to small solutions. Our main theoretical results show that if a RIP holds then the stationary points are often well separated, in the sense that their differences must be of high cardinality/rank. Thus, with a suitable initial solution the approach is unlikely to fall into a bad local minimum. Our numerical tests show that the approach is likely to converge to a better solution than standard $\ell_1$/nuclear-norm relaxation even when starting from trivial initializations. In many cases our results can also be used to verify global optimality of our method.
Labels: Mathematics
16,205
Supervised Speech Separation Based on Deep Learning: An Overview
Speech separation is the task of separating target speech from background interference. Traditionally, speech separation is studied as a signal processing problem. A more recent approach formulates speech separation as a supervised learning problem, where the discriminative patterns of speech, speakers, and background noise are learned from training data. Over the past decade, many supervised separation algorithms have been put forward. In particular, the recent introduction of deep learning to supervised speech separation has dramatically accelerated progress and boosted separation performance. This article provides a comprehensive overview of the research on deep learning based supervised speech separation in the last several years. We first introduce the background of speech separation and the formulation of supervised separation. Then we discuss three main components of supervised separation: learning machines, training targets, and acoustic features. Much of the overview is on separation algorithms where we review monaural methods, including speech enhancement (speech-nonspeech separation), speaker separation (multi-talker separation), and speech dereverberation, as well as multi-microphone techniques. The important issue of generalization, unique to supervised learning, is discussed. This overview provides a historical perspective on how advances are made. In addition, we discuss a number of conceptual issues, including what constitutes the target source.
Labels: Computer Science
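As a concrete example of one widely used training target in supervised separation (the ideal ratio mask, a standard construction rather than anything specific to this overview), the sketch below computes an IRM from parallel speech and noise signals and applies it to the mixture.

```python
# Ideal ratio mask (IRM) sketch: a common supervised-separation training
# target, computed from parallel clean-speech and noise signals. Generic
# illustration on placeholder signals, not code from the overview article.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(0)
speech = rng.standard_normal(fs)          # placeholders for real recordings
noise = 0.5 * rng.standard_normal(fs)
mix = speech + noise

_, _, S = stft(speech, fs=fs, nperseg=512)
_, _, N = stft(noise, fs=fs, nperseg=512)
_, _, M = stft(mix, fs=fs, nperseg=512)

irm = np.sqrt(np.abs(S) ** 2 / (np.abs(S) ** 2 + np.abs(N) ** 2 + 1e-12))
# A learned model would predict `irm` from features of `mix`; masking the
# mixture spectrogram and inverting approximates the clean speech.
_, speech_est = istft(irm * M, fs=fs, nperseg=512)
```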
16,206
Sample Efficient Feature Selection for Factored MDPs
In reinforcement learning, the state of the real world is often represented by feature vectors. However, not all of the features may be pertinent for solving the current task. We propose Feature Selection Explore and Exploit (FS-EE), an algorithm that automatically selects the necessary features while learning a Factored Markov Decision Process, and prove that under mild assumptions, its sample complexity scales with the in-degree of the dynamics of just the necessary features, rather than the in-degree of all features. This can result in a much better sample complexity when the in-degree of the necessary features is smaller than the in-degree of all features.
Labels: Computer Science, Statistics
16,207
Sgoldstino-less inflation and low energy SUSY breaking
We assess the range of validity of sgoldstino-less inflation in a scenario of low energy supersymmetry breaking. We first analyze the consistency conditions that an effective theory of the inflaton and goldstino superfields should satisfy in order to be faithfully described by a sgoldstino-less model. Enlarging the scope of previous studies, we investigate the case where the effective field theory cut-off, and hence also the sgoldstino mass, are inflaton-dependent. We then introduce a UV complete model where one can realize successfully sgoldstino-less inflation and gauge mediation of supersymmetry breaking, combining the alpha-attractor mechanism and a weakly coupled model of spontaneous breaking of supersymmetry. In this class of models we find that, given current limits on superpartner masses, the gravitino mass has a lower bound of the order of an MeV, i.e. we cannot reach very low supersymmetry breaking scales. On the plus side, we recognize that in this framework, one can derive the complete superpartner spectrum as well as compute inflation observables, the reheating temperature, and address the gravitino overabundance problem. We then show that further constraints come from collider results and inflation observables. Their non-trivial interplay seems a staple feature of phenomenological studies of supersymmetric inflationary models.
Labels: Physics
16,208
SideEye: A Generative Neural Network Based Simulator of Human Peripheral Vision
Foveal vision makes up less than 1% of the visual field. The other 99% is peripheral vision. Precisely what human beings see in the periphery is both obvious and mysterious in that we see it with our own eyes but can't visualize what we see, except in controlled lab experiments. Degradation of information in the periphery is far more complex than what might be mimicked with a radial blur. Rather, behaviorally-validated models hypothesize that peripheral vision measures a large number of local texture statistics in pooling regions that overlap and grow with eccentricity. In this work, we develop a new method for peripheral vision simulation by training a generative neural network on a behaviorally-validated full-field synthesis model. By achieving a 21,000-fold reduction in running time, our approach is the first to combine realism and speed of peripheral vision simulation to a degree that provides a whole new way to approach visual design: through peripheral visualization.
Labels: Computer Science
16,209
Carbon stars in the X-Shooter Spectral Library: II. Comparison with models
In a previous paper, we assembled a collection of medium-resolution spectra of 35 carbon stars, covering optical and near-infrared wavelengths from 400 to 2400 nm. The sample includes stars from the Milky Way and the Magellanic Clouds, with a variety of $(J-K_s)$ colors and pulsation properties. In the present paper, we compare these observations to a new set of high-resolution synthetic spectra, based on hydrostatic model atmospheres. We find that the broad-band colors and the molecular-band strengths measured by spectrophotometric indices match those of the models when $(J-K_s)$ is bluer than about 1.6, while the redder stars require either additional reddening or dust emission or both. Using a grid of models to fit the full observed spectra, we estimate the most likely atmospheric parameters $T_\mathrm{eff}$, $\log(g)$, $[\mathrm{Fe/H}]$ and C/O. These parameters derived independently in the optical and near-infrared are generally consistent when $(J-K_s)<1.6$. The temperatures found based on either wavelength range are typically within $\pm$100K of each other, and $\log(g)$ and $[\mathrm{Fe/H}]$ are consistent with the values expected for this sample. The reddest stars ($(J-K_s)$ $>$ 1.6) are divided into two families, characterized by the presence or absence of an absorption feature at 1.53\,$\mu$m, generally associated with HCN and C$_2$H$_2$. Stars from the first family begin to be more affected by circumstellar extinction. The parameters found using optical or near-infrared wavelengths are still compatible with each other, but the error bars become larger. In stars showing the 1.53\,$\mu$m feature, which are all large-amplitude variables, the effects of pulsation are strong and the spectra are poorly matched with hydrostatic models. For these, atmospheric parameters could not be derived reliably, and dynamical models are needed for proper interpretation.
Labels: Physics
16,210
Four-variable expanders over the prime fields
Let $\mathbb{F}_p$ be a prime field of order $p>2$, and $A$ be a set in $\mathbb{F}_p$ with very small size in terms of $p$. In this note, we show that the number of distinct cubic distances determined by points in $A\times A$ satisfies \[|(A-A)^3+(A-A)^3|\gg |A|^{8/7},\] which improves a result due to Yazici, Murphy, Rudnev, and Shkredov. In addition, we investigate some new families of expanders in four and five variables. We also give an explicit exponent of a problem of Bukh and Tsimerman, namely, we prove that \[\max \left\lbrace |A+A|, |f(A, A)|\right\rbrace\gg |A|^{6/5},\] where $f(x, y)$ is a quadratic polynomial in $\mathbb{F}_p[x, y]$ that is not of the form $g(\alpha x+\beta y)$ for some univariate polynomial $g$.
Labels: Mathematics
16,211
Limit multiplicities for ${\rm SL}_2(\mathcal{O}_F)$ in ${\rm SL}_2(\mathbb{R}^{r_1}\oplus\mathbb{C}^{r_2})$
We prove that the family of lattices ${\rm SL}_2(\mathcal{O}_F)$, $F$ running over number fields with fixed archimedean signature $(r_1, r_2)$, in ${\rm SL}_2(\mathbb{R}^{r_1}\oplus\mathbb{C}^{r_2})$ has the limit multiplicity property.
Labels: Mathematics
16,212
On comparing clusterings: an element-centric framework unifies overlaps and hierarchy
Clustering is one of the most universal approaches for understanding complex data. A pivotal aspect of clustering analysis is quantitatively comparing clusterings; clustering comparison is the basis for tasks such as clustering evaluation, consensus clustering, and tracking the temporal evolution of clusters. For example, the extrinsic evaluation of clustering methods requires comparing the uncovered clusterings to planted clusterings or known metadata. Yet, as we demonstrate, existing clustering comparison measures have critical biases which undermine their usefulness, and no measure accommodates both overlapping and hierarchical clusterings. Here we unify the comparison of disjoint, overlapping, and hierarchically structured clusterings by proposing a new element-centric framework: elements are compared based on the relationships induced by the cluster structure, as opposed to the traditional cluster-centric philosophy. We demonstrate that, in contrast to standard clustering similarity measures, our framework does not suffer from critical biases and naturally provides unique insights into how the clusterings differ. We illustrate the strengths of our framework by revealing new insights into the organization of clusters in two applications: the improved classification of schizophrenia based on the overlapping and hierarchical community structure of fMRI brain networks, and the disentanglement of various social homophily factors in Facebook social networks. The universality of clustering suggests far-reaching impact of our framework throughout all areas of science.
Labels: Computer Science, Statistics
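A toy sketch of the element-centric idea: score each element by how much its cluster-induced relationships to the other elements change between two clusterings. The plain co-membership representation used here is a simplification; the paper's framework handles overlaps and hierarchy with a richer affinity.

```python
# Toy illustration of element-centric clustering comparison: compare two
# clusterings through each element's cluster-induced relationship vector
# (here, simple co-membership). Not the paper's exact measure.
import numpy as np

def comembership(labels):
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

def element_similarity(labels_a, labels_b):
    A, B = comembership(labels_a), comembership(labels_b)
    # Per-element agreement between the two induced relationship vectors.
    return 1.0 - np.abs(A - B).mean(axis=1)

a = [0, 0, 1, 1, 2]
b = [0, 0, 0, 1, 2]
per_element = element_similarity(a, b)   # low values flag elements whose context changed
print(per_element, per_element.mean())
```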
16,213
Parametric Oscillatory Instability in a Fabry-Perot Cavity of the Einstein Telescope with different mirror materials
We discuss the parametric oscillatory instability in a Fabry-Perot cavity of the Einstein Telescope. Unstable combinations of elastic and optical modes for two possible configurations of the third-generation gravitational wave detector are deduced. The results are compared with those for the gravitational wave interferometers LIGO and LIGO Voyager.
Labels: Physics
16,214
Mapping properties of the Hilbert and Fubini--Study maps in Kähler geometry
Suppose that we have a compact Kähler manifold $X$ with a very ample line bundle $\mathcal{L}$. We prove that any positive definite hermitian form on the space $H^0 (X,\mathcal{L})$ of holomorphic sections can be written as an $L^2$-inner product with respect to an appropriate hermitian metric on $\mathcal{L}$. We apply this result to show that the Fubini--Study map, which associates a hermitian metric on $\mathcal{L}$ to a hermitian form on $H^0 (X,\mathcal{L})$, is injective.
Labels: Mathematics
16,215
Space-time Constructivism vs. Modal Provincialism: Or, How Special Relativistic Theories Needn't Show Minkowski Chronogeometry
In 1835 Lobachevski entertained the possibility of multiple (rival) geometries. This idea has reappeared on occasion (e.g., Poincaré) but didn't become key in space-time foundations prior to Brown's \emph{Physical Relativity} (at the end, the interpretive key to the book). A crucial difference between his constructivism and orthodox "space-time realism" is modal scope. Constructivism applies to all local classical field theories, including those with multiple geometries. But the orthodox view provincially assumes a unique geometry, as familiar theories (Newton, Special Relativity, Nordström, and GR) have. They serve as the orthodox "canon." Their historical roles suggest a story of inevitable progress. Physics literature after c. 1920 is relevant to orthodoxy mostly as commentary on the canon, which closed in the 1910s. The orthodox view explains the behavior of matter as the manifestation of the real space-time geometry, which works within the canon. The orthodox view, Whiggish history, and the canon relate symbiotically. If one considers a theory outside the canon, space-time realism sheds little light on matter's behavior. Worse, it gives the wrong answer when applied to an example arguably in the canon, massive scalar gravity with universal coupling. Which is the true geometry---the flat metric from the Poincaré symmetry, the conformally flat metric exhibited by material rods and clocks, or both---or is the question bad? How does space-time realism explain that all matter fields see the same curved geometry, given so many ways to mix and match? Constructivist attention to dynamical details is vindicated; geometrical shortcuts disappoint. The more exhaustive exploration of relativistic field theories (especially massive) in particle physics is an underused resource for foundations.
Labels: Physics
16,216
On the representation of integers by binary quadratic forms
In this note we show that for a given irreducible binary quadratic form $f(x,y)$ with integer coefficients, whenever we have $f(x,y) = f(u,v)$ for integers $x,y,u,v$, there exists a rational automorphism of $f$ which sends $(x,y)$ to $(u,v)$.
Labels: Mathematics
16,217
The First Planetary Microlensing Event with Two Microlensed Source Stars
We present the analysis of microlensing event MOA-2010-BLG-117, and show that the light curve can only be explained by the gravitational lensing of a binary source star system by a star with a Jupiter mass ratio planet. It was necessary to modify standard microlensing modeling methods to find the correct light curve solution for this binary-source, binary-lens event. We are able to measure a strong microlensing parallax signal, which yields the masses of the host star, $M_* = 0.58\pm 0.11 M_\odot$, and planet $m_p = 0.54\pm 0.10 M_{\rm Jup}$ at a projected star-planet separation of $a_\perp = 2.42\pm 0.26\,$AU, corresponding to a semi-major axis of $a = 2.9{+1.6\atop -0.6}\,$AU. Thus, the system resembles a half-scale model of the Sun-Jupiter system with a half-Jupiter mass planet orbiting a half-solar mass star at very roughly half of Jupiter's orbital distance from the Sun. The source stars are slightly evolved, and by requiring them to lie on the same isochrone, we can constrain the source to lie in the near side of the bulge at a distance of $D_S = 6.9 \pm 0.7\,$kpc, which implies a distance to the planetary lens system of $D_L = 3.5\pm 0.4\,$kpc. The ability to model unusual planetary microlensing events, like this one, will be necessary to extract precise statistical information from the planned large exoplanet microlensing surveys, such as the WFIRST microlensing survey.
Labels: Physics
16,218
A Learning-Based Approach for Lane Departure Warning Systems with a Personalized Driver Model
Misunderstanding of driver correction behaviors (DCB) is the primary reason for false warnings of lane-departure-prediction systems. We propose a learning-based approach to predicting unintended lane-departure behaviors (LDB) and the chance for drivers to bring the vehicle back to the lane. First, in this approach, a personalized driver model for lane-departure and lane-keeping behavior is established by combining the Gaussian mixture model and the hidden Markov model. Second, based on this model, we develop an online model-based prediction algorithm to predict the forthcoming vehicle trajectory and judge whether the driver will demonstrate an LDB or a DCB. We also develop a warning strategy based on the model-based prediction algorithm that allows the lane-departure warning system to be acceptable for drivers according to the predicted trajectory. In addition, the naturalistic driving data of 10 drivers is collected through the University of Michigan Safety Pilot Model Deployment program to train the personalized driver model and validate this approach. We compare the proposed method with a basic time-to-lane-crossing (TLC) method and a TLC-directional sequence of piecewise lateral slopes (TLC-DSPLS) method. The results show that the proposed approach can reduce the false-warning rate to 3.07\%.
Labels: Computer Science
16,219
Internet of Things: Survey on Security and Privacy
The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or "things". While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected, along with the ad-hoc nature of the system, further exacerbates the situation. Therefore, security and privacy have emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey of the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of the technologies and architecture used. This work also focuses on intrinsic IoT vulnerabilities as well as the security challenges of various layers, based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published on the IoT at the time of writing and relates them to the security conjuncture of the field and its projection to the future.
Labels: Computer Science
16,220
A Connectedness Constraint for Learning Sparse Graphs
Graphs are naturally sparse objects that are used to study many problems involving networks, for example, distributed learning and graph signal processing. In some cases, the graph is not given, but must be learned from the problem and available data. Often it is desirable to learn sparse graphs. However, making a graph highly sparse can split it into several disconnected components, leading to several separate networks. The main difficulty is that connectedness is often treated as a combinatorial property, making it hard to enforce in, e.g., convex optimization problems. In this article, we show how connectedness of undirected graphs can be formulated as an analytical property and can be enforced as a convex constraint. In particular, we show how the constraint relates to the distributed consensus problem and graph Laplacian learning. Using simulated and real data, we perform experiments to learn sparse and connected graphs from data.
Labels: Statistics
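One standard analytical characterization behind such formulations: an undirected graph is connected iff the second-smallest eigenvalue of its Laplacian (the Fiedler value) is strictly positive. The check below is a plain numerical illustration, not the paper's convex formulation.

```python
# Connectedness as an analytical property: a graph is connected iff the
# second-smallest Laplacian eigenvalue (Fiedler value) is strictly positive.
import numpy as np

def fiedler_value(W):
    """W: symmetric nonnegative adjacency/weight matrix."""
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.eigvalsh(L)[1]   # eigenvalues come back in ascending order

W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)  # node 3 is isolated
print(fiedler_value(W) > 1e-10)            # False: this graph is disconnected
```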
16,221
Quotients of finite-dimensional operators by symmetry representations
A finite-dimensional operator that commutes with some symmetry group admits quotient operators, which are determined by the choice of associated representation. Taking the quotient isolates the part of the spectrum supporting the chosen representation and reduces the complexity of the problem; however, it is not uniquely defined. Here we present a computationally simple way of choosing a special basis for the space of intertwiners, allowing us to construct a quotient that reflects the structure of the original operator. This quotient construction generalizes previous definitions for discrete graphs, which either dealt with restricted group actions or only with the trivial representation. We also extend the method to quantum graphs, which simplifies previous constructions within this context, answers an open question regarding self-adjointness and offers alternative viewpoints in terms of a scattering approach. Applications to isospectrality are discussed, together with numerous examples and comparisons with previous results.
Labels: Mathematics
16,222
A probabilistic framework for the control of systems with discrete states and stochastic excitation
A probabilistic framework is proposed for the optimization of efficient switched control strategies for physical systems dominated by stochastic excitation. In this framework, the equation for the state trajectory is replaced with an equivalent equation for its probability distribution function in the constrained optimization setting. This allows for a large class of control rules to be considered, including hysteresis and a mix of continuous and discrete random variables. The problem of steering atmospheric balloons within a stratified flowfield is a motivating application; the same approach can be extended to a variety of mixed-variable stochastic systems and to new classes of control rules.
Labels: Computer Science, Mathematics
16,223
Supersonic Flow onto Solid Wedges, Multidimensional Shock Waves and Free Boundary Problems
When an upstream steady uniform supersonic flow impinges onto a symmetric straight-sided wedge, governed by the Euler equations, there are two possible steady oblique shock configurations if the wedge angle is less than the detachment angle -- the steady weak shock with supersonic or subsonic downstream flow (determined by whether the wedge angle is less than or larger than the sonic angle) and the steady strong shock with subsonic downstream flow, both of which satisfy the entropy condition. The fundamental issue -- whether one or both of the steady weak and strong shocks are physically admissible solutions -- has been vigorously debated over the past eight decades. In this paper, we survey some recent developments on the stability analysis of the steady shock solutions in both the steady and dynamic regimes. For the static stability, we first show how the stability problem can be formulated as an initial-boundary value type problem and then reformulate it into a free boundary problem when the perturbation of both the upstream steady supersonic flow and the wedge boundary are suitably regular and small, and we finally present some recent results on the static stability of the steady supersonic and transonic shocks. For the dynamic stability for potential flow, we first show how the stability problem can be formulated as an initial-boundary value problem and then use the self-similarity of the problem to reduce it into a boundary value problem and further reformulate it into a free boundary problem, and we finally survey some recent developments in solving this free boundary problem for the existence of the Prandtl-Meyer configurations that tend to the steady weak supersonic or transonic oblique shock solutions as time goes to infinity. Some further developments and mathematical challenges in this direction are also discussed.
Labels: Physics, Mathematics
16,224
A Zero-Shot Learning application in Deep Drawing process using Hyper-Process Model
One of the consequences of passing from the mass production to the mass customization paradigm in today's industrialized world is the need to increase the flexibility and responsiveness of manufacturing companies. High-mix / low-volume production forces constant accommodation of unknown product variants, which ultimately leads to long periods of machine calibration. The difficulty with machine calibration is that experience is required, together with a set of experiments, to meet the final product quality. Unfortunately, the number of possible combinations of machine parameters is so high that it is difficult to build empirical knowledge. Because of this, trial-and-error approaches are normally taken, making one-of-a-kind products unviable. Therefore, a Zero-Shot Learning (ZSL) based approach called the hyper-process model (HPM), which learns the relation among multiple tasks, is used as a way to shorten the calibration phase. Assuming each product variant is a task to solve, first, a shape analysis is performed on the data to learn common modes of deformation between tasks, and second, a mapping between these modes and task descriptions is established. Ultimately, the present work makes two main contributions: 1) the formulation of an industrial problem in a ZSL setting, where new process models can be generated for process optimization, and 2) the definition of a regression problem in the domain of ZSL. For this purpose, a 2-d deep drawing process was simulated using the Abaqus simulator, and a significant number of process models were collected to test the effectiveness of the approach. The obtained results show that it is possible to learn new tasks without any available data (labeled or unlabeled) by leveraging information about already existing tasks, speeding up the calibration phase and allowing quicker integration of new products into manufacturing systems.
Labels: Computer Science, Statistics
16,225
Fracton topological phases from strongly coupled spin chains
We provide a new perspective on fracton topological phases, a class of three-dimensional topologically ordered phases with unconventional fractionalized excitations that are either completely immobile or only mobile along particular lines or planes. We demonstrate that a wide range of these fracton phases can be constructed by strongly coupling mutually intersecting spin chains and explain via a concrete example how such a coupled-spin-chain construction illuminates the generic properties of a fracton phase. In particular, we describe a systematic translation from each coupled-spin-chain construction into a parton construction where the partons correspond to the excitations that are mobile along lines. Remarkably, our construction of fracton phases is inherently based on spin models involving only two-spin interactions and thus brings us closer to their experimental realization.
Labels: Physics
16,226
The Internet as Quantitative Social Science Platform: Insights from a Trillion Observations
With the large-scale penetration of the internet, for the first time, humanity has become linked by a single, open, communications platform. Harnessing this fact, we report insights arising from a unified internet activity and location dataset of an unparalleled scope and accuracy drawn from over a trillion (1.5$\times 10^{12}$) observations of end-user internet connections, with temporal resolution of just 15 min over 2006-2012. We first apply this dataset to the expansion of the internet itself over 1,647 urban agglomerations globally. We find that unique IP per capita counts reach saturation at approximately one IP per three people, and take, on average, 16.1 years to achieve, eclipsing the estimated 100- and 60-year saturation times for steam-power and electrification respectively. Next, we use intra-diurnal internet activity features to up-scale traditional over-night sleep observations, producing the first global estimate of over-night sleep duration in 645 cities over 7 years. We find statistically significant variation between continental, national and regional sleep durations including some evidence of global sleep duration convergence. Finally, we estimate the relationship between internet concentration and economic outcomes in 411 OECD regions and find that the internet's expansion is associated with negative or positive productivity gains, depending strongly on sectoral considerations. To our knowledge, our study is the first of its kind to use online/offline activity of the entire internet to infer social science insights, demonstrating the unparalleled potential of the internet as a social data-science platform.
Labels: Computer Science, Physics, Statistics
16,227
Deep Embedding Kernel
In this paper, we propose a novel supervised learning method called Deep Embedding Kernel (DEK). DEK combines the advantages of deep learning and kernel methods in a unified framework. More specifically, DEK is a learnable kernel represented by a newly designed deep architecture. Compared with pre-defined kernels, this kernel can be explicitly trained to map data to an optimized high-level feature space where the data have features favorable to the application. Compared with typical deep learning using SoftMax or logistic regression as the top layer, DEK is expected to be more generalizable to new data. Experimental results show that DEK has superior performance compared to typical machine learning methods in identity detection, classification, regression, dimension reduction, and transfer learning.
Labels: Statistics
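A minimal forward-pass sketch of the idea, with random untrained weights: embed each point with one network, then map a symmetric pair representation to a kernel value with another. Sizes, the pair representation, and the activations are illustrative guesses, not the paper's architecture.

```python
# Deep-embedding-kernel sketch: an embedding network plus a pair-scoring
# network. Forward pass only; training (e.g., on pair labels) is omitted.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((10, 4)) * 0.1   # embedding network: 4 features -> 10-dim embedding
W2 = rng.standard_normal((1, 20)) * 0.1   # kernel network: 20 pair features -> scalar

def embed(x):
    return np.tanh(W1 @ x)

def dek(x, y):
    ex, ey = embed(x), embed(y)
    pair = np.concatenate([ex * ey, ex + ey])      # symmetric in (x, y), so k(x, y) == k(y, x)
    return 1.0 / (1.0 + np.exp(-(W2 @ pair)[0]))   # sigmoid keeps the kernel value in (0, 1)

x, y = rng.standard_normal(4), rng.standard_normal(4)
print(dek(x, y), dek(y, x))                        # equal, by symmetry of the pair features
```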
16,228
A Radio-Inertial Localization and Tracking System with BLE Beacons Prior Maps
In this paper, we develop a system for the low-cost indoor localization and tracking problem using radio signal strength indicator, Inertial Measurement Unit (IMU), and magnetometer sensors. We develop a novel and simplified probabilistic IMU motion model as the proposal distribution of the sequential Monte-Carlo technique to track the robot trajectory. Our algorithm can globally localize and track a robot with a priori unknown location, given an informative prior map of the Bluetooth Low Energy (BLE) beacons. Also, we formulate the problem as an optimization problem that serves as the Back-end of the algorithm mentioned above (Front-end). Thus, by simultaneously solving for the robot trajectory and the map of BLE beacons, we recover a continuous and smooth trajectory of the robot, corrected locations of the BLE beacons, and the time-varying IMU bias. The evaluations achieved using hardware show that through the proposed closed-loop system the localization performance can be improved; furthermore, the system becomes robust to the error in the map of beacons by feeding back the optimized map to the Front-end.
Labels: Computer Science
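A generic sequential Monte Carlo measurement update with a Gaussian RSSI likelihood, as a simplified stand-in for the front-end described above; the path-loss parameters, beacon map, and IMU proposal are placeholders.

```python
# Generic particle-filter measurement update for RSSI localization.
# Simplified illustration: one beacon, a log-distance path-loss model with
# assumed parameters, and no IMU motion model (the paper's proposal step).
import numpy as np

rng = np.random.default_rng(0)
n_p = 1000
particles = rng.uniform(0, 10, (n_p, 2))            # 2D position hypotheses
weights = np.full(n_p, 1.0 / n_p)

beacon_xy, rssi_meas = np.array([5.0, 5.0]), -60.0  # one BLE beacon (illustrative)

def rssi_model(d, tx=-40.0, n=2.0):
    # Log-distance path loss; tx and n are assumed, not measured, values.
    return tx - 10.0 * n * np.log10(np.maximum(d, 0.1))

d = np.linalg.norm(particles - beacon_xy, axis=1)
weights *= np.exp(-0.5 * ((rssi_meas - rssi_model(d)) / 4.0) ** 2)  # sigma = 4 dB
weights /= weights.sum()
estimate = weights @ particles                      # posterior-mean position estimate
```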
16,229
Simple groups, generation and probabilistic methods
It is well known that every finite simple group can be generated by two elements and this leads to a wide range of problems that have been the focus of intensive research in recent years. In this survey article we discuss some of the extraordinary generation properties of simple groups, focussing on topics such as random generation, $(a,b)$-generation and spread, as well as highlighting the application of probabilistic methods in the proofs of many of the main results. We also present some recent work on the minimal generation of maximal and second maximal subgroups of simple groups, which has applications to the study of subgroup growth and the generation of primitive permutation groups.
Labels: Mathematics
16,230
Large polaron evolution in anatase TiO2 due to carrier and temperature dependence of electron-phonon coupling
The electronic and magnetotransport properties of reduced anatase TiO2 epitaxial thin films are analyzed considering various polaronic effects. Unexpectedly, the mobility increases with increasing carrier concentration, which rarely happens in common metallic systems. We find that screening of the electron-phonon (e-ph) coupling by excess carriers is necessary to explain this unusual dependence. We also find that the magnetoresistance (MR) can be decomposed into a linear and a quadratic component, separately characterizing the transport and trapping behavior of carriers as a function of temperature. The various transport behaviors can be organized into a single phase diagram, which clarifies the nature of the large polaron in this material.
Labels: Physics
16,231
AMPNet: Asynchronous Model-Parallel Training for Dynamic Neural Networks
New types of machine learning hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs. However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon. We already see the limitations of existing algorithms for models that exploit structured input via complex and instance-dependent control flow, which prohibits minibatching. We present an asynchronous model-parallel (AMP) training algorithm that is specifically motivated by training on networks of interconnected devices. Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently even for small minibatch sizes, resulting in significantly shorter overall training times. Our framework opens the door for scaling up a new class of deep learning models that cannot be efficiently trained today.
Labels: Computer Science, Statistics
16,232
Random data wave equations
Nowadays we have many methods allowing us to exploit the regularising properties of the linear part of a nonlinear dispersive equation (such as the KdV equation, the nonlinear wave or the nonlinear Schroedinger equations) in order to prove well-posedness in low regularity Sobolev spaces. By well-posedness in low regularity Sobolev spaces we mean that less regularity than the one imposed by the energy methods is required (the energy methods do not exploit the dispersive properties of the linear part of the equation). In many cases these methods to prove well-posedness in low regularity Sobolev spaces lead to optimal results in terms of the regularity of the initial data. By optimal we mean that if one requires slightly less regularity then the corresponding Cauchy problem becomes ill-posed in the Hadamard sense. We call the Sobolev spaces in which these ill-posedness results hold spaces of supercritical regularity. More recently, methods to prove probabilistic well-posedness in Sobolev spaces of supercritical regularity were developed. More precisely, by probabilistic well-posedness we mean that one endows the corresponding Sobolev space of supercritical regularity with a non-degenerate probability measure and then one shows that almost surely with respect to this measure one can define a (unique) global flow. However, in most of the cases when the methods to prove probabilistic well-posedness apply, there is no information about the measure transported by the flow. Very recently, a method to prove that the transported measure is absolutely continuous with respect to the initial measure was developed. In such a situation, we have a measure which is quasi-invariant under the corresponding flow. The aim of these lectures is to present all of the above described developments in the context of the nonlinear wave equation.
Labels: Mathematics
16,233
The Persistent Homotopy Type Distance
We introduce the persistent homotopy type distance dHT to compare real valued functions defined on possibly different homotopy equivalent topological spaces. The underlying idea in the definition of dHT is to measure the minimal shift that is necessary to apply to one of the two functions in order that the sublevel sets of the two functions become homotopically equivalent. This distance is interesting in connection with persistent homology. Indeed, our main result states that dHT still provides an upper bound for the bottleneck distance between the persistence diagrams of the intervening functions. Moreover, because homotopy equivalences are weaker than homeomorphisms, this implies a lifting of the standard stability results provided by the L-infty distance and the natural pseudo-distance dNP. From a different standpoint, we prove that dHT extends the L-infty distance and dNP in two ways. First, we show that, appropriately restricting the category of objects to which dHT applies, it can be made to coincide with the other two distances. Finally, we show that dHT has an interpretation in terms of interleavings that naturally places it in the family of distances used in persistence theory.
Labels: Computer Science, Mathematics
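The abstract's two key relations, restated in paraphrased notation (the symbols and the exact side conditions are ours, not quoted from the paper):

```latex
% For suitable functions f on X and g on Y, with X and Y homotopy equivalent:
d_B\bigl(\mathrm{Dgm}(f), \mathrm{Dgm}(g)\bigr) \;\le\; d_{HT}(f, g),
% and, when the two functions live on the same space (X = Y):
d_{HT}(f, g) \;\le\; \|f - g\|_{\infty}.
```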
16,234
Optimal algorithms for smooth and strongly convex distributed optimization in networks
In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e. master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_g}(1+\Delta\tau)\ln(1/\varepsilon))$, where $\kappa_g$ is the condition number of the (global) function to optimize, $\Delta$ is the diameter of the network, and $\tau$ (resp. $1$) is the time needed to communicate values between two neighbors (resp. perform local computations). For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, that achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_l}(1+\frac{\tau}{\sqrt{\gamma}})\ln(1/\varepsilon))$, where $\kappa_l$ is the condition number of the local functions and $\gamma$ is the (normalized) eigengap of the gossip matrix used for communication between nodes. We then verify the efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and classification by logistic regression.
Labels: Mathematics, Statistics
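For context, a plain decentralized gradient method with gossip averaging on local quadratics; this is the naive baseline for the setting above, not the MSDA algorithm, and with a constant step it only reaches a neighborhood of the optimum.

```python
# Decentralized gradient descent with gossip averaging (NOT MSDA).
# Each node i holds f_i(x) = 0.5*(x - c_i)^2; the global optimum is c.mean().
import numpy as np

n = 5
c = np.arange(n, dtype=float)            # local optima; global optimum is c.mean() = 2.0
x = np.zeros(n)

# Doubly stochastic gossip matrix on a ring: average self and the two neighbors.
Wg = np.zeros((n, n))
for i in range(n):
    Wg[i, i] = Wg[i, (i - 1) % n] = Wg[i, (i + 1) % n] = 1.0 / 3.0

step = 0.1
for _ in range(500):
    x = Wg @ x - step * (x - c)          # gossip mixing plus a local gradient step
print(x, c.mean())                       # near-consensus around 2.0 (up to O(step) bias)
```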
16,235
Superconductivity in ultra-thin carbon nanotubes and carbyne-nanotube composites: an ab-initio approach
The superconductivity of the 4-angstrom single-walled carbon nanotubes (SWCNTs) was discovered more than a decade ago, and marked the breakthrough of finding superconductivity in pure elemental undoped carbon compounds. The van Hove singularities in the electronic density of states at the Fermi level in combination with a large Debye temperature of the SWCNTs are expected to cause an impressively large superconducting gap. We have developed an innovative computational algorithm specially tailored for the investigation of superconductivity in ultrathin SWCNTs. We predict the superconducting transition temperature of various thin carbon nanotubes resulting from electron-phonon coupling by an ab-initio method, taking into account the effect of radial pressure, symmetry, chirality (N,M) and bond lengths. By optimizing the geometry of the carbon nanotubes, a maximum Tc of 60K is found. We also use our method to calculate the Tc of a linear carbon chain embedded in the center of (5,0) SWCNTs. The strong curvature in the (5,0) carbon nanotubes in the presence of the inner carbon chain provides an alternative path to increase the Tc of this carbon composite by a factor of 2.2 with respect to the empty (5,0) SWCNTs.
Labels: Physics
16,236
Exponential lower bounds for history-based simplex pivot rules on abstract cubes
The behavior of the simplex algorithm is a widely studied subject. Specifically, the question of the existence of a polynomial pivot rule for the simplex algorithm is of major importance. Here, we give exponential lower bounds for three history-based pivot rules. Those rules decide their next step based on memory of the past steps. In particular, we study Zadeh's least entered rule, Johnson's least-recently basic rule and Cunningham's least-recently considered (or round-robin) rule. We give exponential lower bounds on Acyclic Unique Sink Orientations (AUSO) of the abstract cube, for all of these pivot rules. For Johnson's rule our bound is the first superpolynomial one in any context; for Zadeh's it is the first one for AUSO. Those two are our main results.
Labels: Computer Science
16,237
Machine learning regression on hyperspectral data to estimate multiple water parameters
In this paper, we present a regression framework involving several machine learning models to estimate water parameters based on hyperspectral data. Measurements from a multi-sensor field campaign, conducted on the River Elbe, Germany, represent the benchmark dataset. It contains hyperspectral data and the five water parameters chlorophyll a, green algae, diatoms, CDOM and turbidity. We apply a PCA for the high-dimensional data as a possible preprocessing step. Then, we evaluate the performance of the regression framework with and without this preprocessing step. The regression results of the framework clearly reveal the potential of estimating water parameters based on hyperspectral data with machine learning. The proposed framework provides the basis for further investigations, such as adapting the framework to estimate water parameters of different inland waters.
Labels: Quantitative Biology
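A sketch of the framework's overall shape with scikit-learn, on synthetic placeholder data (the Elbe campaign data, the specific models, and the number of components are not reproduced here):

```python
# Shape of the framework: optional PCA preprocessing followed by a machine
# learning regressor, evaluated by cross-validation. Data are synthetic
# placeholders; the model choice and sizes are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 120))      # 200 samples x 120 spectral bands
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(200)  # one water parameter

model = make_pipeline(PCA(n_components=10),
                      RandomForestRegressor(n_estimators=100, random_state=0))
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```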
16,238
Critical factors and enablers of food quality and safety compliance risk management in the Vietnamese seafood supply chain
Recently, along with the emergence of food scandals, food supply chains have faced ever-increasing pressure to comply with food quality and safety regulations and standards. This paper aims to explore the critical factors of compliance risk in the food supply chain, with an illustrative case from the Vietnamese seafood industry. To this end, this study takes advantage of both primary and secondary data sources through a comprehensive review of industrial and scientific papers, combined with expert interviews. Findings show that there are three main groups of critical factors influencing compliance risk: challenges originating from the Vietnamese food supply chain itself, the characteristics of regulations and standards, and the business environment. Furthermore, the author proposes enablers to eliminate compliance risks for food supply chain managers, as well as recommendations for government and other influencers and supporters.
Labels: Quantitative Finance
16,239
Asymptotic behavior of semilinear parabolic equations on the circle with time almost-periodic/recurrent dependence
We study the topological structure of the $\omega$-limit sets of the skew-product semiflow generated by the following scalar reaction-diffusion equation \begin{equation*} u_{t}=u_{xx}+f(t,u,u_{x}),\,\,t>0,\,x\in S^{1}=\mathbb{R}/2\pi \mathbb{Z}, \end{equation*} where $f(t,u,u_x)$ is $C^2$-admissible with time-recurrent structure including almost-periodicity and almost-automorphy. Contrary to the time-periodic cases (for which any $\omega$-limit set can be imbedded into a periodically forced circle flow), it is shown that one cannot expect that any $\omega$-limit set can be imbedded into an almost-periodically forced circle flow even if $f$ is uniformly almost-periodic in $t$. More precisely, we prove that, for a given $\omega$-limit set $\Omega$, if ${\rm dim}V^c(\Omega)\leq 1$ ($V^c(\Omega)$ is the center space associated with $\Omega$), then $\Omega$ is either spatially-homogeneous or spatially-inhomogeneous; and moreover, any spatially-inhomogeneous $\Omega$ can be imbedded into a time-recurrently forced circle flow (resp. imbedded into an almost periodically-forced circle flow if $f$ is uniformly almost-periodic in $t$). On the other hand, when ${\rm dim}V^c(\Omega)>1$, it is pointed out that the above embedding property cannot hold anymore. Furthermore, we also show the new phenomena of the residual imbedding into a time-recurrently forced circle flow (resp. into an almost automorphically-forced circle flow if $f$ is uniformly almost-periodic in $t$) provided that $\dim V^c(\Omega)=2$ and $\dim V^u(\Omega)$ is odd. All these results reveal that for such systems there are essential differences between time-periodic cases and non-periodic cases.
Labels: Mathematics
16,240
Exploring 4D Quantum Hall Physics with a 2D Topological Charge Pump
The discovery of topological states of matter has profoundly augmented our understanding of phase transitions in physical systems. Instead of local order parameters, topological phases are described by global topological invariants and are therefore robust against perturbations. A prominent example thereof is the two-dimensional integer quantum Hall effect. It is characterized by the first Chern number which manifests in the quantized Hall response induced by an external electric field. Generalizing the quantum Hall effect to four-dimensional systems leads to the appearance of a novel non-linear Hall response that is quantized as well, but described by a 4D topological invariant - the second Chern number. Here, we report on the first observation of a bulk response with intrinsic 4D topology and the measurement of the associated second Chern number. By implementing a 2D topological charge pump with ultracold bosonic atoms in an angled optical superlattice, we realize a dynamical version of the 4D integer quantum Hall effect. Using a small atom cloud as a local probe, we fully characterize the non-linear response of the system by in-situ imaging and site-resolved band mapping. Our findings pave the way to experimentally probe higher-dimensional quantum Hall systems, where new topological phases with exotic excitations are predicted.
Labels: Physics
16,241
Non-hermitian operator modelling of basic cancer cell dynamics
We propose a dynamical system of tumor cell proliferation based on operatorial methods. The approach we propose is quantum-like: we use ladder and number operators to describe the birth and death of healthy and tumor cells, and the evolution is ruled by a non-hermitian Hamiltonian which includes, in a non-reversible way, the basic biological mechanisms we consider for the system. We show that this approach is rather efficient in describing some processes of the cells. We further add some medical treatment, described by adding a suitable term in the Hamiltonian, which controls and limits the growth of tumor cells, and we propose an optimal approach to stop, and reverse, this growth.
Labels: Quantitative Biology
16,242
Giant ripples on comet 67P/Churyumov-Gerasimenko sculpted by sunset thermal wind
Explaining the unexpected presence of dune-like patterns at the surface of the comet 67P/Churyumov-Gerasimenko requires conceptual and quantitative advances in the understanding of surface and outgassing processes. We show here that vapor flow emitted by the comet around its perihelion spreads laterally in a surface layer, due to the strong pressure difference between zones illuminated by sunlight and those in shadow. For such thermal winds to be dense enough to transport grains -- ten times greater than previous estimates -- outgassing must take place through a surface porous granular layer, and that layer must be composed of grains whose roughness lowers cohesion consistently with contact mechanics. The linear stability analysis of the problem, entirely tested against laboratory experiments, quantitatively predicts the emergence of bedforms in the observed wavelength range, and their propagation at the scale of a comet revolution. Although generated by a rarefied atmosphere, they are paradoxically analogous to ripples emerging on granular beds submitted to viscous shear flows. This quantitative agreement shows that our understanding of the coupling between hydrodynamics and sediment transport is able to account for bedform emergence in extreme conditions and provides a reliable tool to predict the erosion and accretion processes controlling the evolution of small solar system bodies.
Labels: Physics
16,243
A ROS multi-ontology references services: OWL reasoners and application prototyping issues
The challenge of sharing and communicating information is crucial in complex human-robot interaction (HRI) scenarios. Ontologies and symbolic reasoning are the state-of-the-art approaches for a natural representation of knowledge, especially within the Semantic Web domain. In such a context, scripted paradigms have been adopted to achieve high expressiveness. Nevertheless, since symbolic reasoning is a high complexity problem, optimizing its performance requires a careful design of the knowledge. Specifically, a robot architecture requires the integration of several components implementing different behaviors and generating a series of beliefs. Most of the components are expected to access, manipulate, and reason upon a run-time generated semantic representation of knowledge grounding robot behaviors and perceptions through formal axioms, with soft real-time requirements.
Labels: Computer Science
16,244
Efficient Covariance Approximations for Large Sparse Precision Matrices
The use of sparse precision (inverse covariance) matrices has become popular because they allow for efficient algorithms for joint inference in high-dimensional models. Many applications require the computation of certain elements of the covariance matrix, such as the marginal variances, which may be non-trivial to obtain when the dimension is large. This paper introduces a fast Rao-Blackwellized Monte Carlo sampling based method for efficiently approximating selected elements of the covariance matrix. The variance and confidence bounds of the approximations can be precisely estimated without additional computational costs. Furthermore, a method that iterates over subdomains is introduced, and is shown to additionally reduce the approximation errors to practically negligible levels in an application on functional magnetic resonance imaging data. Both methods have low memory requirements, which is typically the bottleneck for competing direct methods.
Labels: Statistics
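For orientation, the naive Monte Carlo baseline that a Rao-Blackwellized estimator improves upon: sample x ~ N(0, Q^{-1}) through a Cholesky factor of the precision matrix Q and average x_i^2. Shown densely on a small tridiagonal Q; a real application would use sparse factorizations.

```python
# Naive Monte Carlo estimate of marginal variances (the diagonal of Q^{-1})
# from a precision matrix Q. The paper's Rao-Blackwellized estimator targets
# the same quantities with much lower Monte Carlo error.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
n = 50
Q = np.eye(n) * 2.0 - np.eye(n, k=1) * 0.5 - np.eye(n, k=-1) * 0.5  # tridiagonal precision

L = np.linalg.cholesky(Q)                       # Q = L L^T
Z = rng.standard_normal((n, 2000))
X = solve_triangular(L.T, Z, lower=False)       # columns ~ N(0, Q^{-1})
var_mc = (X ** 2).mean(axis=1)

print(np.max(np.abs(var_mc - np.diag(np.linalg.inv(Q)))))  # MC error vs the exact diagonal
```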
16,245
The Density of Numbers Represented by Diagonal Forms of Large Degree
Let $s \geq 3$ be a fixed positive integer and $a_1,\dots,a_s \in \mathbb{Z}$ be arbitrary. We show that, on average over $k$, the density of numbers represented by the degree $k$ diagonal form \[ a_1 x_1^k + \cdots + a_s x_s^k \] decays rapidly with respect to $k$.
Labels: Mathematics
16,246
Meta-Learning for Resampling Recommendation Systems
One possible approach to tackling class imbalance in classification tasks is to resample the training dataset, i.e., to drop some of its elements or to synthesize new ones. There exist several widely-used resampling methods. Recent research showed that the choice of resampling method significantly affects the quality of classification, which raises the resampling selection problem. Exhaustive search for the optimal resampling is time-consuming and hence of limited use. In this paper, we describe an alternative approach to resampling selection. We follow the meta-learning concept to build resampling recommendation systems, i.e., algorithms recommending resampling for datasets on the basis of their properties.
Labels: Computer Science, Statistics
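One of the standard resampling methods such a recommender chooses among, random oversampling of minority classes, sketched below; the meta-learning recommender itself (dataset meta-features plus a learner over them) is not shown.

```python
# Random oversampling: duplicate minority-class rows until all classes
# reach the majority-class count. One of the standard resampling options.
import numpy as np

def random_oversample(X, y, rng):
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xs, ys = [X], [y]
    for cls, cnt in zip(classes, counts):
        idx = np.flatnonzero(y == cls)
        extra = rng.choice(idx, size=n_max - cnt, replace=True)  # duplicates, possibly none
        Xs.append(X[extra])
        ys.append(y[extra])
    return np.vstack(Xs), np.concatenate(ys)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = (rng.random(100) < 0.1).astype(int)          # ~10% positives: imbalanced
Xb, yb = random_oversample(X, y, rng)
print(np.bincount(y), np.bincount(yb))           # balanced after resampling
```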
16,247
Robustness Analysis of Systems' Safety through a New Notion of Input-to-State Safety
In this paper, we propose a new robustness notion that is applicable for certifying systems' safety with respect to external disturbance signals. The proposed input-to-state safety (ISSf) notion allows us to certify systems' safety in the presence of disturbances, analogous to the notion of input-to-state stability (ISS) for analyzing systems' stability.
Labels: Computer Science, Mathematics
16,248
Estimating ground-level PM2.5 by fusing satellite and station observations: A geo-intelligent deep learning approach
Fusing satellite observations and station measurements to estimate ground-level PM2.5 is promising for monitoring PM2.5 pollution. A geo-intelligent approach, which incorporates geographical correlation into an intelligent deep learning architecture, is developed to estimate PM2.5. Specifically, it considers geographical distance and spatiotemporally correlated PM2.5 in a deep belief network (denoted as Geoi-DBN). Geoi-DBN can capture the essential features associated with PM2.5 from latent factors. It was trained and tested with data from China in 2015. The results show that Geoi-DBN performs significantly better than the traditional neural network. The cross-validation R increases from 0.63 to 0.94, and RMSE decreases from 29.56 to 13.68 $\mu$g/m$^3$. On the basis of the derived PM2.5 distribution, it is predicted that over 80% of the Chinese population live in areas with an annual mean PM2.5 of greater than 35 $\mu$g/m$^3$. This study provides a new perspective for air pollution monitoring in large geographic regions.
Labels: Physics
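A sketch of the geographical-correlation input idea: augment each station's features with an inverse-distance-weighted average of PM2.5 at its nearest neighbors. The weighting scheme, neighbor count, and data are illustrative assumptions, not the paper's exact construction.

```python
# Inverse-distance-weighted (IDW) neighbor feature: one simple way to encode
# geographical correlation before feeding features to a deep network.
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, (20, 2))     # station locations (km), placeholder
pm25 = rng.uniform(10, 120, 20)           # observed PM2.5 at stations, placeholder

def idw_neighbor_feature(i, k=5):
    d = np.linalg.norm(coords - coords[i], axis=1)
    d[i] = np.inf                          # exclude the station itself
    nn = np.argsort(d)[:k]                 # k nearest neighbors
    w = 1.0 / d[nn]
    return np.sum(w * pm25[nn]) / np.sum(w)

geo_feature = np.array([idw_neighbor_feature(i) for i in range(20)])
```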
16,249
An Overview of Recent Progress in Laser Wakefield Acceleration Experiments
The goal of this paper is to examine experimental progress in laser wakefield acceleration over the past decade (2004-2014), and to use trends in the data to understand some of the important physical processes. By examining a set of over 50 experiments, various trends concerning the relationship between plasma density, accelerator length, laser power and the final electron beam energy are revealed. The data suggest that current experiments are limited by dephasing and typically require some pulse evolution to reach the trapping threshold.
0
1
0
0
0
0
16,250
MindID: Person Identification from Brain Waves through Attention-based Recurrent Neural Network
Person identification technology recognizes individuals by exploiting their unique, measurable physiological and behavioral characteristics. However, state-of-the-art person identification systems have been shown to be vulnerable; e.g., contact lenses can trick iris recognition and fingerprint films can deceive fingerprint sensors. EEG (electroencephalography)-based identification, which uses the user's brainwave signals and offers a more resilient solution, has drawn much attention recently. However, its accuracy still requires improvement, and very little work has focused on the robustness and adaptability of such identification systems. We propose MindID, an EEG-based biometric identification approach that achieves higher accuracy and better robustness and adaptability. First, the EEG data patterns are analyzed, and the results show that the Delta pattern contains the most distinctive information for user identification. The decomposed Delta pattern is then fed into an attention-based encoder-decoder RNN (recurrent neural network) structure, which assigns varying attention weights to different EEG channels based on each channel's importance. The discriminative representations learned by the attention-based RNN are used to recognize the user's identity through a boosting classifier. The proposed approach is evaluated over three datasets (two local and one public). One local dataset (EID-M) is used for performance assessment, and the results illustrate that our model achieves an accuracy of 0.982, outperforming the baselines and the state-of-the-art. Another local dataset (EID-S) and a public dataset (EEG-S) are used to demonstrate robustness and adaptability, respectively. The results indicate that the proposed approach has the potential to be widely deployed in practical environments.
1
0
0
0
0
0
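The channel-attention step in the abstract above can be illustrated compactly. The sketch below assumes per-channel feature vectors coming from some encoder and shows only an additive-attention weighting over channels; the parameter shapes and names are illustrative, and the paper's full encoder-decoder and boosting classifier are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def channel_attention(H, W, v):
    """Score each EEG channel, softmax to weights, return the weighted sum.

    H: (channels, hidden) per-channel features, e.g. from an RNN encoder.
    W, v: learnable projection parameters (random here for illustration).
    """
    scores = np.tanh(H @ W) @ v                 # one score per channel
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over channels
    return weights @ H, weights                 # attended representation

channels, hidden = 14, 32
H = rng.standard_normal((channels, hidden))
W = rng.standard_normal((hidden, hidden))
v = rng.standard_normal(hidden)

rep, w = channel_attention(H, W, v)
print(rep.shape, w.round(3))
```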
16,251
Beyond black-boxes in Bayesian inverse problems and model validation: applications in solid mechanics of elastography
The present paper is motivated by one of the most fundamental challenges in inverse problems, that of quantifying model discrepancies and errors. While significant strides have been made in calibrating model parameters, the overwhelming majority of pertinent methods is based on the assumption of a perfect model. Motivated by problems in solid mechanics which, as all problems in continuum thermodynamics, are described by conservation laws and phenomenological constitutive closures, we argue that in order to quantify model uncertainty in a physically meaningful manner, one should break open the black-box forward model. In particular we propose formulating an undirected probabilistic model that explicitly accounts for the governing equations and their validity. This recasts the solution of both forward and inverse problems as probabilistic inference tasks where the problem's state variables should be compatible not only with the data but also with the governing equations. Even though the probability densities involved do not contain any black-box terms, they live in much higher-dimensional spaces. In combination with the intractability of the normalization constant of the undirected model employed, this poses significant challenges which we propose to address with a linearly-scaling, double-layer of Stochastic Variational Inference. We demonstrate the capabilities and efficacy of the proposed model in synthetic forward and inverse problems (with and without model error) in elastography.
0
0
0
1
0
0
16,252
On the Spectral Properties of Symmetric Functions
We characterize the approximate monomial complexity, sign monomial complexity, and approximate $L_1$ norm of symmetric functions in terms of simple combinatorial measures of the functions. Our characterization of the approximate $L_1$ norm solves the main conjecture in [AFH12]. As an application of the characterization of the sign monomial complexity, we prove a conjecture in [ZS09] and provide a characterization for the unbounded-error communication complexity of symmetric-xor functions.
1
0
0
0
0
0
16,253
Strong light shifts from near-resonant and polychromatic fields: comparison of Floquet theory and experiment
We present a non-perturbative numerical technique for calculating strong light shifts in atoms under the influence of multiple optical fields with arbitrary polarization. We confirm our technique experimentally by performing spectroscopy of a cloud of cold $^{87}$Rb atoms subjected to $\sim$ kW/cm$^2$ intensities of light at 1560.492 nm simultaneous with 1529.269 nm or 1529.282 nm. In these conditions the excited state resonances at 1529.26 nm and 1529.36 nm induce strong level mixing and the shifts are highly nonlinear. By absorption spectroscopy, we observe that the induced shifts of the $5P_{3/2}$ hyperfine Zeeman sublevels agree well with our theoretical predictions. We propose the application of our theory and experiment to accurate measurements of excited-state electric-dipole matrix elements.
0
1
0
0
0
0
16,254
Positive-rank elliptic curves arising from Pythagorean triples
In the present paper, we introduce some new families of elliptic curves with positive rank arising from Pythagorean triples.
0
0
1
0
0
0
16,255
Modeling Relational Data with Graph Convolutional Networks
Knowledge graphs enable a wide variety of applications, including question answering and information retrieval. Despite the great effort invested in their creation and maintenance, even the largest (e.g., Yago, DBPedia or Wikidata) remain incomplete. We introduce Relational Graph Convolutional Networks (R-GCNs) and apply them to two standard knowledge base completion tasks: Link prediction (recovery of missing facts, i.e. subject-predicate-object triples) and entity classification (recovery of missing entity attributes). R-GCNs are related to a recent class of neural networks operating on graphs, and are developed specifically to deal with the highly multi-relational data characteristic of realistic knowledge bases. We demonstrate the effectiveness of R-GCNs as a stand-alone model for entity classification. We further show that factorization models for link prediction such as DistMult can be significantly improved by enriching them with an encoder model to accumulate evidence over multiple inference steps in the relational graph, demonstrating a large improvement of 29.8% on FB15k-237 over a decoder-only baseline.
1
0
0
1
0
0
16,256
Application of shifted-Laplace preconditioners for the heterogeneous Helmholtz equation - Part 1: Data modelling
In several geophysical applications, such as full waveform inversion and data modelling, we face the solution of the inhomogeneous Helmholtz equation. The difficulties of solving the Helmholtz equation are twofold. Firstly, for large-scale problems we cannot calculate the inverse of the Helmholtz operator directly, so iterative algorithms must be used. Secondly, the Helmholtz operator is non-unitary and non-diagonalizable, which in turn deteriorates the performance of iterative algorithms (especially for high wavenumbers). To overcome this issue, we need to implement proper preconditioners for a Krylov subspace method to solve the problem efficiently. In this paper we incorporate shifted-Laplace operators to precondition the system of equations, and the generalized minimal residual (GMRES) method is then used to solve the problem iteratively. The numerical results show the effectiveness of the preconditioning operator in improving the convergence rate of the GMRES algorithm for the data modelling case. In the companion paper we discuss the application of the preconditioned data modelling algorithm in the context of frequency-domain full waveform inversion. However, the analysis of the degree of suitability of the preconditioners for the solution of the Helmholtz equation remains an ongoing field of study.
0
1
0
0
0
0
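A minimal 1D sketch of the scheme in the abstract above, assuming SciPy is available: the Helmholtz operator is preconditioned with an exact solve of a complex-shifted Laplacian (the shift $(1 + 0.5i)k^2$ is one commonly used choice, not necessarily the paper's), and GMRES is applied to the preconditioned system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1D Helmholtz model problem (-u'' - k^2 u = f) on a uniform grid, Dirichlet ends.
n, k = 400, 40.0
h = 1.0 / (n + 1)
lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
A = (lap - k**2 * sp.eye(n)).astype(complex).tocsc()
b = np.zeros(n, dtype=complex)
b[n // 2] = 1.0 / h                      # point source in the middle

# Shifted-Laplace preconditioner: exact solves with the complex-shifted operator.
shifted = (lap - (1.0 + 0.5j) * k**2 * sp.eye(n)).tocsc()
lu = spla.splu(shifted)
M = spla.LinearOperator(A.shape, lu.solve, dtype=complex)

u, info = spla.gmres(A, b, M=M, restart=50)
print(info, np.linalg.norm(A @ u - b))   # info == 0 signals convergence
```

In practice the shifted operator would itself be solved only approximately (e.g. by multigrid) rather than factored exactly; the exact LU here just keeps the sketch short.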
16,257
Algorithms For Longest Chains In Pseudo-Transitive Graphs
A directed acyclic graph G = (V, E) is pseudo-transitive with respect to a given subset of edges E1 if, for any edge ab in E1 and any edge bc in E, we have ac in E. We give algorithms for computing longest chains and demonstrate geometric applications that unify and improve some important past results. (For specific applications see the introduction.)
1
0
1
0
0
0
16,258
From time-series to complex networks: Application to the cerebrovascular flow patterns in atrial fibrillation
A network-based approach is presented to investigate the cerebrovascular flow patterns during atrial fibrillation (AF) with respect to normal sinus rhythm (NSR). AF, the most common cardiac arrhythmia with faster and irregular beating, has been recently and independently associated with the increased risk of dementia. However, the underlying hemodynamic mechanisms relating the two pathologies remain mainly undetermined so far; thus the contribution of modeling and refined statistical tools is valuable. Pressure and flow rate temporal series in NSR and AF are here evaluated along representative cerebral sites (from carotid arteries to capillary brain circulation), exploiting reliable artificially built signals recently obtained from an in silico approach. The complex network analysis reveals, in a synthetic and original way, a dramatic signal variation towards the distal/capillary cerebral regions during AF, which has no counterpart in NSR conditions. At the large artery level, networks obtained from both AF and NSR hemodynamic signals exhibit elongated and chained features, which are typical of pseudo-periodic series. These aspects are almost completely lost towards the microcirculation during AF, where the networks are topologically more circular and present random-like characteristics. As a consequence, all the physiological phenomena at microcerebral level ruled by periodicity - such as regular perfusion, mean pressure per beat, and average nutrient supply at cellular level - can be strongly compromised, since the AF hemodynamic signals assume irregular behaviour and random-like features. Through a powerful approach which is complementary to the classical statistical tools, the present findings further strengthen the potential link between AF hemodynamics and cognitive decline.
0
1
0
0
0
0
16,259
On the matrix $p$th root functions and generalized Fibonacci sequences
This study is devoted to the polynomial representation of the matrix $p$th root functions. The Fibonacci-Hörner decomposition of the matrix powers and some techniques arising from properties of generalized Fibonacci sequences, notably the Binet formula, serve to provide explicit formulas for the matrix $p$th roots. Special cases and illustrative numerical examples are given.
0
0
1
0
0
0
16,260
Investigating and Automating the Extraction of Thumbnails Produced by Image Viewers
Today, in digital forensics, images often provide important information within an investigation. However, not all images may still be available in a forensic investigation, for example because they were deleted. Data carving can be used in this case to retrieve deleted images, but the carving time is normally significant and the images may moreover have been overwritten by other data. One solution is to look at thumbnails of images that are no longer available. These thumbnails can often be found within databases created by either operating systems or image viewers. In the literature, most research and practice focus on the extraction of thumbnails from databases created by the operating system. There is little research on the thumbnails created by image viewers, as these thumbnails are application-driven in terms of pre-defined sizes, adjustments and storage location. Thumbnail databases from image viewers are nevertheless significant forensic artefacts for investigators, as these programs deal with large numbers of images. However, investigating these databases is so far a manual or semi-automatic task that consumes a large amount of forensic time. Therefore, in this paper we propose a new approach to automating the extraction of thumbnails produced by image viewers. We also test our approach with popular image viewers, using different storage structures and locations, to show its robustness.
1
0
0
0
0
0
16,261
Weighted network estimation by the use of topological graph metrics
Topological metrics of graphs provide a natural way to describe the prominent features of various types of networks. Graph metrics describe the structure and interplay of graph edges and have found applications in many scientific fields. In this work, graph metrics are used in network estimation by developing optimisation methods that incorporate prior knowledge of a network's topology. The derivatives of graph metrics are used in gradient descent schemes for weighted undirected network denoising, network completion, and network decomposition. The successful performance of our methodology is shown in a number of toy examples and real-world datasets. Most notably, our work establishes a new link between graph theory, network science and optimisation.
1
0
0
0
0
0
16,262
Petrophysical property estimation from seismic data using recurrent neural networks
Reservoir characterization involves the estimation of petrophysical properties from well-log data and seismic data. Estimating such properties is a challenging task due to the non-linearity and heterogeneity of the subsurface. Various attempts have been made to estimate petrophysical properties using machine learning techniques such as feed-forward neural networks and support vector regression (SVR). Recent advances in machine learning have shown promising results for recurrent neural networks (RNNs) in modeling complex sequential data such as videos and speech signals. In this work, we propose an algorithm for property estimation from seismic data using recurrent neural networks. An application of the proposed workflow to estimate density and p-wave impedance using seismic data shows promising results compared to feed-forward neural networks.
1
0
0
0
0
0
16,263
Deep Learning for Spatio-Temporal Modeling: Dynamic Traffic Flows and High Frequency Trading
Deep learning applies hierarchical layers of hidden variables to construct nonlinear high dimensional predictors. Our goal is to develop and train deep learning architectures for spatio-temporal modeling. Training a deep architecture is achieved by stochastic gradient descent (SGD) and drop-out (DO) for parameter regularization with a goal of minimizing out-of-sample predictive mean squared error. To illustrate our methodology, we predict the sharp discontinuities in traffic flow data, and secondly, we develop a classification rule to predict short-term futures market prices as a function of the order book depth. Finally, we conclude with directions for future research.
0
0
0
1
0
0
16,264
Multi-objective Bandits: Optimizing the Generalized Gini Index
We study the multi-armed bandit (MAB) problem where the agent receives a vectorial feedback that encodes many possibly competing objectives to be optimized. The goal of the agent is to find a policy, which can optimize these objectives simultaneously in a fair way. This multi-objective online optimization problem is formalized by using the Generalized Gini Index (GGI) aggregation function. We propose an online gradient descent algorithm which exploits the convexity of the GGI aggregation function, and controls the exploration in a careful way, achieving a distribution-free regret $\tilde{O}(T^{-1/2})$ with high probability. We test our algorithm on synthetic data as well as on an electric battery control problem where the goal is to trade off the use of the different cells of a battery in order to balance their respective degradation rates.
1
0
0
0
0
0
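The GGI objective and a projected-gradient step on it can be sketched as follows. This is an illustration, not the paper's algorithm: the weights $w_i = 2^{-(i-1)}$ are one common choice, the two-armed cost matrix is a toy, and the bandit feedback and exploration machinery are omitted (the expected costs are used directly).

```python
import numpy as np

def ggi(costs, w):
    """Generalized Gini Index: non-increasing weights on costs sorted decreasingly."""
    return np.sort(costs)[::-1] @ w

def project_simplex(p):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(p)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u > css / (np.arange(len(p)) + 1.0))[0][-1]
    return np.maximum(p - css[rho] / (rho + 1.0), 0.0)

D = 3
w = 1.0 / 2.0 ** np.arange(D)            # one common non-increasing weight choice

# Toy two-armed problem with vector-valued expected costs per arm.
mu = np.array([[1.0, 0.2, 0.2],          # arm 0
               [0.2, 1.0, 1.0]])         # arm 1: the opposite trade-off
alpha = np.array([0.5, 0.5])             # mixed policy over the two arms

for t in range(500):
    x = alpha @ mu                       # expected cost per objective
    order = np.argsort(-x)
    g_x = np.empty(D)
    g_x[order] = w                       # subgradient of GGI at x
    alpha = project_simplex(alpha - 0.1 / np.sqrt(t + 1.0) * (mu @ g_x))

print(alpha.round(3), ggi(alpha @ mu, w).round(3))
```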
16,265
Floquet multi-Weyl points in crossing-nodal-line semimetals
Weyl points with monopole charge $\pm 1$ have been extensively studied, however, real materials of multi-Weyl points, whose monopole charges are higher than $1$, have yet to be found. In this Rapid Communication, we show that nodal-line semimetals with nontrivial line connectivity provide natural platforms for realizing Floquet multi-Weyl points. In particular, we show that driving crossing nodal lines by circularly polarized light generates double-Weyl points. Furthermore, we show that monopole combination and annihilation can be observed in crossing-nodal-line semimetals and nodal-chain semimetals. These proposals can be experimentally verified in pump-probe angle-resolved photoemission spectroscopy.
0
1
0
0
0
0
16,266
Impact Ionization in $\beta-Ga_2O_3$
A theoretical investigation of extremely high field transport in an emerging wide-bandgap material, $\beta-Ga_2O_3$, is reported from first principles. The signature high-field effect explored here is impact ionization. The interaction between a valence-band electron and an excited electron is computed from the matrix elements of a screened Coulomb operator. Maximally localized Wannier functions (MLWFs) are utilized in computing the impact ionization rates. A full-band Monte Carlo (FBMC) simulation is carried out incorporating the impact ionization rates and electron-phonon scattering rates. This work brings out valuable insights on the impact ionization coefficient (IIC) of electrons in $\beta-Ga_2O_3$. The isolation of the $\Gamma$ point conduction band minimum, separated by a significantly high energy from other satellite band pockets, plays a vital role in determining ionization coefficients. IICs are calculated for electric fields ranging up to 8 MV/cm along different crystal directions. A Chynoweth fitting of the computed IICs is done to calibrate ionization models in device simulators.
0
1
0
0
0
0
16,267
Entombed: An archaeological examination of an Atari 2600 game
The act and experience of programming is, at its heart, a fundamentally human activity that results in the production of artifacts. When considering programming, therefore, it would be a glaring omission to not involve people who specialize in studying artifacts and the human activity that yields them: archaeologists. Here we consider this with respect to computer games, the focus of archaeology's nascent subarea of archaeogaming. One type of archaeogaming research is digital excavation, a technical examination of the code and techniques used in old games' implementation. We apply that in a case study of Entombed, an Atari 2600 game released in 1982 by US Games. The player in this game is, appropriately, an archaeologist who must make their way through a zombie-infested maze. Maze generation is a fruitful area for comparative retrogame archaeology, because a number of early games on different platforms featured mazes, and their variety of approaches can be compared. The maze in Entombed is particularly interesting: it is shaped in part by the extensive real-time constraints of the Atari 2600 platform, and also had to be generated efficiently and use next to no memory. We reverse engineered key areas of the game's code to uncover its unusual maze-generation algorithm, which we have also built a reconstruction of, and analyzed the mysterious table that drives it. In addition, we discovered what appears to be a 35-year-old bug in the code, as well as direct evidence of code-reuse practices amongst game developers. What further makes this game's development interesting is that, in an era where video games were typically solo projects, a total of five people were involved in various ways with Entombed. We piece together some of the backstory of the game's development and intoxicant-fueled design using interviews to complement our technical work. Finally, we contextualize this example in archaeology and lay the groundwork for a broader interdisciplinary discussion about programming, one that includes both computer scientists and archaeologists.
1
0
0
0
0
0
16,268
Particle Filters for Partially-Observed Boolean Dynamical Systems
Partially-observed Boolean dynamical systems (POBDS) are a general class of nonlinear models with application in estimation and control of Boolean processes based on noisy and incomplete measurements. The optimal minimum mean square error (MMSE) algorithms for POBDS state estimation, namely, the Boolean Kalman filter (BKF) and Boolean Kalman smoother (BKS), are intractable in the case of large systems, due to computational and memory requirements. To address this, we propose approximate MMSE filtering and smoothing algorithms based on the auxiliary particle filter (APF) method from sequential Monte-Carlo theory. These algorithms are used jointly with maximum-likelihood (ML) methods for simultaneous state and parameter estimation in POBDS models. In the presence of continuous parameters, ML estimation is performed using the expectation-maximization (EM) algorithm; we develop for this purpose a special smoother which reduces the computational complexity of the EM algorithm. The resulting particle-based adaptive filter is applied to a POBDS model of Boolean gene regulatory networks observed through noisy RNA-Seq time series data, and performance is assessed through a series of numerical experiments using the well-known cell cycle gene regulatory model.
0
0
1
1
0
0
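A plain bootstrap particle filter for a toy partially-observed Boolean system can be sketched in a few lines; the paper's algorithms use the auxiliary particle filter and add ML parameter estimation, both omitted here. The Boolean network map, noise levels, and sizes below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, T = 4, 500, 30      # state bits, particles, time steps
p, q = 0.05, 0.1          # process and observation bit-flip probabilities

def f(x):
    """Toy Boolean network map: each bit becomes the AND of itself and a neighbour."""
    return x & np.roll(x, 1, axis=-1)

# Simulate a hidden Boolean trajectory and noisy binary observations.
x = rng.integers(0, 2, d)
xs, ys = [], []
for _ in range(T):
    x = f(x) ^ (rng.random(d) < p)
    xs.append(x)
    ys.append(x ^ (rng.random(d) < q))

# Bootstrap particle filter over the 2^d Boolean state space.
P = rng.integers(0, 2, (N, d))
hits = 0
for x_true, y in zip(xs, ys):
    P = f(P) ^ (rng.random((N, d)) < p)              # propagate with noise
    flips = (P != y).sum(axis=1)                     # disagreement with observation
    w = q ** flips * (1.0 - q) ** (d - flips)
    w /= w.sum()
    P = P[rng.choice(N, size=N, p=w)]                # multinomial resampling
    x_hat = (P.mean(axis=0) > 0.5).astype(int)       # bitwise posterior estimate
    hits += np.array_equal(x_hat, x_true)
print(hits / T)                                      # fraction of exactly recovered states
```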
16,269
Cross-National Measurement of Polarization in Political Discourse: Analyzing floor debate in the U.S. and the Japanese legislatures
Political polarization in public space can seriously hamper the function and the integrity of contemporary democratic societies. In this paper, we propose a novel measure of such polarization, which, by way of simple topic modelling, quantifies differences in collective articulation of public agendas among relevant political actors. Unlike most other polarization measures, our measure allows cross-national comparison. Analyzing a large amount of speech records of legislative debate in the United States Congress and the Japanese Diet over a long period of time, we have reached two intriguing findings. First, on average, Japanese political actors are far more polarized in their issue articulation than their counterparts in the U.S., which is somewhat surprising given the recent notion of U.S. politics as highly polarized. Second, the polarization in each country shows its own temporal dynamics in response to a different set of factors. In Japan, structural factors such as the roles of the ruling party and the opposition often dominate such dynamics, whereas the U.S. legislature suffers from persistent ideological differences over particular issues between major political parties. The analysis confirms a strong influence of institutional differences on legislative debate in parliamentary democracies.
1
0
0
0
0
0
16,270
Size-Change Termination as a Contract
Program termination is an undecidable, yet important, property relevant to program verification, optimization, debugging, partial evaluation, and dependently-typed programming, among many other topics. This has given rise to a large body of work on static methods for conservatively predicting or enforcing termination. A simple effective approach is the size-change termination (SCT) method, which operates in two phases: (1) abstract programs into "size-change graphs," and (2) check these graphs for the size-change property: the existence of paths that lead to infinitely decreasing value sequences. This paper explores the termination problem starting from a different vantage point: we propose transposing the two phases of the SCT analysis by developing an operational semantics that accounts for the run-time checking of the size-change property, postponing program abstraction or avoiding it entirely. This choice has two important consequences: SCT can be monitored and enforced at run-time, and termination analysis can be rephrased as a traditional safety property and computed using existing abstract interpretation methods. We formulate this run-time size-change check as a contract. This provides the first run-time mechanism for checking termination in a general-purpose programming language. The result nicely complements existing contracts that enforce partial correctness to obtain the first contracts for total correctness. Our approach combines the robustness of SCT with precise information available at run-time. To obtain a sound and computable analysis, it is possible to apply existing abstract interpretation techniques directly to the operational semantics; there is no need for an abstraction tailored to size-change graphs. We apply higher-order symbolic execution to obtain a novel termination analysis that is competitive with existing, purpose-built termination analyzers.
1
0
0
0
0
0
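To convey the flavor of checking a size-change property at run time as a contract, here is a drastically simplified sketch: a decorator enforcing that a user-supplied size measure strictly decreases across nested recursive calls. The real SCT method tracks size-change graphs over all call paths; the decorator name and measures below are assumptions of this illustration, not the paper's mechanism.

```python
import functools

def decreasing(measure):
    """Contract: `measure` of the arguments must strictly decrease on recursion."""
    def wrap(fn):
        stack = []
        @functools.wraps(fn)
        def checked(*args):
            m = measure(*args)
            if stack and m >= stack[-1]:
                raise RuntimeError(
                    f"size-change violation in {fn.__name__}: {m} !< {stack[-1]}")
            stack.append(m)
            try:
                return fn(*args)
            finally:
                stack.pop()
        return checked
    return wrap

@decreasing(lambda xs: len(xs))
def total(xs):
    return 0 if not xs else xs[0] + total(xs[1:])

print(total([1, 2, 3]))       # passes: the list shrinks on every recursive call

@decreasing(lambda n: n)
def loop(n):
    return loop(n)            # violates the contract on the first recursive call

try:
    loop(5)
except RuntimeError as e:
    print(e)
```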
16,271
Disorder Dependent Valley Properties in Monolayer WSe2
We investigate the effect of disorder potential on exciton valley polarization and valley coherence in monolayer WSe2. By analyzing the polarization properties of photoluminescence, the valley coherence (VC) and valley polarization (VP) are quantified across the inhomogeneously broadened exciton resonance. We find that disorder plays a critical role in the exciton VC, while minimally affecting VP. Across different monolayer samples whose disorder is characterized by their Stokes shift (SS), VC decreases in samples with higher SS while VP again remains unchanged. These two methods consistently demonstrate that VC, as defined by the degree of linearly polarized photoluminescence, is more sensitive to the disorder potential, motivating further theoretical studies.
0
1
0
0
0
0
16,272
Results on the Hilbert coefficients and reduction numbers
Let $(R,\frak{m})$ be a $d$-dimensional Cohen-Macaulay local ring, $I$ an $\frak{m}$-primary ideal and $J$ a minimal reduction of $I$. In this paper we study the independence of reduction ideals and the behavior of the higher Hilbert coefficients. In addition, we give some examples in this regard.
0
0
1
0
0
0
16,273
Progress on Experiments towards LWFA-driven Transverse Gradient Undulator-Based FELs
Free Electron Lasers (FEL) are commonly regarded as the potential key application of laser wakefield accelerators (LWFA). It has been found that electron bunches exiting from state-of-the-art LWFAs exhibit a normalized 6-dimensional beam brightness comparable to those in conventional linear accelerators. Effectively exploiting this beneficial beam property for LWFA-based FELs is challenging due to the extreme initial conditions particularly in terms of beam divergence and energy spread. Several different approaches for capturing, reshaping and matching LWFA beams to suited undulators, such as bunch decompression or transverse-gradient undulator schemes, are currently being explored. In this article the transverse gradient undulator concept will be discussed with a focus on recent experimental achievements.
0
1
0
0
0
0
16,274
Non-escaping endpoints do not explode
The family of exponential maps $f_a(z)= e^z+a$ is of fundamental importance in the study of transcendental dynamics. Here we consider the topological structure of certain subsets of the Julia set $J(f_a)$. When $a\in (-\infty,-1)$, and more generally when $a$ belongs to the Fatou set of $f_a$, it is known that $J(f_a)$ can be written as a union of "hairs" and "endpoints" of these hairs. In 1990, Mayer proved for $a\in (-\infty,-1)$ that, while the set of endpoints is totally separated, its union with infinity is a connected set. Recently, Alhabib and the second author extended this result to the case where $a \in F(f_a)$, and showed that it holds even for the smaller set of all escaping endpoints. We show that, in contrast, the set of non-escaping endpoints together with infinity is totally separated. It turns out that this property is closely related to a topological structure known as a `spider's web'; in particular we give a new topological characterisation of spiders' webs that may be of independent interest. We also show how our results can be applied to Fatou's function, $z\mapsto z + 1 + e^{-z}$.
0
0
1
0
0
0
16,275
The Fornax Deep Survey with VST. II. Fornax A: a two-phase assembly caught in the act
As part of the Fornax Deep Survey with the ESO VLT Survey Telescope, we present new $g$- and $r$-band mosaics of the SW group of the Fornax cluster. They cover an area of $3 \times 2$ square degrees around the central galaxy NGC1316. The deep photometry, the high spatial resolution of OmegaCam and the large covered area allow us to study the galaxy structure, to trace stellar halo formation and to look at the galaxy environment. We map the surface brightness profile out to 33 arcmin ($\sim 200$ kpc $\sim 15 R_e$) from the galaxy centre, down to $\mu_g \sim 31$ mag arcsec$^{-2}$ and $\mu_r \sim 29$ mag arcsec$^{-2}$. This allows us to estimate the scales of the main components dominating the light distribution: the central spheroid, inside 5.5 arcmin ($\sim 33$ kpc), and the outer stellar envelope. Data analysis suggests that we are catching the second phase of the mass assembly of this galaxy in the act, since the accretion of smaller satellites is going on in both components. The outer envelope of NGC1316 still hosts the remnants of the accreted satellite galaxies that are forming the stellar halo. We discuss the possible formation scenarios for NGC1316 by comparing the observed properties (morphology, colors, gas content, kinematics and dynamics) with predictions from cosmological simulations of galaxy formation. We find that {\it i)} the central spheroid could result from at least one merging event, possibly between a pre-existing early-type disk galaxy and a lower-mass companion, and {\it ii)} the stellar envelope comes from the gradual accretion of small satellites.
0
1
0
0
0
0
16,276
Local decoding and testing of polynomials over grids
The well-known DeMillo-Lipton-Schwartz-Zippel lemma says that $n$-variate polynomials of total degree at most $d$ over grids, i.e. sets of the form $A_1 \times A_2 \times \cdots \times A_n$, form error-correcting codes (of distance at least $2^{-d}$ provided $\min_i\{|A_i|\}\geq 2$). In this work we explore their local decodability and (tolerant) local testability. While these aspects have been studied extensively when $A_1 = \cdots = A_n = \mathbb{F}_q$ are the same finite field, the setting when $A_i$'s are not the full field does not seem to have been explored before. In this work we focus on the case $A_i = \{0,1\}$ for every $i$. We show that for every field (finite or otherwise) there is a test whose query complexity depends only on the degree (and not on the number of variables). In contrast we show that decodability is possible over fields of positive characteristic (with query complexity growing with the degree of the polynomial and the characteristic), but not over the reals, where the query complexity must grow with $n$. As a consequence we get a natural example of a code (one with a transitive group of symmetries) that is locally testable but not locally decodable. Classical results on local decoding and testing of polynomials have relied on the 2-transitive symmetries of the space of low-degree polynomials (under affine transformations). Grids do not possess this symmetry: So we introduce some new techniques to overcome this handicap and in particular use the hypercontractivity of the (constant weight) noise operator on the Hamming cube.
1
0
0
0
0
0
16,277
A New Backpressure Algorithm for Joint Rate Control and Routing with Vanishing Utility Optimality Gaps and Finite Queue Lengths
The backpressure algorithm has been widely used as a distributed solution to the problem of joint rate control and routing in multi-hop data networks. By controlling a parameter $V$ in the algorithm, the backpressure algorithm can achieve an arbitrarily small utility optimality gap. However, this in turn brings in a large queue length at each node and hence causes large network delay. This phenomenon is known as the fundamental utility-delay tradeoff. The best known utility-delay tradeoff for general networks is $[O(1/V), O(V)]$ and is attained by a backpressure algorithm based on a drift-plus-penalty technique. This may suggest that to achieve an arbitrarily small utility optimality gap, the existing backpressure algorithms necessarily yield an arbitrarily large queue length. However, this paper proposes a new backpressure algorithm that has a vanishing utility optimality gap, so utility converges to exact optimality as the algorithm keeps running, while queue lengths are bounded throughout by a finite constant. The technique uses backpressure and drift concepts with a new method for convex programming.
1
0
1
0
0
0
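The classic drift-plus-penalty trade-off that the paper above improves on can be seen in a one-queue toy simulation: each slot the admitted rate maximizes $V \cdot \mathrm{utility}(r) - Q r$, so utility approaches optimality as $V$ grows while the queue grows roughly like $O(V)$. The sketch below is that baseline, not the paper's new algorithm; the grid search and log utility are illustrative choices.

```python
import numpy as np

V, T, mu = 20.0, 20000, 0.5          # penalty weight, horizon, link service rate

def utility(r):
    return np.log(1e-3 + r)          # concave utility of the admitted rate

grid = np.linspace(0.0, 1.0, 101)
Q, rates = 0.0, []
for _ in range(T):
    r = grid[np.argmax(V * utility(grid) - Q * grid)]   # drift-plus-penalty choice
    rates.append(r)
    Q = max(Q + r - mu, 0.0)                            # queue update, service mu
print(np.mean(rates), Q)             # admitted rate approaches mu; queue is O(V)
```

Raising `V` in this toy tightens the utility gap while the steady-state queue grows proportionally, which is exactly the $[O(1/V), O(V)]$ tradeoff the abstract describes.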
16,278
Lifshitz transition from valence fluctuations in YbAl3
In Kondo lattice systems with mixed valence, such as YbAl3, interactions between localized electrons in a partially filled f shell and delocalized conduction electrons can lead to fluctuations between two different valence configurations with changing temperature or pressure. The impact of this change on the momentum-space electronic structure and Fermi surface topology is essential for understanding their emergent properties, but has remained enigmatic due to a lack of appropriate experimental probes. Here by employing a combination of molecular beam epitaxy (MBE) and in situ angle-resolved photoemission spectroscopy (ARPES) we show that valence fluctuations can lead to dramatic changes in the Fermi surface topology, even resulting in a Lifshitz transition. As the temperature is lowered, a small electron pocket in YbAl3 becomes completely unoccupied while the low-energy ytterbium (Yb) 4f states become increasingly itinerant, acquiring additional spectral weight, longer lifetimes, and well-defined dispersions. Our work presents the first unified picture of how local valence fluctuations connect to momentum space concepts including band filling and Fermi surface topology in the longstanding problem of mixed-valence systems.
0
1
0
0
0
0
16,279
Diversity from the Topology of Citation Networks
We study transitivity in directed acyclic graphs and its usefulness in capturing nodes that act as bridges between more densely interconnected parts in such type of network. In transitively reduced citation networks degree centrality could be used as a measure of interdisciplinarity or diversity. We study the measure's ability to capture "diverse" nodes in random directed acyclic graphs and citation networks. We show that transitively reduced degree centrality is capable of capturing "diverse" nodes, thus this measure could be a timely alternative to text analysis techniques for retrieving papers, influential in a variety of research fields.
1
0
0
0
0
0
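The measure in the abstract above is straightforward to compute with standard tooling. A small sketch, assuming NetworkX's DAG utilities, builds a toy citation DAG, takes its transitive reduction, and reads off degree in the reduced graph as the diversity score; the example graph is illustrative.

```python
import networkx as nx

# Toy citation DAG: edges point from a citing paper to a cited one.
G = nx.DiGraph([("E", "C"), ("E", "D"), ("C", "A"), ("D", "B"),
                ("C", "B"), ("E", "A"), ("E", "B")])

R = nx.transitive_reduction(G)   # drop edges implied by longer paths

# Degree in the reduced DAG as a simple diversity score: nodes bridging
# otherwise unrelated parts keep many incident edges after the reduction.
print(sorted(R.edges()))
print({n: R.in_degree(n) + R.out_degree(n) for n in R})
```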
16,280
Superluminal transmission of phase modulation information by a long macroscopic pulse propagating through interstellar space
A method of transmitting information in interstellar space at superluminal, or $> c$, speeds is proposed. The information is encoded as phase modulation of an electromagnetic wave of constant intensity, i.e. fluctuations in the rate of energy transport play no role in the communication, and no energy is transported at speed $> c$. Of course, such a constant wave can ultimately last only the duration of its enveloping wave packet. However, as a unique feature of this paper, we assume the source is sufficiently steady to be capable of emitting wave packets, or pulses, of size much larger than the separation between sender and receiver. Therefore, if a pre-existing and enduring wave envelope already connects the two sides, the subluminal nature of the envelope's group velocity will no longer slow down the communication, which is now limited by the speed at which information encoded as phase modulation propagates through the plasma, i.e. the phase velocity $v_p > c$. The method involves no sharp structure in either time or frequency. As a working example, we considered two spaceships separated by 1 lt-s in the local hot bubble. Provided the bandwidth of the extra Fourier modes generated by the phase modulation is much smaller than the carrier wave frequency, the radio communication of a message, encoded as a specific alignment between the carrier wave phase and the anomalous (modulated) phase, can take place at a speed in excess of light by a few parts in 10$^{11}$ at $\nu\approx 1$~GHz, and higher at smaller $\nu$.
0
1
0
0
0
0
16,281
Causal Regularization
In application domains such as healthcare, we want accurate predictive models that are also causally interpretable. In pursuit of such models, we propose a causal regularizer to steer predictive models towards causally-interpretable solutions and theoretically study its properties. In a large-scale analysis of Electronic Health Records (EHR), our causally-regularized model outperforms its L1-regularized counterpart in causal accuracy and is competitive in predictive performance. We perform non-linear causality analysis by causally regularizing a special neural network architecture. We also show that the proposed causal regularizer can be used together with neural representation learning algorithms to yield up to 20% improvement over multilayer perceptron in detecting multivariate causation, a situation common in healthcare, where many causal factors should occur simultaneously to have an effect on the target variable.
1
0
0
1
0
0
16,282
Voice Disorder Detection Using Long Short Term Memory (LSTM) Model
Automated detection of voice disorders with computational methods is a recent research area in the medical domain, since accurate diagnosis otherwise requires rigorous endoscopy. Efficient screening methods are needed for diagnosing voice disorders so as to provide timely medical care with minimal resources. Detecting voice disorders with computational methods is a challenging problem, since audio data are continuous, which makes extracting relevant features and applying machine learning hard and unreliable. This paper proposes a long short-term memory (LSTM) model to detect pathological voice disorders and evaluates its performance on 400 real test samples without labels. Different feature extraction methods are used to provide the best set of features before applying the LSTM model for classification. The paper describes the approach and experiments, which show promising results with 22% sensitivity, 97% specificity and 56% unweighted average recall.
1
0
0
0
0
0
16,283
Topology of two-dimensional turbulent flows of dust and gas
We perform direct numerical simulations (DNS) of passive heavy inertial particles (dust) in homogeneous and isotropic two-dimensional turbulent flows (gas) for a range of Stokes number, ${\rm St} < 1$, using both Lagrangian and Eulerian approaches (with a shock-capturing scheme). We find that: The dust-density field in our Eulerian simulations has the same correlation dimension $d_2$ as obtained from the clustering of particles in the Lagrangian simulations for ${\rm St} < 1$; The cumulative probability distribution function of the dust-density coarse-grained over a scale $r$ in the inertial range has a left-tail with a power-law fall-off indicating the presence of voids; The energy spectrum of the dust-velocity has a power-law range with an exponent that is the same as the gas-velocity spectrum except at very high Fourier modes; The compressibility of the dust-velocity field is proportional to ${\rm St}^2$. We quantify the topological properties of the dust-velocity and the gas-velocity through their gradient matrices, called $\mathcal{A}$ and $\mathcal{B}$, respectively. The topological properties of $\mathcal{B}$ are the same in Eulerian and Lagrangian frames only if the Eulerian data are weighed by the dust-density -- a correspondence that we use to study Lagrangian properties of $\mathcal{A}$. In the Lagrangian frame, the mean value of the trace of $\mathcal{A} \sim - \exp(-C/{\rm St})$, with a constant $C\approx 0.1$. The topology of the dust-velocity fields shows that as ${\rm St}$ increases the contribution to negative divergence comes mostly from saddles and the contribution to positive divergence comes from both vortices and saddles. Compared to the Eulerian case, the density-weighed Eulerian case has less inward spirals and more converging saddles. Outward spirals are the least probable topological structures in both cases.
0
1
0
0
0
0
16,284
Trimming the Independent Fat: Sufficient Statistics, Mutual Information, and Predictability from Effective Channel States
One of the most fundamental questions one can ask about a pair of random variables X and Y is the value of their mutual information. Unfortunately, this task is often stymied by the extremely large dimension of the variables. We might hope to replace each variable by a lower-dimensional representation that preserves the relationship with the other variable. The theoretically ideal implementation is the use of minimal sufficient statistics, where it is well-known that either X or Y can be replaced by their minimal sufficient statistic about the other while preserving the mutual information. While intuitively reasonable, it is not obvious or straightforward that both variables can be replaced simultaneously. We demonstrate that this is in fact possible: the information X's minimal sufficient statistic preserves about Y is exactly the information that Y's minimal sufficient statistic preserves about X. As an important corollary, we consider the case where one variable is a stochastic process' past and the other its future and the present is viewed as a memoryful channel. In this case, the mutual information is the channel transmission rate between the channel's effective states. That is, the past-future mutual information (the excess entropy) is the amount of information about the future that can be predicted using the past. Translating our result about minimal sufficient statistics, this is equivalent to the mutual information between the forward- and reverse-time causal states of computational mechanics. We close by discussing multivariate extensions to this use of minimal sufficient statistics.
1
1
0
1
0
0
16,285
Statistical Physics of the Symmetric Group
Ordered chains (such as chains of amino acids) are ubiquitous in biological cells, and these chains perform specific functions contingent on the sequence of their components. Using the existence and general properties of such sequences as a theoretical motivation, we study the statistical physics of systems whose state space is defined by the possible permutations of an ordered list, i.e., the symmetric group, and whose energy is a function of how certain permutations deviate from some chosen correct ordering. Such a non-factorizable state space is quite different from the state spaces typically considered in statistical physics systems and consequently has novel behavior in systems with interacting and even non-interacting Hamiltonians. Various parameter choices of a mean-field model reveal the system to contain five different physical regimes defined by two transition temperatures, a triple point, and a quadruple point. Finally, we conclude by discussing how the general analysis can be extended to state spaces with more complex combinatorial properties and to other standard questions of statistical mechanics models.
0
1
0
0
0
0
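Sampling the kind of model described above is easy to sketch with Metropolis dynamics on permutations: propose a transposition and accept with the usual Boltzmann ratio. The displacement energy below is an illustrative choice standing in for the paper's Hamiltonians, and the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, steps = 20, 1.0, 100000

def energy(perm):
    """Energy = total displacement from the 'correct' ordering 0, 1, ..., n-1."""
    return np.abs(perm - np.arange(len(perm))).sum()

perm = rng.permutation(n)
E = energy(perm)
Es = []
for _ in range(steps):
    i, j = rng.integers(n, size=2)             # propose a transposition
    perm[i], perm[j] = perm[j], perm[i]
    E_new = energy(perm)
    if rng.random() < np.exp(-beta * (E_new - E)):
        E = E_new                               # accept the move
    else:
        perm[i], perm[j] = perm[j], perm[i]     # reject: swap back
    Es.append(E)

print(np.mean(Es[steps // 2:]))                 # mean energy after burn-in
```

Sweeping `beta` in such a simulation is one way to probe numerically the kind of temperature-driven regime changes the mean-field analysis identifies.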
16,286
A New Phosphorus Allotrope with Direct Band Gap and High Mobility
Based on ab initio evolutionary crystal structure search computation, we report a new phase of phosphorus called green phosphorus ({\lambda}-P), which exhibits the direct band gaps ranging from 0.7 to 2.4 eV and the strong anisotropy in optical and transport properties. Free energy calculations show that a single-layer form, termed green phosphorene, is energetically more stable than blue phosphorene and a phase transition from black to green phosphorene can occur at temperatures above 87 K. Due to its buckled structure, green phosphorene can be synthesized on corrugated metal surfaces rather than clean surfaces.
0
1
0
0
0
0
16,287
An open shop approach in approximating optimal data transmission duration in WDM networks
In the past decade Optical WDM Networks (Wavelength Division Multiplexing) are being used quite often and especially as far as broadband applications are concerned. Message packets transmitted through such networks can be interrupted using time slots in order to maximize network usage and minimize the time required for all messages to reach their destination. However, preempting a packet will result in time cost. The problem of scheduling message packets through such a network is referred to as PBS and is known to be NP-Hard. In this paper we have reduced PBS to Open Shop Scheduling and designed variations of polynomially solvable instances of Open Shop to approximate PBS. We have combined these variations and called the induced algorithm HSA (Hybridic Scheduling Algorithm). We ran experiments to establish the efficiency of HSA and found that in all datasets used it produces schedules very close to the optimal. To further establish HSAs efficiency we ran tests to compare it to SGA, another algorithm which when tested in the past has yielded excellent results.
1
0
0
0
0
0
16,288
Optimal prediction in the linearly transformed spiked model
We consider the linearly transformed spiked model, where observations $Y_i$ are noisy linear transforms of unobserved signals of interest $X_i$: \begin{align*} Y_i = A_i X_i + \varepsilon_i, \end{align*} for $i=1,\ldots,n$. The transform matrices $A_i$ are also observed. We model $X_i$ as random vectors lying on an unknown low-dimensional space. How should we predict the unobserved signals (regression coefficients) $X_i$? The naive approach of performing regression for each observation separately is inaccurate due to the large noise. Instead, we develop optimal linear empirical Bayes methods for predicting $X_i$ by "borrowing strength" across the different samples. Our methods are applicable to large datasets and rely on weak moment assumptions. The analysis is based on random matrix theory. We discuss applications to signal processing, deconvolution, cryo-electron microscopy, and missing data in the high-noise regime. For missing data, we show in simulations that our methods are faster, more robust to noise and to unequal sampling than well-known matrix completion methods.
0
0
1
1
0
0
16,289
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore, cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs is the "scaled exponential linear unit" (SELU), which induces self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows one to (1) train deep networks with many layers, (2) employ strong regularization, and (3) make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
1
0
0
1
0
0
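The SELU activation itself is a two-constant formula, with $\lambda \approx 1.0507$ and $\alpha \approx 1.6733$ fixed by the self-normalizing fixed-point condition. A small sketch (the depth-50 random network below is an illustrative check, not the paper's experiment) shows activations staying near zero mean and unit variance:

```python
import numpy as np

# SELU constants fixed by the self-normalizing fixed point.
alpha = 1.6732632423543772
lam = 1.0507009873554805

def selu(x):
    return lam * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

rng = np.random.default_rng(0)
n = 1000
x = rng.standard_normal(n)
for _ in range(50):                      # 50 random layers, variance-1/n init
    W = rng.standard_normal((n, n)) / np.sqrt(n)
    x = selu(W @ x)
print(x.mean(), x.var())                 # stays near (0, 1)
```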
16,290
Real-time Fault Localization in Power Grids With Convolutional Neural Networks
Diverse fault types, fast re-closures and complicated transient states after a fault event make real-time fault location in power grids challenging. Existing localization techniques in this area rely on simplistic assumptions, such as static loads, or require much higher sampling rates or total measurement availability. This paper proposes a data-driven localization method based on a Convolutional Neural Network (CNN) classifier using bus voltages. Unlike prior data-driven methods, the proposed classifier is based on features with physical interpretations that are described in detail. The accuracy of our CNN based localization tool is demonstrably superior to other machine learning classifiers in the literature. To further improve the location performance, a novel phasor measurement units (PMU) placement strategy is proposed and validated against other methods. A significant aspect of our methodology is that under very low observability (7% of buses), the algorithm is still able to localize the faulted line to a small neighborhood with high probability. The performance of our scheme is validated through simulations of faults of various types in the IEEE 68-bus power system under varying load conditions, system observability and measurement quality.
1
0
0
0
0
0
16,291
Effective identifiability criteria for tensors and polynomials
A tensor $T$, in a given tensor space, is said to be $h$-identifiable if it admits a unique decomposition as a sum of $h$ rank one tensors. A criterion for $h$-identifiability is called effective if it is satisfied in a dense, open subset of the set of rank $h$ tensors. In this paper we give effective $h$-identifiability criteria for a large class of tensors. We then improve these criteria for some symmetric tensors. For instance, this allows us to give a complete set of effective identifiability criteria for ternary quintic polynomials. Finally, we implement our identifiability algorithms in Macaulay2.
1
0
1
0
0
0
16,292
Functoriality properties of the dual group
Let $G$ be a connected reductive group. In a previous paper, arXiv:1702.08264, it was shown that the dual group $G^\vee_X$ attached to a $G$-variety $X$ admits a natural homomorphism with finite kernel to the Langlands dual group $G^\vee$ of $G$. Here, we prove that the dual group is functorial in the following sense: if there is a dominant $G$-morphism $X\to Y$ or an injective $G$-morphism $Y\to X$, then there is a canonical homomorphism $G^\vee_Y\to G^\vee_X$ which is compatible with the homomorphisms to $G^\vee$.
0
0
1
0
0
0
16,293
Chaotic dynamics around cometary nuclei
We apply a generalized Kepler map theory to describe the qualitative chaotic dynamics around cometary nuclei, based on accessible observational data for five comets whose nuclei are well-documented to resemble dumb-bells. The sizes of chaotic zones around the nuclei and the Lyapunov times of the motion inside these zones are estimated. In the case of Comet 1P/Halley, the circumnuclear chaotic zone seems to engulf an essential part of the Hill sphere, at least for orbits of moderate to high eccentricity.
0
1
0
0
0
0
16,294
Riemannian curvature measures
A famous theorem of Weyl states that if $M$ is a compact submanifold of euclidean space, then the volumes of small tubes about $M$ are given by a polynomial in the radius $r$, with coefficients that are expressible as integrals of certain scalar invariants of the curvature tensor of $M$ with respect to the induced metric. It is natural to interpret this phenomenon in terms of curvature measures and smooth valuations, in the sense of Alesker, canonically associated to the Riemannian structure of $M$. This perspective yields a fundamental new structure in Riemannian geometry, in the form of a certain abstract module over the polynomial algebra $\mathbb R[t]$ that reflects the behavior of Alesker multiplication. This module encodes a key piece of the array of kinematic formulas of any Riemannian manifold on which a group of isometries acts transitively on the sphere bundle. We illustrate this principle in precise terms in the case where $M$ is a complex space form.
0
0
1
0
0
0
16,295
A new lower bound for the on-line coloring of intervals with bandwidth
The on-line interval coloring and its variants are important combinatorial problems with many applications in network multiplexing, resource allocation and job scheduling. In this paper we present a new lower bound of $4.1626$ for the competitive ratio for the on-line coloring of intervals with bandwidth which improves the best known lower bound of $\frac{24}{7}$. For the on-line coloring of unit intervals with bandwidth we improve the lower bound of $1.831$ to $2$.
1
0
1
0
0
0
16,296
Orbits of monomials and factorization into products of linear forms
This paper is devoted to the factorization of multivariate polynomials into products of linear forms, a problem which has applications to differential algebra, to the resolution of systems of polynomial equations and to Waring decomposition (i.e., decomposition in sums of d-th powers of linear forms; this problem is also known as symmetric tensor decomposition). We provide three black box algorithms for this problem. Our main contribution is an algorithm motivated by the application to Waring decomposition. This algorithm reduces the corresponding factorization problem to simultaneous matrix diagonalization, a standard task in linear algebra. The algorithm relies on ideas from invariant theory, and more specifically on Lie algebras. Our second algorithm reconstructs a factorization from several bi-variate projections. Our third algorithm reconstructs it from the determination of the zero set of the input polynomial, which is a union of hyperplanes.
1
0
0
0
0
0
16,297
Monodromy and Vinberg fusion for the principal degeneration of the space of G-bundles
We study the geometry and the singularities of the principal direction of the Drinfeld-Lafforgue-Vinberg degeneration of the moduli space of G-bundles Bun_G for an arbitrary reductive group G, and their relationship to the Langlands dual group of G. In the first part of the article we study the monodromy action on the nearby cycles sheaf along the principal degeneration of Bun_G. We describe the weight-monodromy filtration in terms of the combinatorics of the Langlands dual group of G and generalizations of the Picard-Lefschetz oscillators found in [Sch1]. Our proofs use certain local models for the principal degeneration whose geometry is studied in the second part. Our local models simultaneously provide two types of degenerations of the Zastava spaces, which together equip the Zastava spaces with the geometric analog of a Hopf algebra structure. The first degeneration corresponds to the usual Beilinson-Drinfeld fusion of divisors on the curve. The second degeneration is new and corresponds to what we call Vinberg fusion: It is obtained not by degenerating divisors on the curve, but by degenerating the group G via the Vinberg semigroup. On the level of cohomology the Vinberg fusion gives rise to an algebra structure, while the Beilinson-Drinfeld fusion gives rise to a coalgebra structure; the Hopf algebra axiom is a consequence of the underlying geometry. It is natural to conjecture that this Hopf algebra agrees with the universal enveloping algebra of the positive part of the Langlands dual Lie algebra. The above procedure would then yield a novel and highly geometric way to pass to the Langlands dual side: Elements of the Langlands dual Lie algebra are represented as cycles on the above moduli spaces, and the Lie bracket of two elements is obtained by deforming the cartesian product cycle along the Vinberg degeneration.
0
0
1
0
0
0
16,298
Sketched Subspace Clustering
The immense amount of daily generated and communicated data presents unique challenges in their processing. Clustering, the grouping of data without the presence of ground-truth labels, is an important tool for drawing inferences from data. Subspace clustering (SC) is a relatively recent method that is able to successfully classify nonlinearly separable data in a multitude of settings. In spite of their high clustering accuracy, SC methods incur prohibitively high computational complexity when processing large volumes of high-dimensional data. Inspired by random sketching approaches for dimensionality reduction, the present paper introduces a randomized scheme for SC, termed Sketch-SC, tailored for large volumes of high-dimensional data. Sketch-SC accelerates the computationally heavy parts of state-of-the-art SC approaches by compressing the data matrix across both dimensions using random projections, thus enabling fast and accurate large-scale SC. Performance analysis as well as extensive numerical tests on real data corroborate the potential of Sketch-SC and its competitive performance relative to state-of-the-art scalable SC approaches.
1
0
0
1
0
0
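The sketching idea above is easy to illustrate: compress the data with a random projection before clustering. The snippet below shows only the compression step followed by k-means as a stand-in clusterer; Sketch-SC itself compresses the data matrix across both dimensions and solves a self-expressive subspace-clustering problem, which is omitted here, and the toy data are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(0)

# Toy data: 300 points drawn from three random 2-D subspaces of R^200.
X = np.vstack([rng.standard_normal((100, 2)) @ rng.standard_normal((2, 200))
               for _ in range(3)])

# Sketch: randomly project the ambient dimension 200 down to 20, then cluster.
Xs = GaussianRandomProjection(n_components=20, random_state=0).fit_transform(X)
pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xs)

print(X.shape, "->", Xs.shape)
print(np.bincount(pred))                # cluster sizes on the sketched data
```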
16,299
An infinite class of unsaturated rooted trees corresponding to designable RNA secondary structures
An RNA secondary structure is designable if there is an RNA sequence which can attain its maximum number of base pairs only by adopting that structure. The combinatorial RNA design problem, introduced by Haleš et al. in 2016, is to determine whether or not a given RNA secondary structure is designable. Haleš et al. identified certain classes of designable and non-designable secondary structures by reference to their corresponding rooted trees. We introduce an infinite class of rooted trees containing unpaired nucleotides at the greatest height, and prove constructively that their corresponding secondary structures are designable. This complements previous results for the combinatorial RNA design problem.
1
0
0
0
0
0
16,300
Any Baumslag-Solitar action on surfaces with a pseudo-Anosov element has a finite orbit
We consider $f, h$ homeomorphisms generating a faithful $BS(1,n)$-action on a closed surface $S$, that is, $h f h^{-1} = f^n$, for some $n\geq 2$. According to \cite{GL}, after replacing $f$ by a suitable iterate if necessary, we can assume that there exists a minimal set $\Lambda$ of the action, included in $Fix(f)$. Here, we suppose that $f$ and $h$ are $C^1$ in a neighbourhood of $\Lambda$ and that any point $x\in\Lambda$ admits an $h$-unstable manifold $W^u(x)$. Using Bonatti's techniques, we prove that either there exists an integer $N$ such that $W^u(x)$ is included in $Fix(f^N)$, or there is a lower bound for the norm of the differential of $h$ depending only on $n$ and the Riemannian metric on $S$. Combining the last statement with a result of \cite{AGX}, we show that any faithful action of $BS(1, n)$ on $S$ with $h$ a pseudo-Anosov homeomorphism has a finite orbit. As a consequence, there is no faithful $C^1$-action of $BS(1, n)$ on the torus with $h$ an Anosov homeomorphism.
0
0
1
0
0
0