ID (int64, 1-21k) | TITLE (string, 7-239 chars) | ABSTRACT (string, 7-2.76k chars) | Computer Science (int64, 0/1) | Physics (int64, 0/1) | Mathematics (int64, 0/1) | Statistics (int64, 0/1) | Quantitative Biology (int64, 0/1) | Quantitative Finance (int64, 0/1) |
---|---|---|---|---|---|---|---|---|
18,901 | Fermi bubbles: high latitude X-ray supersonic shell | The nature of the bipolar, $\gamma$-ray Fermi bubbles (FB) is still unclear,
in part because their faint, high-latitude X-ray counterpart has until now
eluded a clear detection. We stack ROSAT data at varying distances from the FB
edges, thus boosting the signal and identifying an expanding shell behind the
southwest, southeast, and northwest edges, albeit not in the dusty northeast
sector near Loop I. A Primakoff-like model for the underlying flow is invoked
to show that the signals are consistent with halo gas heated by a strong,
forward shock to $\sim$keV temperatures. Assuming ion--electron thermal
equilibrium then implies a $\sim10^{56}$ erg event near the Galactic centre
$\sim7$ Myr ago. However, the reported high absorption-line velocities suggest
a preferential shock-heating of ions, and thus more energetic ($\sim 10^{57}$
erg), younger ($\lesssim 3$ Myr) FBs.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,902 | Passivity-Based Generalization of Primal-Dual Dynamics for Non-Strictly Convex Cost Functions | In this paper, we revisit primal-dual dynamics for convex optimization and
present a generalization of the dynamics based on the concept of passivity. It
is then proved that supplying a stable zero to one of the integrators in the
dynamics allows one to eliminate the assumption of strict convexity on the cost
function based on the passivity paradigm together with the invariance principle
for Caratheodory systems. We then show that the present algorithm is also a
generalization of existing augmented Lagrangian-based primal-dual dynamics, and
discuss the benefit of the present generalization in terms of noise reduction
and convergence speed.
| 1 | 0 | 0 | 0 | 0 | 0 |
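The primal-dual dynamics in this abstract can be illustrated on a toy equality-constrained problem. The sketch below is a plain Euler discretization of the classical saddle-point flow with a strictly convex cost (so no passivity-based generalization is needed); the problem data are illustrative and not from the paper:

```python
# Euler discretization of primal-dual (saddle-point) dynamics for
#   minimize (1/2)||x||^2  subject to  x1 + x2 = 1,
# whose optimum is x* = (0.5, 0.5) with multiplier lambda* = -0.5.

def primal_dual(steps=5000, h=0.02):
    x1, x2, lam = 0.0, 0.0, 0.0          # primal and dual variables
    for _ in range(steps):
        # xdot = -grad_x L(x, lam) = -(x + A^T lam), with A = [1, 1]
        dx1 = -(x1 + lam)
        dx2 = -(x2 + lam)
        # lamdot = +grad_lam L(x, lam) = A x - b, with b = 1
        dlam = x1 + x2 - 1.0
        x1, x2, lam = x1 + h * dx1, x2 + h * dx2, lam + h * dlam
    return x1, x2, lam

x1, x2, lam = primal_dual()
```

With a strictly convex cost the flow is asymptotically stable, so the iterates settle at the constrained optimum; removing strict convexity is exactly where the paper's passivity argument comes in.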
18,903 | An integral formula for the powered sum of the independent, identically and normally distributed random variables | The distribution of the sum of r-th power of standard normal random variables
is a generalization of the chi-squared distribution. In this paper, we
represent the probability density function of the random variable by a
one-dimensional, absolutely convergent integral involving the characteristic
function. Our integral formula is expected to be useful for evaluating the
density function. It is based on the inversion formula, and we utilize a
summation method. We also discuss our formula from the viewpoint of
hyperfunctions.
| 0 | 0 | 1 | 0 | 0 | 0 |
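For the classical special case r = 2, the sum of k squared standard normals is chi-squared with characteristic function (1 - 2it)^(-k/2). The following is a plain truncated trapezoidal inversion for even k, purely for orientation; it is textbook inversion, not the paper's summation method:

```python
import cmath
import math

def chi2_density_via_inversion(x, k=4, T=200.0, n=80_000):
    """Approximate the chi-squared(k) density at x (even k only here) by
    truncating the inversion integral f(x) = (1/2pi) int e^{-itx} phi(t) dt
    with phi(t) = (1 - 2it)^(-k/2), using the trapezoidal rule on [-T, T]."""
    h = 2.0 * T / n
    acc = 0.0
    for j in range(n + 1):
        t = -T + j * h
        w = 0.5 if j in (0, n) else 1.0       # trapezoid endpoint weights
        phi = (1.0 - 2.0j * t) ** (-(k // 2))  # integer exponent for even k
        acc += w * (cmath.exp(-1j * t * x) * phi).real
    return acc * h / (2.0 * math.pi)

# chi^2(4) has pdf x * exp(-x/2) / 4; at x = 2 that is exp(-1)/2.
approx = chi2_density_via_inversion(2.0, k=4)
```

The integrand decays like 1/t^2 for k = 4, so the truncation error at T = 200 is already below the asserted tolerance.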
18,904 | A Dynamic Boosted Ensemble Learning Method Based on Random Forest | We propose a dynamic boosted ensemble learning method based on random forest
(DBRF), a novel ensemble algorithm that incorporates the notion of hard example
mining into Random Forest (RF) and thus combines the high accuracy of Boosting
algorithm with the strong generalization of Bagging algorithm. Specifically, we
propose to measure the quality of each leaf node of every decision tree in the
random forest to determine hard examples. By iteratively training and then
removing easy examples from training data, we evolve the random forest to focus
on hard examples dynamically so as to learn decision boundaries better. Data
can be cascaded through these random forests learned in each iteration in
sequence to generate predictions, thus making RF deep. We also propose to use
evolution mechanism and smart iteration mechanism to improve the performance of
the model. DBRF outperforms RF on three UCI datasets and achieves
state-of-the-art results compared to other deep models. Moreover, we show that
DBRF is also a new way of sampling and can be very useful when learning from
imbalanced data.
| 0 | 0 | 0 | 1 | 0 | 0 |
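The iterate-and-filter loop described above (train, drop easy examples, retrain on what remains) can be sketched independently of random forests. Here a deliberately trivial 1-D threshold "learner" stands in for the forest, so the point is the cascading loop, not the model; the data and stand-in model are illustrative only:

```python
import statistics

def train_stump(data):
    """Toy stand-in for a random forest: threshold 1-D points at the
    midpoint of the two class means.  Returns a predict function."""
    m0 = statistics.mean(x for x, y in data if y == 0)
    m1 = statistics.mean(x for x, y in data if y == 1)
    thr = (m0 + m1) / 2.0
    return lambda x: int(x > thr)

def cascade(data, rounds=3):
    """DBRF-style loop (simplified): train a model, remove the examples
    it already classifies correctly ('easy' ones), retrain on the rest."""
    models = []
    for _ in range(rounds):
        if not data or len({y for _, y in data}) < 2:
            break                          # nothing hard left to mine
        model = train_stump(data)
        models.append(model)
        data = [(x, y) for x, y in data if model(x) != y]  # keep hard ones
    return models

# Linearly separable toy data: one round classifies everything correctly.
toy = [(0.0, 0), (0.2, 0), (0.8, 1), (1.0, 1)]
models = cascade(toy)
```

On separable toy data the cascade stops after one model; on harder data, later models specialize on the leftovers, which is the boosting-flavored behavior the abstract describes.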
18,905 | Inter-Operator Resource Management for Millimeter Wave, Multi-Hop Backhaul Networks | In this paper, a novel framework is proposed for optimizing the operation and
performance of a large-scale, multi-hop millimeter wave (mmW) backhaul within a
wireless small cell network (SCN) that encompasses multiple mobile network
operators (MNOs). The proposed framework enables the small base stations (SBSs)
to jointly decide on forming the multi-hop, mmW links over backhaul
infrastructure that belongs to multiple, independent MNOs, while properly
allocating resources across those links. In this regard, the problem is
addressed using a novel framework based on matching theory that is composed
of two highly inter-related stages: a multi-hop network formation stage and a
resource management stage. One unique feature of this framework is that it
jointly accounts for both wireless channel characteristics and economic factors
during both network formation and resource management. The multi-hop network
formation stage is formulated as a one-to-many matching game which is solved
using a novel algorithm that builds on the so-called deferred acceptance
algorithm and is shown to yield a stable and Pareto optimal multi-hop mmW
backhaul network. Then, a one-to-many matching game is formulated to enable
proper resource allocation across the formed multi-hop network. This game is
then shown to exhibit peer effects and, as such, a novel algorithm is developed
to find a stable and optimal resource management solution that can properly
cope with these peer effects. Simulation results show that the proposed
framework yields substantial gains, in terms of the average sum rate, reaching
up to 27% and 54%, respectively, compared to a non-cooperative scheme in which
inter-operator sharing is not allowed and a random allocation approach. The
results also show that our framework provides insights on how to manage pricing
and the cost of the cooperative mmW backhaul network for the MNOs.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,906 | An Online Learning Approach to Generative Adversarial Networks | We consider the problem of training generative models with a Generative
Adversarial Network (GAN). Although GANs can accurately model complex
distributions, they are known to be difficult to train due to instabilities
caused by a difficult minimax optimization problem. In this paper, we view the
problem of training GANs as finding a mixed strategy in a zero-sum game.
Building on ideas from online learning, we propose a novel training method named
Chekhov GAN. On the theory side, we show that our method provably converges
to an equilibrium for semi-shallow GAN architectures, i.e. architectures where
the discriminator is a one layer network and the generator is arbitrary. On the
practical side, we develop an efficient heuristic guided by our theoretical
results, which we apply to commonly used deep GAN architectures. On several
real world tasks our approach exhibits improved stability and performance
compared to standard GAN training.
| 1 | 0 | 0 | 1 | 0 | 0 |
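The mixed-strategy view can be made concrete on a tiny zero-sum game: in self-play with a no-regret learner such as multiplicative weights, the time-averaged strategies approach an equilibrium. This sketch uses matching pennies (unique equilibrium: each action with probability 1/2); it illustrates the online-learning idea only, not the Chekhov GAN procedure itself:

```python
import math

# Row player's payoff in matching pennies; the column player gets -A.
A = [[1.0, -1.0],
     [-1.0, 1.0]]

def softmax(logw):
    m = max(logw)                      # subtract max for numerical safety
    e = [math.exp(v - m) for v in logw]
    s = sum(e)
    return [v / s for v in e]

def mw_selfplay(rounds=20_000, eta=0.05):
    """Multiplicative-weights self-play; returns the row player's
    time-averaged probability of playing action 0."""
    lr, lc = [0.0, 0.3], [0.2, 0.0]    # asymmetric start, breaks symmetry
    avg = 0.0
    for _ in range(rounds):
        pr, pc = softmax(lr), softmax(lc)
        avg += pr[0]
        # Expected payoff of each pure action vs the opponent's mixture.
        gr = [A[i][0] * pc[0] + A[i][1] * pc[1] for i in range(2)]
        gc = [-(A[0][j] * pr[0] + A[1][j] * pr[1]) for j in range(2)]
        lr = [lr[i] + eta * gr[i] for i in range(2)]
        lc = [lc[j] + eta * gc[j] for j in range(2)]
    return avg / rounds

avg0 = mw_selfplay()
```

The last iterates cycle around the equilibrium, but the time average concentrates near 1/2; the paper's theoretical guarantees for semi-shallow GANs rest on the same averaging phenomenon.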
18,907 | Weakly supervised CRNN system for sound event detection with large-scale unlabeled in-domain data | Sound event detection (SED) is typically posed as a supervised learning
problem requiring training data with strong temporal labels of sound events.
However, producing datasets with strong labels normally requires
unaffordable labor costs, which limits the practical application of supervised
SED methods. Recent advances in SED focus on detecting sound
events by taking advantage of weakly labeled or unlabeled training data. In
this paper, we propose a joint framework to solve the SED task using
large-scale unlabeled in-domain data. In particular, a state-of-the-art general
audio tagging model is first employed to predict weak labels for unlabeled
data. On the other hand, a weakly supervised architecture based on the
convolutional recurrent neural network (CRNN) is developed to produce the strong
annotations of sound events with the aid of the unlabeled data with predicted
labels. It is found that the SED performance generally increases as more
unlabeled data is added into the training. To address the noisy label problem
of unlabeled data, an ensemble strategy is applied to increase the system
robustness. The proposed system is evaluated on the SED dataset of DCASE 2018
challenge. It reaches an F1-score of 21.0%, resulting in an improvement of 10%
over the baseline system.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,908 | Revealing the Coulomb interaction strength in a cuprate superconductor | We study optimally doped
Bi$_{2}$Sr$_{2}$Ca$_{0.92}$Y$_{0.08}$Cu$_{2}$O$_{8+\delta}$ (Bi2212) using
angle-resolved two-photon photoemission spectroscopy. Three spectral features
are resolved near 1.5, 2.7, and 3.6 eV above the Fermi level. By tuning the
photon energy, we determine that the 2.7-eV feature arises predominantly from
unoccupied states. The 1.5- and 3.6-eV features reflect unoccupied states whose
spectral intensities are strongly modulated by the corresponding occupied
states. These unoccupied states are consistent with the prediction from a
cluster perturbation theory based on the single-band Hubbard model. Through
this comparison, a Coulomb interaction strength U of 2.7 eV is extracted. Our
study complements equilibrium photoemission spectroscopy and provides a direct
spectroscopic measurement of the unoccupied states in cuprates. The determined
Coulomb U indicates that the charge-transfer gap of optimally doped Bi2212 is
1.1 eV.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,909 | Ordering dynamics of self-propelled particles in an inhomogeneous medium | Ordering dynamics of self-propelled particles in an inhomogeneous medium in
two-dimensions is studied. We write coarse-grained hydrodynamic equations of
motion for coarse-grained density and velocity fields in the presence of an
external random disorder field, which is quenched in time. The strength of
inhomogeneity is tuned from zero disorder (clean system) to large disorder. In
the clean system, the velocity field grows algebraically as $L_{\rm V} \sim
t^{0.5}$. The density field does not show clean power-law growth; however, it
follows $L_{\rm \rho} \sim t^{0.8}$ approximately. In the inhomogeneous system,
we find disorder-dependent growth. For both the density and the velocity,
growth slows down with increasing strength of disorder. The velocity shows a
disorder dependent power-law growth $L_{\rm V}(t,\Delta) \sim t^{1/\bar z_{\rm
V}(\Delta)}$ for intermediate times. At late times, there is a crossover to
logarithmic growth $L_{\rm V}(t,\Delta) \sim (\ln t)^{1/\varphi}$, where
$\varphi$ is a disorder-independent exponent. Two-point correlation functions
for the velocity show dynamical scaling, but those for the density do not.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,910 | Inflationary preheating dynamics with ultracold atoms | We discuss the amplification of loop corrections in quantum many-body systems
through dynamical instabilities. As an example, we investigate both
analytically and numerically a two-component ultracold atom system in one
spatial dimension. The model features a tachyonic instability, which
incorporates characteristic aspects of the mechanisms for particle production
in early-universe inflaton models. We establish a direct correspondence between
measurable macroscopic growth rates for occupation numbers of the ultracold
Bose gas and the underlying microscopic processes in terms of Feynman loop
diagrams. We analyze several existing ultracold atom setups featuring dynamical
instabilities and propose optimized protocols for their experimental
realization. We demonstrate that relevant dynamical processes can be enhanced
using a seeding procedure for unstable modes and clarify the role of initial
quantum fluctuations and the generation of a non-linear secondary stage for the
amplification of modes.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,911 | Autonomous Extracting a Hierarchical Structure of Tasks in Reinforcement Learning and Multi-task Reinforcement Learning | Reinforcement learning (RL), while often powerful, can suffer from slow
learning speeds, particularly in high dimensional spaces. The autonomous
decomposition of tasks and use of hierarchical methods hold the potential to
significantly speed up learning in such domains. This paper proposes a novel
practical method that can autonomously decompose tasks by leveraging
association rule mining, which discovers hidden relationships among entities in
data mining. We introduce a novel method called ARM-HSTRL (Association Rule
Mining to extract Hierarchical Structure of Tasks in Reinforcement Learning).
It extracts temporal and structural relationships of sub-goals in RL and
multi-task RL; in particular, it finds sub-goals and the relationships among
them. The significant efficiency and performance of the proposed method are
shown in two main topics of RL.
| 1 | 0 | 0 | 0 | 0 | 0 |
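Association rule mining itself reduces to counting: itemsets above a support threshold yield rules scored by confidence. A minimal single-item-rule sketch over hypothetical sub-goal "transactions" (the data, thresholds, and rule form are illustrative, not ARM-HSTRL's actual pipeline):

```python
from collections import Counter
from itertools import combinations

def mine_rules(transactions, min_support=0.5, min_conf=0.8):
    """Return rules (a -> b, confidence) among single items where the
    pair support is >= min_support and confidence(a -> b) >= min_conf."""
    n = len(transactions)
    single = Counter(x for t in transactions for x in set(t))
    pair = Counter()
    for t in transactions:
        for a, b in combinations(sorted(set(t)), 2):
            pair[(a, b)] += 1
    rules = []
    for (a, b), c in pair.items():
        if c / n >= min_support:
            if c / single[a] >= min_conf:
                rules.append((a, b, c / single[a]))
            if c / single[b] >= min_conf:
                rules.append((b, a, c / single[b]))
    return rules

# Hypothetical sub-goal traces: reaching "door" always implies "key"
# was also reached, so "door -> key" passes; "key -> door" does not.
traces = [["key", "door"], ["key", "door"], ["key", "door"], ["key"]]
rules = mine_rules(traces)
```

In the RL setting of the abstract, such high-confidence rules between sub-goals are what gets assembled into a task hierarchy.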
18,912 | Quantification of tumour evolution and heterogeneity via Bayesian epiallele detection | Motivation: Epigenetic heterogeneity within a tumour can play an important
role in tumour evolution and the emergence of resistance to treatment. It is
increasingly recognised that the study of DNA methylation (DNAm) patterns along
the genome -- so-called `epialleles' -- offers greater insight into epigenetic
dynamics than conventional analyses which examine DNAm marks individually.
Results: We have developed a Bayesian model to infer which epialleles are
present in multiple regions of the same tumour. We apply our method to reduced
representation bisulfite sequencing (RRBS) data from multiple regions of one
lung cancer tumour and a matched normal sample. The model borrows information
from all tumour regions to leverage greater statistical power. The total number
of epialleles, the epiallele DNAm patterns, and a noise hyperparameter are all
automatically inferred from the data. Uncertainty as to which epiallele an
observed sequencing read originated from is explicitly incorporated by
marginalising over the appropriate posterior densities. The degree to which
tumour samples are contaminated with normal tissue can be estimated and
corrected for. By tracing the distribution of epialleles throughout the tumour
we can infer the phylogenetic history of the tumour, identify epialleles that
differ between normal and cancer tissue, and define a measure of global
epigenetic disorder.
| 0 | 0 | 1 | 1 | 0 | 0 |
18,913 | A bootstrap test to detect prominent Granger-causalities across frequencies | Granger-causality in the frequency domain is an emerging tool to analyze the
causal relationship between two time series. We propose a bootstrap test on
unconditional and conditional Granger-causality spectra, as well as on their
difference, to catch particularly prominent causality cycles in relative terms.
In particular, we consider a stochastic process derived by applying the
stationary bootstrap independently to the original series. Our null hypothesis is that
each causality or causality difference is equal to the median across
frequencies computed on that process. In this way, we are able to disambiguate
causalities which depart significantly from the median one obtained ignoring
the causality structure. Our test shows power one as the process tends to
non-stationarity, thus being more conservative than parametric alternatives. As
an example, we investigate the relationship between money stock and GDP in the
Euro Area via our approach, considering inflation, unemployment and interest
rates as conditioning variables. We point out that during the period 1999-2017
the money stock aggregate M1 had a significant impact on economic output at all
frequencies, while the opposite relationship is significant only at high
frequencies.
| 0 | 0 | 0 | 1 | 0 | 1 |
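The stationary bootstrap used above resamples blocks with geometrically distributed lengths, which keeps the resampled series stationary. A minimal sketch of one replicate (the block parameter is arbitrary, and the similarity to the paper ends at the resampling scheme):

```python
import random

def stationary_bootstrap(series, p=0.1, rng=None):
    """One stationary-bootstrap replicate of `series`: blocks start at
    uniformly random indices and end at each step with probability p
    (geometric block lengths, mean 1/p).  Indices wrap around so every
    position is equally likely to be sampled."""
    rng = rng or random.Random(0)
    n = len(series)
    out = []
    i = rng.randrange(n)
    while len(out) < n:
        out.append(series[i])
        if rng.random() < p:
            i = rng.randrange(n)   # start a new block
        else:
            i = (i + 1) % n        # continue the current block
    return out

x = list(range(100))
rep = stationary_bootstrap(x)
```

The test statistic of the paper would then be recomputed on many such replicates to build the null distribution.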
18,914 | Optimal segregation of proteins: phase transitions and symmetry breaking | Asymmetric segregation of key proteins at cell division -- be it a beneficial
or deleterious protein -- is ubiquitous in unicellular organisms and often
considered as an evolved trait to increase fitness in a stressed environment.
Here, we provide a general framework to describe the evolutionary origin of
this asymmetric segregation. We compute the population fitness as a function of
the protein segregation asymmetry $a$, and show that the value of $a$ which
optimizes the population growth manifests a phase transition between symmetric
and asymmetric partitioning phases. Surprisingly, the nature of phase
transition is different for the case of beneficial proteins as opposed to
proteins which decrease the single-cell growth rate. Our study elucidates the
optimization problem faced by evolution in the context of protein segregation,
and motivates further investigation of asymmetric protein segregation in
biological systems.
| 0 | 0 | 0 | 0 | 1 | 0 |
18,915 | Optimal make-take fees for market making regulation | We consider an exchange who wishes to set suitable make-take fees to attract
liquidity on its platform. Using a principal-agent approach, we are able to
describe in quasi-explicit form the optimal contract to propose to a market
maker. This contract depends essentially on the market maker inventory
trajectory and on the volatility of the asset. We also provide the optimal
quotes that should be displayed by the market maker. The simplicity of our
formulas allows us to analyze in detail the effects of optimal contracting
with an exchange, compared to a situation without contract. We show in
particular that it leads to higher quality liquidity and lower trading costs
for investors.
| 0 | 0 | 0 | 0 | 0 | 1 |
18,916 | Spontaneous symmetry breaking due to the trade-off between attractive and repulsive couplings | Spontaneous symmetry breaking (SSB) is an important phenomenon observed in
various fields including physics and biology. In this connection, we here show
that the trade-off between attractive and repulsive couplings can induce
spontaneous symmetry breaking in a homogeneous system of coupled oscillators.
With a simple model of a system of two coupled Stuart-Landau oscillators, we
demonstrate how the tendency of attractive coupling in inducing in-phase
synchronized (IPS) oscillations and the tendency of repulsive coupling in
inducing out-of-phase synchronized (OPS) oscillations compete with each other
and give rise to symmetry breaking oscillatory (SBO) states and interesting
multistabilities. Further, we provide explicit expressions for synchronized and
anti-synchronized oscillatory states as well as the so called oscillation death
(OD) state and study their stability. If the Hopf bifurcation parameter
(${\lambda}$) is greater than the natural frequency ($\omega$) of the system,
the attractive coupling favours the emergence of an anti-symmetric OD state via
a Hopf bifurcation whereas the repulsive coupling favours the emergence of a
similar state through a saddle-node bifurcation. We show that an increase in
the repulsive coupling not only destabilizes the IPS state but also facilitates
the re-entrance of the IPS state.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,917 | Evolutionary Image Composition Using Feature Covariance Matrices | Evolutionary algorithms have recently been used to create a wide range of
artistic work. In this paper, we propose a new approach for the composition of
new images from existing ones, that retain some salient features of the
original images. We introduce evolutionary algorithms that create new images
based on a fitness function that incorporates feature covariance matrices
associated with different parts of the images. This approach is very flexible
in that it can work with a wide range of features and enables targeting
specific regions in the images. For the creation of the new images, we propose
a population-based evolutionary algorithm with mutation and crossover operators
based on random walks. Our experimental results reveal a spectrum of
aesthetically pleasing images that can be obtained with the aid of our
evolutionary process.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,918 | Community Detection with Colored Edges | In this paper, we prove a sharp limit on the community detection problem with
colored edges. We assume two equal-sized communities and there are $m$
different types of edges. If two vertices are in the same community, the
distribution of edges follows $p_i=\alpha_i\log{n}/n$ for $1\leq i \leq m$,
otherwise the distribution of edges is $q_i=\beta_i\log{n}/n$ for $1\leq i \leq
m$, where $\alpha_i$ and $\beta_i$ are positive constants and $n$ is the total
number of vertices. Under these assumptions, a fundamental limit on community
detection is characterized using the Hellinger distance between the two
distributions. If $\sum_{i=1}^{m} {(\sqrt{\alpha_i} - \sqrt{\beta_i})}^2 >2$,
then the community detection via maximum likelihood (ML) estimator is possible
with high probability. If $\sum_{i=1}^m {(\sqrt{\alpha_i} - \sqrt{\beta_i})}^2
< 2$, the probability that the ML estimator fails to detect the communities
does not go to zero.
| 1 | 1 | 0 | 0 | 0 | 0 |
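The threshold in this abstract is directly computable: detection hinges on whether the Hellinger-type sum exceeds 2. A one-function sketch (the parameter values are hypothetical, chosen only to land on each side of the threshold):

```python
import math

def detection_criterion(alphas, betas):
    """Evaluate sum_i (sqrt(alpha_i) - sqrt(beta_i))^2 from the abstract:
    > 2 means exact recovery by ML succeeds with high probability;
    < 2 means the ML estimator fails with non-vanishing probability."""
    return sum((math.sqrt(a) - math.sqrt(b)) ** 2
               for a, b in zip(alphas, betas))

# Hypothetical parameters for m = 2 edge colors:
# (sqrt(9) - sqrt(1))^2 + (sqrt(4) - sqrt(1))^2 = 4 + 1 = 5 > 2.
d = detection_criterion([9.0, 4.0], [1.0, 1.0])
recoverable = d > 2
```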
18,919 | Nonlocal Neural Networks, Nonlocal Diffusion and Nonlocal Modeling | Nonlocal neural networks have been proposed and shown to be effective in
several computer vision tasks, where the nonlocal operations can directly
capture long-range dependencies in the feature space. In this paper, we study
the nature of diffusion and damping effect of nonlocal networks by doing
spectrum analysis on the weight matrices of the well-trained networks, and then
propose a new formulation of the nonlocal block. The new block not only learns
the nonlocal interactions but also has stable dynamics, thus allowing deeper
nonlocal structures. Moreover, we interpret our formulation from the general
nonlocal modeling perspective, where we make connections between the proposed
nonlocal network and other nonlocal models, such as nonlocal diffusion process
and Markov jump process.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,920 | Proof of FLT by Algebra Identities and Linear Algebra | The main aim of the present paper is to represent an exact and simple proof
for FLT by using properties of the algebra identities and linear algebra.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,921 | An Observational Diagnostic for Distinguishing Between Clouds and Haze in Hot Exoplanet Atmospheres | The nature of aerosols in hot exoplanet atmospheres is one of the primary
vexing questions facing the exoplanet field. The complex chemistry, multiple
formation pathways, and lack of easily identifiable spectral features
associated with aerosols make it especially challenging to constrain their key
properties. We propose a transmission spectroscopy technique to identify the
primary aerosol formation mechanism for the most highly irradiated hot Jupiters
(HIHJs). The technique is based on the expectation that the two key types of
aerosols -- photochemically generated hazes and equilibrium condensate clouds
-- are expected to form and persist in different regions of a highly irradiated
planet's atmosphere. Haze can only be produced on the permanent daysides of
tidally-locked hot Jupiters, and will be carried downwind by atmospheric
dynamics to the evening terminator (seen as the trailing limb during transit).
Clouds can only form in cooler regions on the night side and morning terminator
of HIHJs (seen as the leading limb during transit). Because opposite limbs are
expected to be impacted by different types of aerosols, ingress and egress
spectra, which primarily probe opposing sides of the planet, will reveal the
dominant aerosol formation mechanism. We show that the benchmark HIHJ,
WASP-121b, has a transmission spectrum consistent with partial aerosol coverage
and that ingress-egress spectroscopy would constrain the location and formation
mechanism of those aerosols. In general, using this diagnostic we find that
observations with JWST and potentially with HST should be able to distinguish
between clouds and haze for currently known HIHJs.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,922 | $\mathcal{P}$-schemes and Deterministic Polynomial Factoring over Finite Fields | We introduce a family of mathematical objects called $\mathcal{P}$-schemes,
where $\mathcal{P}$ is a poset of subgroups of a finite group $G$. A
$\mathcal{P}$-scheme is a collection of partitions of the right coset spaces
$H\backslash G$, indexed by $H\in\mathcal{P}$, that satisfies a list of axioms.
These objects generalize the classical notion of association schemes as well as
the notion of $m$-schemes (Ivanyos et al. 2009).
Based on $\mathcal{P}$-schemes, we develop a unifying framework for the
problem of deterministic factoring of univariate polynomials over finite fields
under the generalized Riemann hypothesis (GRH).
| 1 | 0 | 1 | 0 | 0 | 0 |
18,923 | Similarity Preserving Representation Learning for Time Series Analysis | A considerable amount of machine learning algorithms take instance-feature
matrices as their inputs. As such, they cannot directly analyze time series
data due to its temporal nature, usually unequal lengths, and complex
properties. This is a great pity since many of these algorithms are effective,
robust, efficient, and easy to use. In this paper, we bridge this gap by
proposing an efficient representation learning framework that is able to
convert a set of time series with equal or unequal lengths to a matrix format.
In particular, we guarantee that the pairwise similarities between time series
are well preserved after the transformation. The learned feature representation
is particularly suitable to the class of learning problems that are sensitive
to data similarities. Given a set of $n$ time series, we first construct an
$n\times n$ partially observed similarity matrix by randomly sampling $O(n \log
n)$ pairs of time series and computing their pairwise similarities. We then
propose an extremely efficient algorithm that solves a highly non-convex and
NP-hard problem to learn new features based on the partially observed
similarity matrix. We use the learned features to conduct experiments on both
data classification and clustering tasks. Our extensive experimental results
demonstrate that the proposed framework is both effective and efficient.
| 1 | 0 | 0 | 0 | 0 | 0 |
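The partially observed similarity matrix described above can be sketched directly: sample on the order of n log n unordered pairs, compute a similarity for each, and fill both symmetric entries. The similarity function below is a placeholder (negative absolute difference on equal-length toy sequences), not the paper's time-series measure, and the constant c is arbitrary:

```python
import math
import random

def partial_similarity_matrix(series, sim, c=2.0, rng=None):
    """Sample about c * n * log(n) random pairs (i, j), i != j, and
    record sim(series[i], series[j]) symmetrically; None marks
    unobserved entries (including the diagonal)."""
    rng = rng or random.Random(0)
    n = len(series)
    S = [[None] * n for _ in range(n)]
    k = int(c * n * math.log(n))
    for _ in range(k):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        S[i][j] = S[j][i] = sim(series[i], series[j])
    return S

# Placeholder similarity between equal-length toy "series".
toy = [[0, 1], [1, 2], [5, 5], [0, 0]]
sim = lambda a, b: -sum(abs(x - y) for x, y in zip(a, b))
S = partial_similarity_matrix(toy, sim)
```

The paper's contribution is what happens next: completing features from this partially observed matrix so that the sampled similarities are preserved.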
18,924 | Modeling Social Organizations as Communication Networks | We identify the "organization" of a human social group as the communication
network(s) within that group. We then introduce three theoretical approaches to
analyzing what determines the structures of human organizations. All three
approaches adopt a group-selection perspective, so that the group's network
structure is (approximately) optimal, given the information-processing
limitations of agents within the social group, and the exogenous welfare
function of the overall group. In the first approach we use a new sub-field of
telecommunications theory called network coding, and focus on a welfare
function that involves the ability of the organization to convey information
among the agents. In the second approach we focus on a scenario where agents
within the organization must allocate their future communication resources when
the state of the future environment is uncertain. We show how this formulation
can be solved with a linear program. In the third approach, we introduce an
information synthesis problem in which agents within an organization receive
information from various sources and must decide how to transform such
information and transmit the results to other agents in the organization. We
propose leveraging the computational power of neural networks to solve such
problems. These three approaches formalize and synthesize work in fields
including anthropology, archeology, economics and psychology that deal with
organization structure, theory of the firm, span of control and cognitive
limits on communication.
| 1 | 1 | 0 | 0 | 0 | 0 |
18,925 | The Ricci flow on solvmanifolds of real type | We show that for any solvable Lie group of real type, any homogeneous Ricci
flow solution converges in Cheeger-Gromov topology to a unique non-flat
solvsoliton, which is independent of the initial left-invariant metric. As an
application, we obtain results on the isometry groups of non-flat solvsoliton
metrics and Einstein solvmanifolds.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,926 | Model-based clustering of multi-tissue gene expression data | Recently, it has become feasible to generate large-scale, multi-tissue gene
expression data, where expression profiles are obtained from multiple tissues
or organs sampled from dozens to hundreds of individuals. When traditional
clustering methods are applied to this type of data, important information is
lost, because they either require all tissues to be analyzed independently,
ignoring dependencies and similarities between tissues, or merge all tissues
into a single, monolithic dataset, ignoring the individual characteristics of tissues.
We developed a Bayesian model-based multi-tissue clustering algorithm, revamp,
which can incorporate prior information on physiological tissue similarity, and
which results in a set of clusters, each consisting of a core set of genes
conserved across tissues as well as differential sets of genes specific to one
or more subsets of tissues. Using data from seven vascular and metabolic
tissues from over 100 individuals in the STockholm Atherosclerosis Gene
Expression (STAGE) study, we demonstrate that multi-tissue clusters inferred by
revamp are more enriched for tissue-dependent protein-protein interactions
compared to alternative approaches. We further demonstrate that revamp results
in easily interpretable multi-tissue gene expression associations to key
coronary artery disease processes and clinical phenotypes in the STAGE
individuals. Revamp is implemented in the Lemon-Tree software, available at
this https URL
| 0 | 0 | 0 | 0 | 1 | 0 |
18,927 | Fastest Convergence for Q-learning | The Zap Q-learning algorithm introduced in this paper is an improvement of
Watkins' original algorithm and recent competitors in several respects. It is a
matrix-gain algorithm designed so that its asymptotic variance is optimal.
Moreover, an ODE analysis suggests that the transient behavior is a close match
to a deterministic Newton-Raphson implementation. This is made possible by a
two time-scale update equation for the matrix gain sequence.
The analysis suggests that the approach will lead to stable and efficient
computation even for non-ideal parameterized settings. Numerical experiments
confirm the quick convergence, even in such non-ideal cases.
A secondary goal of this paper is tutorial. The first half of the paper
contains a survey on reinforcement learning algorithms, with a focus on minimum
variance algorithms.
| 1 | 0 | 1 | 0 | 0 | 0 |
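For contrast with the matrix-gain Zap update, here is the scalar-step Watkins Q-learning baseline that the survey half of the paper improves upon, run on a hypothetical two-state deterministic MDP (the environment is invented for illustration; it is not from the paper):

```python
import random

# Deterministic toy MDP: taking action a moves to state a; reward 1 for
# landing in state 1, else 0.  With discount gamma = 0.9 the optimal
# values are Q*(s, 1) = 1/(1 - 0.9) = 10 and Q*(s, 0) = 0.9 * 10 = 9.
GAMMA, ALPHA = 0.9, 0.5

def q_learning(steps=2000, rng=None):
    rng = rng or random.Random(0)
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    s = 0
    for _ in range(steps):
        a = rng.choice((0, 1))            # pure exploration
        s2, r = a, float(a == 1)          # deterministic step and reward
        target = r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)])
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])  # scalar-gain update
        s = s2
    return Q

Q = q_learning()
```

The Zap variant replaces the scalar step ALPHA with a two-time-scale matrix gain chosen to minimize asymptotic variance; this baseline only shows the update that gain replaces.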
18,928 | New Bounds on the Field Size for Maximally Recoverable Codes Instantiating Grid-like Topologies | In recent years, the rapidly increasing amounts of data created and processed
through the internet have resulted in distributed storage systems employing
erasure-coding-based schemes. Aiming to balance the tradeoff between data recovery for
correlated failures and efficient encoding and decoding, distributed storage
systems employing maximally recoverable codes have emerged. Unifying a number of
topologies considered both in theory and practice, Gopalan \cite{Gopalan2017}
initiated the study of maximally recoverable codes for grid-like topologies.
In this paper, we focus on the maximally recoverable codes that instantiate
grid-like topologies $T_{m\times n}(1,b,0)$. To characterize the property of
codes for these topologies, we introduce the notion of \emph{pseudo-parity
check matrix}. Then, using the hypergraph independent set approach, we
establish the first polynomial upper bound on the field size needed for
achieving the maximal recoverability in topologies $T_{m\times n}(1,b,0)$, when
$n$ is large enough. And we further improve this general upper bound for
topologies $T_{4\times n}(1,2,0)$ and $T_{3\times n}(1,3,0)$. By relating the
problem to generalized \emph{Sidon sets} in $\mathbb{F}_q$, we also obtain
non-trivial lower bounds on the field size for maximally recoverable codes that
instantiate topologies $T_{4\times n}(1,2,0)$ and $T_{3\times n}(1,3,0)$.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,929 | Two-Step Disentanglement for Financial Data | In this work, we address the problem of disentanglement of factors that
generate a given data into those that are correlated with the labeling and
those that are not. Our solution is simpler than previous solutions and employs
adversarial training in a straightforward manner. We demonstrate the new method
on visual datasets as well as on financial data. In order to evaluate the
latter, we developed a hypothetical trading strategy whose performance is
affected by the performance of the disentanglement, namely, it trades better
when the factors are better separated.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,930 | Optimal control of a Vlasov-Poisson plasma by an external magnetic field - Analysis of a tracking type optimal control problem | In the paper "Optimal control of a Vlasov-Poisson plasma by an external
magnetic field - The basics for variational calculus" [arXiv:1708.02464] we
have already introduced a set of admissible magnetic fields and we have proved
that each of those fields induces a unique strong solution of the
Vlasov-Poisson system. We have also established that the field-state operator
that maps any admissible field onto its corresponding solution is continuous
and weakly compact. In this paper we will show that this operator is also
Fréchet differentiable and we will continue to analyze the optimal control
problem that was introduced in [arXiv:1708.02464]. More precisely, we will
establish necessary and sufficient conditions for local optimality and we will
show that an optimal solution is unique under certain conditions.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,931 | Towards an algebraic natural proofs barrier via polynomial identity testing | We observe that a certain kind of algebraic proof - which covers essentially
all known algebraic circuit lower bounds to date - cannot be used to prove
lower bounds against VP if and only if what we call succinct hitting sets exist
for VP. This is analogous to the Razborov-Rudich natural proofs barrier in
Boolean circuit complexity, in that we rule out a large class of lower bound
techniques under a derandomization assumption. We also discuss connections
between this algebraic natural proofs barrier, geometric complexity theory, and
(algebraic) proof complexity.
| 1 | 0 | 1 | 0 | 0 | 0 |
18,932 | In Defense of the Indefensible: A Very Naive Approach to High-Dimensional Inference | In recent years, a great deal of interest has focused on conducting inference
on the parameters in a linear model in the high-dimensional setting. In this
paper, we consider a simple and very naïve two-step procedure for this
task, in which we (i) fit a lasso model in order to obtain a subset of the
variables; and (ii) fit a least squares model on the lasso-selected set.
Conventional statistical wisdom tells us that we cannot make use of the
standard statistical inference tools for the resulting least squares model
(such as confidence intervals and $p$-values), since we peeked at the data
twice: once in running the lasso, and again in fitting the least squares model.
However, in this paper, we show that under a certain set of assumptions, with
high probability, the set of variables selected by the lasso is deterministic.
Consequently, the naïve two-step approach can yield confidence intervals
that have asymptotically correct coverage, as well as p-values with proper
Type-I error control. Furthermore, this two-step approach unifies two existing
camps of work on high-dimensional inference: one camp has focused on inference
based on a sub-model selected by the lasso, and the other has focused on
inference using a debiased version of the lasso estimator.
| 0 | 0 | 1 | 1 | 0 | 0 |
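The two-step procedure in the abstract above can be sketched in a few lines. The snippet below is an illustrative simplification, not the authors' implementation: it uses a design matrix with orthonormal columns, for which the lasso reduces to coordinate-wise soft-thresholding of $X^\top y$ and the OLS refit on the selected columns reduces to the same inner products; the data and penalty level are made up.

```python
# Sketch of the naive two-step procedure: (i) lasso selection, (ii) OLS refit.
# Simplification: the design matrix has orthonormal columns, so the lasso
# solution is coordinate-wise soft-thresholding of X^T y, and the OLS refit on
# the selected columns reduces to the same inner products.

def soft_threshold(z, lam):
    """Lasso solution for a single coordinate under an orthonormal design."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def two_step(X, y, lam):
    p = len(X[0])
    xty = [sum(X[i][j] * y[i] for i in range(len(y))) for j in range(p)]
    lasso = [soft_threshold(v, lam) for v in xty]
    selected = [j for j, b in enumerate(lasso) if b != 0.0]  # step (i)
    ols = [xty[j] for j in selected]                         # step (ii)
    return selected, ols

X = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]  # orthonormal columns
y = [2.0, 0.1, 1.5, 0.3]
selected, ols = two_step(X, y, lam=0.5)  # selects variables 0 and 2
```

The point of the abstract is that, under its assumptions, the selected set is deterministic with high probability, which is what licenses standard inference on the refitted OLS coefficients.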
18,933 | Asymptotic Analysis of Plausible Tree Hash Modes for SHA-3 | Discussions about the choice of a tree hash mode of operation for standardization have recently been undertaken. It appears that a single tree mode cannot adequately address all possible uses and specifications of a
system. In this paper, we review the tree modes which have been proposed, we
discuss their problems and propose remedies. We make the reasonable assumption
that communicating systems have different specifications and that software
applications are of different types (securing stored content or live-streamed
content). Finally, we propose new modes of operation that address the resource
usage problem for the three most representative categories of devices and we
analyse their asymptotic behavior.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,934 | Neurally Plausible Model of Robot Reaching Inspired by Infant Motor Babbling | In this paper we present a neurally plausible model of robot reaching
inspired by human infant reaching that is based on embodied artificial
intelligence, which emphasizes the importance of the sensory-motor interaction
of an agent and the world. This model encompasses both learning sensory-motor
correlations through motor babbling and also arm motion planning using
spreading activation. This model is organized in three layers of neural maps
with parallel structures representing the same sensory-motor space. The motor
babbling period shapes the structure of the three neural maps as well as the
connections within and between them. We describe an implementation of this
model and an investigation of this implementation using a simple reaching task
on a humanoid robot. The robot successfully learned to plan reaching
motions from a test set with high accuracy and smoothness.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,935 | When Will AI Exceed Human Performance? Evidence from AI Experts | Advances in artificial intelligence (AI) will transform modern life by
reshaping transportation, health, science, finance, and the military. To adapt
public policy, we need to better anticipate these advances. Here we report the
results from a large survey of machine learning researchers on their beliefs
about progress in AI. Researchers predict AI will outperform humans in many
activities in the next ten years, such as translating languages (by 2024),
writing high-school essays (by 2026), driving a truck (by 2027), working in
retail (by 2031), writing a bestselling book (by 2049), and working as a
surgeon (by 2053). Researchers believe there is a 50% chance of AI
outperforming humans in all tasks in 45 years and of automating all human jobs
in 120 years, with Asian respondents expecting these dates much sooner than
North Americans. These results will inform discussion amongst researchers and
policymakers about anticipating and managing trends in AI.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,936 | E-PUR: An Energy-Efficient Processing Unit for Recurrent Neural Networks | Recurrent Neural Networks (RNNs) are a key technology for emerging
applications such as automatic speech recognition, machine translation or image
description. Long Short Term Memory (LSTM) networks are the most successful RNN
implementation, as they can learn long term dependencies to achieve high
accuracy. Unfortunately, the recurrent nature of LSTM networks significantly
constrains the amount of parallelism and, hence, multicore CPUs and many-core
GPUs exhibit poor efficiency for RNN inference. In this paper, we present
E-PUR, an energy-efficient processing unit tailored to the requirements of LSTM
computation. The main goal of E-PUR is to support large recurrent neural
networks for low-power mobile devices. E-PUR provides an efficient hardware
implementation of LSTM networks that is flexible to support diverse
applications. One of its main novelties is a technique that we call Maximizing
Weight Locality (MWL), which improves the temporal locality of the memory
accesses for fetching the synaptic weights, reducing the memory requirements by
a large extent. Our experimental results show that E-PUR achieves real-time
performance for different LSTM networks, while reducing energy consumption by
orders of magnitude with respect to general-purpose processors and GPUs, and it
requires a very small chip area. Compared to a modern mobile SoC, an NVIDIA
Tegra X1, E-PUR provides an average energy reduction of 92x.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,937 | Increasing Geminid meteor shower activity | Mathematical modelling has shown that activity of the Geminid meteor shower
should rise with time, and that was confirmed by analysis of visual
observations 1985--2016. We do not expect any outburst activity of the Geminid
shower in 2017, even though the asteroid (3200) Phaethon makes a close approach to Earth in December 2017. A small probability of observing dust ejected at the 2009--2016 perihelia still exists.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,938 | The Bennett-Orlicz norm | Lederer and van de Geer (2013) introduced a new Orlicz norm, the
Bernstein-Orlicz norm, which is connected to Bernstein type inequalities. Here
we introduce another Orlicz norm, the Bennett-Orlicz norm, which is connected
to Bennett type inequalities. The new Bennett-Orlicz norm yields inequalities
for expectations of maxima which are potentially somewhat tighter than those
resulting from the Bernstein-Orlicz norm when they are both applicable. We
discuss cross connections between these norms, exponential inequalities of the
Bernstein, Bennett, and Prokhorov types, and make comparisons with results of
Talagrand (1989, 1994), and Boucheron, Lugosi, and Massart (2013).
| 0 | 0 | 1 | 1 | 0 | 0 |
18,939 | Temporal Justification Logic | Justification logics are modal-like logics with the additional capability of
recording the reason, or justification, for modalities in syntactic structures,
called justification terms. Justification logics can be seen as explicit
counterparts to modal logics. The behavior and interaction of agents in
distributed systems are often modeled using logics of knowledge and time. In this
paper, we sketch some preliminary ideas on how the modal knowledge part of such
logics of knowledge and time could be replaced with an appropriate
justification logic.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,940 | Simultaneous Multiparty Communication Complexity of Composed Functions | In the Number On the Forehead (NOF) multiparty communication model, $k$
players want to evaluate a function $F : X_1 \times\cdots\times X_k\rightarrow
Y$ on some input $(x_1,\dots,x_k)$ by broadcasting bits according to a
predetermined protocol. The input is distributed in such a way that each player
$i$ sees all of it except $x_i$. In the simultaneous setting, the players
cannot speak to each other but instead send information to a referee. The
referee does not know the players' input, and cannot give any information back.
At the end, the referee must be able to recover $F(x_1,\dots,x_k)$ from what
she obtained.
A central open question, called the $\log n$ barrier, is to find a function
which is hard to compute for $polylog(n)$ or more players (where the $x_i$'s
have size $poly(n)$) in the simultaneous NOF model. This has important
applications in circuit complexity, as it could help to separate $ACC^0$ from
other complexity classes. One of the candidates belongs to the family of
composed functions. The input to these functions is represented by a $k\times
(t\cdot n)$ boolean matrix $M$, whose row $i$ is the input $x_i$ and $t$ is a
block-width parameter. A symmetric composed function acting on $M$ is specified
by two symmetric $n$- and $kt$-variate functions $f$ and $g$, that output
$f\circ g(M)=f(g(B_1),\dots,g(B_n))$ where $B_j$ is the $j$-th block of width
$t$ of $M$. As the majority function $MAJ$ is conjectured to be outside of
$ACC^0$, Babai et al. suggested studying $MAJ\circ MAJ_t$, with $t$ large
enough.
So far, it was only known that $t=1$ is not enough for $MAJ\circ MAJ_t$ to
break the $\log n$ barrier in the simultaneous deterministic NOF model. In this
paper, we extend this result to any constant block-width $t>1$, by giving a
protocol of cost $2^{O(2^t)}\log^{2^{t+1}}(n)$ for any symmetric composed
function when there are $2^{\Omega(2^t)}\log n$ players.
| 1 | 0 | 0 | 0 | 0 | 0 |
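To make the composed-function setup above concrete, the sketch below evaluates $MAJ\circ MAJ_t$ on a small $k\times (t\cdot n)$ boolean matrix. The input values are made up; this illustrates the function being computed, not a communication protocol.

```python
# Evaluating the symmetric composed function MAJ o MAJ_t on a k x (t*n)
# boolean matrix M: each k x t block B_j is collapsed by the majority of its
# k*t entries, and the outer majority is taken over the n block values.

def maj(bits):
    """Majority: 1 iff more than half of the bits are 1."""
    return 1 if 2 * sum(bits) > len(bits) else 0

def compose_maj(M, t):
    k, cols = len(M), len(M[0])
    n = cols // t
    block_vals = [maj([M[i][j * t + s] for i in range(k) for s in range(t)])
                  for j in range(n)]  # g(B_1), ..., g(B_n)
    return maj(block_vals)           # f = MAJ over the n block values

M = [[1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 0],
     [0, 1, 1, 1, 0, 0]]  # k = 3 players' rows, t = 2, n = 3 blocks
result = compose_maj(M, t=2)
```

In the NOF setting, row $i$ of $M$ is exactly the part of the input that player $i$ cannot see.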
18,941 | Online Learning for Changing Environments using Coin Betting | A key challenge in online learning is that classical algorithms can be slow
to adapt to changing environments. Recent studies have proposed "meta"
algorithms that convert any online learning algorithm to one that is adaptive
to changing environments, where the adaptivity is analyzed in a quantity called
the strongly-adaptive regret. This paper describes a new meta algorithm that
has a strongly-adaptive regret bound that is a factor of $\sqrt{\log(T)}$
better than other algorithms with the same time complexity, where $T$ is the
time horizon. We also extend our algorithm to achieve a first-order (i.e.,
dependent on the observed losses) strongly-adaptive regret bound for the first
time, to our knowledge. At its heart is a new parameter-free algorithm for the
learning with expert advice (LEA) problem in which experts sometimes do not
output advice for consecutive time steps (i.e., \emph{sleeping} experts). This
algorithm is derived by a reduction from optimal algorithms for the so-called
coin betting problem. Empirical results show that our algorithm outperforms
state-of-the-art methods in both learning with expert advice and metric
learning scenarios.
| 1 | 0 | 0 | 1 | 0 | 0 |
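As a rough illustration of the coin-betting primitive mentioned above, the sketch below implements a one-dimensional Krichevsky-Trofimov-style bettor, which bets a data-dependent fraction of its current wealth with no tuned learning rate. This is only a building block of such reductions, not the paper's meta algorithm; the outcome sequence is made up.

```python
# One-dimensional coin-betting sketch (Krichevsky-Trofimov-style bettor), a
# building block behind coin-betting reductions for learning with expert
# advice. Not the paper's meta algorithm; outcomes are made up.

def kt_bettor(outcomes, initial_wealth=1.0):
    wealth = initial_wealth
    grad_sum = 0.0
    for t, g in enumerate(outcomes, start=1):  # g in [-1, 1]
        bet = (grad_sum / t) * wealth          # signed fraction of wealth
        wealth += g * bet                      # win or lose the bet on outcome g
        grad_sum += g
    return wealth

# On a biased coin the bettor's wealth grows without any tuned parameter.
outcomes = [1] * 40 + [-1] * 10
final = kt_bettor(outcomes)
```

Since the bet fraction has magnitude below one, the wealth always stays positive, which is what the reduction to online learning exploits.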
18,942 | On the global sup-norm of GL(3) cusp forms | Let $\phi$ be a spherical Hecke-Maass cusp form on the non-compact space
$\mathrm{PGL}_3(\mathbb{Z})\backslash\mathrm{PGL}_3(\mathbb{R})$. We establish
various pointwise upper bounds for $\phi$ in terms of its Laplace eigenvalue
$\lambda_\phi$. These imply, for $\phi$ arithmetically normalized and tempered
at the archimedean place, the bound $\|\phi\|_\infty\ll_\epsilon
\lambda_{\phi}^{39/40+\epsilon}$ for the global sup-norm (without restriction
to a compact subset). On the way, we derive a new uniform upper bound for the
$\mathrm{GL}_3$ Jacquet-Whittaker function.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,943 | Four revolutions in physics and the second quantum revolution -- a unification of force and matter by quantum information | Newton's mechanical revolution unifies the motion of planets in the sky and
falling of apple on earth. Maxwell's electromagnetic revolution unifies
electricity, magnetism, and light. Einstein's relativity revolution unifies
space with time, and gravity with space-time distortion. The quantum revolution
unifies particle with waves, and energy with frequency. Each of those
revolution changes our world view. In this article, we will describe a
revolution that is happening now: the second quantum revolution which unifies
matter/space with information. In other words, the new world view suggests that
elementary particles (the bosonic force particles and fermionic matter
particles) all originated from quantum information (qubits): they are
collective excitations of an entangled qubit ocean that corresponds to our
space. The beautiful geometric Yang-Mills gauge theory and the strange Fermi
statistics of matter particles now have a common algebraic quantum
informational origin.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,944 | Concept Drift Learning with Alternating Learners | Data-driven predictive analytics are in use today across a number of
industrial applications, but further integration is hindered by the requirement
of similarity between training and test data distributions. This paper
addresses the need of learning from possibly nonstationary data streams, or
under concept drift, a commonly seen phenomenon in practical applications. A
simple dual-learner ensemble strategy, alternating learners framework, is
proposed. A long-memory model learns stable concepts from a long relevant time
window, while a short-memory model learns transient concepts from a small
recent window. The difference in prediction performance of these two models is
monitored and induces an alternating policy to select, update and reset the two
models. The method features an online updating mechanism to maintain the
ensemble accuracy, and a concept-dependent trigger to focus on relevant data.
Through empirical studies the method demonstrates effective tracking and
prediction when the streaming data carry abrupt and/or gradual changes.
| 1 | 0 | 0 | 1 | 0 | 0 |
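A minimal sketch of the dual-learner idea above: a long-memory and a short-memory running-mean model track a drifting stream, and the long-memory model is reset from the short one when the latter clearly outperforms. Window sizes, threshold and the synthetic stream are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of the alternating-learners idea: a long-memory and a
# short-memory model track a drifting stream; when the short-memory model
# clearly outperforms, drift is flagged and the long-memory model is reset.
# Window sizes, threshold and the synthetic stream are illustrative choices.

from collections import deque

def run(stream, long_win=100, short_win=10, threshold=1.0):
    long_buf, short_buf = deque(maxlen=long_win), deque(maxlen=short_win)
    resets = 0
    for y in stream:
        if long_buf and short_buf:
            err_long = abs(sum(long_buf) / len(long_buf) - y)
            err_short = abs(sum(short_buf) / len(short_buf) - y)
            if err_long - err_short > threshold:  # drift: trust short memory
                long_buf = deque(short_buf, maxlen=long_win)
                resets += 1
        long_buf.append(y)
        short_buf.append(y)
    return sum(long_buf) / len(long_buf), resets

# Abrupt mean shift from 0 to 5 halfway through the stream.
stream = [0.0] * 200 + [5.0] * 200
pred, resets = run(stream)
```

After the shift the short-memory model adapts first, triggers a reset, and the long-memory prediction converges to the new concept.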
18,945 | A new proof of the competitive exclusion principle in the chemostat | We give a new proof of the well-known competitive exclusion principle in the chemostat model with $n$ species competing for a single resource, for any set of increasing growth functions. The proof proceeds by induction on the number of species, once they are suitably ordered. It uses elementary analysis and
comparisons of solutions of ordinary differential equations.
| 0 | 0 | 1 | 0 | 0 | 0 |
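The principle above can be illustrated numerically. The sketch below runs a forward-Euler simulation of the chemostat with two species, Monod (increasing) growth functions and unit yields; all parameter values are made up. The species with the lower break-even substrate concentration $\lambda_i = DK_i/(m_i - D)$ is expected to exclude the other.

```python
# Illustrative Euler simulation of the chemostat with two species competing
# for a single resource under Monod (increasing) growth and unit yields.
# Parameters are made up; the competitive exclusion principle predicts the
# species with the lowest break-even concentration lambda_i = D*K_i/(m_i - D)
# drives the other to extinction.

def monod(s, m, K):
    return m * s / (K + s)

def simulate(T=400.0, dt=0.01):
    D, s_in = 0.5, 2.0                # dilution rate, input substrate level
    m, K = [2.0, 2.0], [1.0, 2.0]     # lambda_1 = 1/3 < lambda_2 = 2/3
    s, x = 1.0, [0.5, 0.5]
    for _ in range(int(T / dt)):
        mu = [monod(s, m[i], K[i]) for i in range(2)]
        ds = D * (s_in - s) - sum(mu[i] * x[i] for i in range(2))
        x = [x[i] + dt * (mu[i] - D) * x[i] for i in range(2)]
        s = s + dt * ds
    return s, x

s, (x1, x2) = simulate()  # species 1 survives, species 2 washes out
```

At equilibrium the substrate settles at the winner's break-even value $\lambda_1 = 1/3$, and the winner's density approaches $s_{in} - \lambda_1$.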
18,946 | From Random Differential Equations to Structural Causal Models: the stochastic case | Random Differential Equations provide a natural extension of Ordinary
Differential Equations to the stochastic setting. We show how, and under which
conditions, every equilibrium state of a Random Differential Equation (RDE) can
be described by a Structural Causal Model (SCM), while preserving the causal
semantics. This provides an SCM that captures the stochastic and causal
behavior of the RDE, which can model both cycles and confounders. This enables
the study of the equilibrium states of the RDE by applying the theory and
statistical tools available for SCMs, for example, marginalizations and Markov
properties, as we illustrate by means of an example. Our work thus provides a
direct connection between two fields that so far have been developing in
isolation.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,947 | Ties That Bind - Characterizing Classes by Attributes and Social Ties | Given a set of attributed subgraphs known to be from different classes, how
can we discover their differences? There are many cases where collections of
subgraphs may be contrasted against each other. For example, they may be
assigned ground truth labels (spam/not-spam), or it may be desired to directly
compare the biological networks of different species or compound networks of
different chemicals.
In this work we introduce the problem of characterizing the differences
between attributed subgraphs that belong to different classes. We define this
characterization problem as one of partitioning the attributes into as many
groups as the number of classes, while maximizing the total attributed quality
score of all the given subgraphs.
We show that our attribute-to-class assignment problem is NP-hard and an
optimal $(1 - 1/e)$-approximation algorithm exists. We also propose two
different faster heuristics that are linear-time in the number of attributes
and subgraphs. Unlike previous work where only attributes were taken into
account for characterization, here we exploit both attributes and social ties
(i.e. graph structure).
Through extensive experiments, we compare our proposed algorithms and report findings that agree with human intuition on datasets from Amazon co-purchases,
Congressional bill sponsorships, and DBLP co-authorships. We also show that our
approach of characterizing subgraphs is better suited for sense-making than
discriminating classification approaches.
| 1 | 1 | 0 | 0 | 0 | 0 |
18,948 | Probing dark matter with star clusters: a dark matter core in the ultra-faint dwarf Eridanus II | We present a new technique to probe the central dark matter (DM) density
profile of galaxies that harnesses both the survival and observed properties of
star clusters. As a first application, we apply our method to the `ultra-faint'
dwarf Eridanus II (Eri II) that has a lone star cluster ~45 pc from its centre.
Using a grid of collisional $N$-body simulations, incorporating the effects of
stellar evolution, external tides and dynamical friction, we show that a DM
core for Eri II naturally reproduces the size and the projected position of its
star cluster. By contrast, a dense cusped galaxy requires the cluster to lie
implausibly far from the centre of Eri II (>1 kpc), with a high inclination
orbit that must be observed at a particular orbital phase. Our results,
therefore, favour a dark matter core. This implies that either a cold DM cusp
was `heated up' at the centre of Eri II by bursty star formation, or we are
seeing evidence for physics beyond cold DM.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,949 | Predicting Foreground Object Ambiguity and Efficiently Crowdsourcing the Segmentation(s) | We propose the ambiguity problem for the foreground object segmentation task
and motivate the importance of estimating and accounting for this ambiguity
when designing vision systems. Specifically, we distinguish between images
which lead multiple annotators to segment different foreground objects
(ambiguous) versus minor inter-annotator differences of the same object. Taking
images from eight widely used datasets, we crowdsource labeling the images as
"ambiguous" or "not ambiguous" to segment in order to construct a new dataset
we call STATIC. Using STATIC, we develop a system that automatically predicts
which images are ambiguous. Experiments demonstrate the advantage of our
prediction system over existing saliency-based methods on images from vision
benchmarks and images taken by blind people who are trying to recognize objects
in their environment. Finally, we introduce a crowdsourcing system to achieve
cost savings for collecting the diversity of all valid "ground truth"
foreground object segmentations by collecting extra segmentations only when
ambiguity is expected. Experiments show our system eliminates up to 47% of
human effort compared to existing crowdsourcing methods with no loss in
capturing the diversity of ground truths.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,950 | On Some Generalized Polyhedral Convex Constructions | Generalized polyhedral convex sets, generalized polyhedral convex functions
on locally convex Hausdorff topological vector spaces, and the related
constructions such as sum of sets, sum of functions, directional derivative,
infimal convolution, normal cone, conjugate function, subdifferential, are
studied thoroughly in this paper. Among other things, we show how a generalized
polyhedral convex set can be characterized via the finiteness of the number of
its faces. In addition, it is proved that the infimal convolution of a
generalized polyhedral convex function and a polyhedral convex function is a
polyhedral convex function. The obtained results can be applied to scalar
optimization problems described by generalized polyhedral convex sets and
generalized polyhedral convex functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,951 | mGPfusion: Predicting protein stability changes with Gaussian process kernel learning and data fusion | Proteins are commonly used by the biochemical industry for numerous processes. Refining these proteins' properties via mutations also affects their stability. Accurate computational methods to predict how mutations affect protein stability are necessary to facilitate efficient protein design. However, the accuracy of predictive models is ultimately constrained by the limited availability of experimental data. We have developed mGPfusion, a novel Gaussian process (GP) method for predicting a protein's stability changes upon
single and multiple mutations. This method complements the limited experimental
data with large amounts of molecular simulation data. We introduce a Bayesian
data fusion model that re-calibrates the experimental and in silico data
sources and then learns a predictive GP model from the combined data. Our
protein-specific model requires experimental data only regarding the protein of
interest and performs well even with few experimental measurements. The
mGPfusion models proteins by contact maps and infers the stability effects
caused by mutations with a mixture of graph kernels. Our results show that
mGPfusion outperforms state-of-the-art methods in predicting protein stability
on a dataset of 15 different proteins and that incorporating molecular
simulation data improves the model learning and prediction accuracy.
| 0 | 0 | 0 | 1 | 1 | 0 |
18,952 | How Do Software Startups Pivot? Empirical Results from a Multiple Case Study | In order to handle intense time pressure and survive in dynamic market,
software startups have to make crucial decisions constantly on whether to
change directions or stay on chosen courses, or in the terms of Lean Startup,
to pivot or to persevere. The existing research and knowledge on software
startup pivots are very limited. In this study, we focused on understanding the
pivoting processes of software startups, and identified the triggering factors
and pivot types. To achieve this, we employed a multiple case study approach,
and analyzed the data obtained from four software startups. The initial
findings show that different software startups make different types of pivots
related to business and technology during their product development life cycle.
The pivots are triggered by various factors including negative customer
feedback.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,953 | Some criteria for Wind Riemannian completeness and existence of Cauchy hypersurfaces | Recently, a link between Lorentzian and Finslerian Geometries has been
established, leading to the notion of wind Riemannian structure (WRS), a
generalization of Finslerian Randers metrics. Here, we further develop this
notion and its applications to spacetimes, by introducing some
characterizations and criteria for the completeness of WRS's.
As an application, we consider a general class of spacetimes admitting a time
function $t$ generated by the flow of a complete Killing vector field
(generalized standard stationary spacetimes or, more precisely, SSTK ones) and
derive simple criteria ensuring that its slices $t=$ constant are Cauchy.
Moreover, a brief summary on the Finsler/Lorentz link for readers with some
acquaintance in Lorentzian Geometry, plus some simple examples in Mathematical
Relativity, are provided.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,954 | A generalization of Schönemann's theorem via a graph theoretic method | Recently, Grynkiewicz et al. [{\it Israel J. Math.} {\bf 193} (2013),
359--398], using tools from additive combinatorics and group theory, proved
necessary and sufficient conditions under which the linear congruence
$a_1x_1+\cdots +a_kx_k\equiv b \pmod{n}$, where $a_1,\ldots,a_k,b,n$ ($n\geq
1$) are arbitrary integers, has a solution $\langle x_1,\ldots,x_k \rangle \in
\Z_{n}^k$ with all $x_i$ distinct modulo $n$. So, it would be an interesting
problem to give an explicit formula for the number of such solutions. Quite
surprisingly, this problem was first considered, in a special case, by
Schönemann almost two centuries ago(!) but his result seems to have been
forgotten. Schönemann [{\it J. Reine Angew. Math.} {\bf 1839} (1839),
231--243] proved an explicit formula for the number of such solutions when
$b=0$, $n=p$ a prime, and $\sum_{i=1}^k a_i \equiv 0 \pmod{p}$ but $\sum_{i \in
I} a_i \not\equiv 0 \pmod{p}$ for all $I\varsubsetneq \lbrace 1, \ldots,
k\rbrace$. In this paper, we generalize Schönemann's theorem using a result
on the number of solutions of linear congruences due to D. N. Lehmer and also a
result on graph enumeration recently obtained by Ardila et al. [{\it Int. Math.
Res. Not.} {\bf 2015} (2015), 3830--3877]. This seems to be a rather uncommon
method in the area; besides, our proof technique or its modifications may be
useful for dealing with other cases of this problem (or even the general case)
or other relevant problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
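The counting problem above is easy to explore by brute force for tiny parameters. The sketch below counts solutions of $a_1x_1+\cdots+a_kx_k\equiv 0 \pmod{p}$ with the $x_i$ pairwise distinct modulo $p$, for an example satisfying Schönemann's hypotheses; the specific $a$ and $p$ are made up for illustration.

```python
# Brute-force count of solutions of a_1 x_1 + ... + a_k x_k = 0 (mod p) with
# the x_i pairwise distinct mod p. The example a = (1, 1, 3), p = 5 satisfies
# Schoenemann's hypotheses: sum a_i = 0 (mod p), but no proper subset of the
# a_i sums to 0 (mod p).

from itertools import permutations

def count_distinct_solutions(a, p):
    k = len(a)
    return sum(1 for xs in permutations(range(p), k)
               if sum(ai * xi for ai, xi in zip(a, xs)) % p == 0)

count = count_distinct_solutions((1, 1, 3), 5)
# Because sum a_i = 0 (mod p), translating all x_i by a constant maps
# solutions to solutions, so the count is always a multiple of p.
```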
18,955 | Map-based Multi-Policy Reinforcement Learning: Enhancing Adaptability of Robots by Deep Reinforcement Learning | In order for robots to perform mission-critical tasks, it is essential that
they are able to quickly adapt to changes in their environment as well as to
injuries and or other bodily changes. Deep reinforcement learning has been
shown to be successful in training robot control policies for operation in
complex environments. However, existing methods typically employ only a single
policy. This can limit the adaptability since a large environmental
modification might require a completely different behavior compared to the
learning environment. To solve this problem, we propose Map-based Multi-Policy
Reinforcement Learning (MMPRL), which aims to search and store multiple
policies that encode different behavioral features while maximizing the
expected reward in advance of the environment change. Thanks to these policies,
which are stored in a multi-dimensional discrete map according to their behavioral features, adaptation can be performed within a reasonable time without
retraining the robot. An appropriate pre-trained policy from the map can be
recalled using Bayesian optimization. Our experiments show that MMPRL enables
robots to quickly adapt to large changes without requiring any prior knowledge
on the type of injuries that could occur. A highlight of the learned behaviors
can be found here: this https URL .
| 1 | 0 | 0 | 0 | 0 | 0 |
18,956 | Testing convexity of a discrete distribution | Based on the convex least-squares estimator, we propose two different
procedures for testing convexity of a probability mass function supported on N
with an unknown finite support. The procedures are shown to be asymptotically
calibrated.
| 0 | 0 | 1 | 1 | 0 | 0 |
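For concreteness, the convexity property being tested above can be stated via second differences: a pmf $p$ is convex iff $p(k+1)-2p(k)+p(k-1)\geq 0$ at every interior point. The naive check below only illustrates this condition on an observed support; the paper's procedures are built on the convex least-squares estimator, not on this check.

```python
# A pmf p on {0, ..., S} is convex iff its second differences are
# nonnegative at every interior point: p(k+1) - 2 p(k) + p(k-1) >= 0.
# This sketch checks the condition directly (the paper's tests are based on
# the convex least-squares estimator, not on this naive check).

def is_convex_pmf(p, tol=1e-12):
    return all(p[k + 1] - 2 * p[k] + p[k - 1] >= -tol
               for k in range(1, len(p) - 1))

# A geometric-type pmf p(k) proportional to r^k is convex ...
r = 0.5
geom = [(1 - r) * r ** k for k in range(10)]
# ... while a unimodal pmf with an interior mode is not.
triangle = [0.1, 0.3, 0.4, 0.15, 0.05]
```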
18,957 | Small and Strong Formulations for Unions of Convex Sets from the Cayley Embedding | There is often a significant trade-off between formulation strength and size
in mixed integer programming (MIP). When modeling convex disjunctive
constraints (e.g. unions of convex sets), adding auxiliary continuous variables
can sometimes help resolve this trade-off. However, standard formulations that
use such auxiliary continuous variables can have a worse-than-expected
computational effectiveness, which is often attributed precisely to these
auxiliary continuous variables. For this reason, there has been considerable
interest in constructing strong formulations that do not use continuous
auxiliary variables. We introduce a technique to construct formulations without
these detrimental continuous auxiliary variables. To develop this technique we
introduce a natural non-polyhedral generalization of the Cayley embedding of a
family of polytopes and show it inherits many geometric properties of the
original embedding. We then show how the associated formulation technique can
be used to construct small and strong formulations for a wide range of
disjunctive constraints. In particular, we show it can recover and generalize
all known strong formulations without continuous auxiliary variables.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,958 | Safe Robotic Grasping: Minimum Impact-Force Grasp Selection | This paper addresses the problem of selecting from a choice of possible
grasps, so that impact forces will be minimised if a collision occurs while the
robot is moving the grasped object along a post-grasp trajectory. Such
considerations are important for safety in human-robot interaction, where even
a certified "human-safe" (e.g. compliant) arm may become hazardous once it
grasps and begins moving an object, which may have significant mass, sharp
edges or other dangers. Additionally, minimising collision forces is critical
to preserving the longevity of robots which operate in uncertain and hazardous
environments, e.g. robots deployed for nuclear decommissioning, where removing
a damaged robot from a contaminated zone for repairs may be extremely difficult
and costly. Also, unwanted collisions between a robot and critical
infrastructure (e.g. pipework) in such high-consequence environments can be
disastrous. In this paper, we investigate how the safety of the post-grasp
motion can be considered during the pre-grasp approach phase, so that the
selected grasp is optimal in terms of applying minimum impact forces if a
collision occurs during a desired post-grasp manipulation. We build on the
methods of augmented robot-object dynamics models and "effective mass" and
propose a method for combining these concepts with modern grasp and trajectory
planners, to enable the robot to achieve a grasp which maximises the safety of
the post-grasp trajectory, by minimising potential collision forces. We
demonstrate the effectiveness of our approach through several experiments with
both simulated and real robots.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,959 | Nonparametric Kernel Density Estimation for Univariate Current Status Data | We derive estimators of the density of the event times of current status
data. The estimators are derived for the situations where the distribution of
the observation times is known and where this distribution is unknown. The
density estimators are constructed from kernel estimators of the density of
transformed current status data, which have a distribution similar to uniform
deconvolution data. Expansions of the expectation and variance as well as
asymptotic normality are derived. A reference density based bandwidth selection
method is proposed. A simulated example is presented.
| 0 | 0 | 1 | 1 | 0 | 0 |
18,960 | Non-Kähler Mirror Symmetry of the Iwasawa Manifold | We propose a new approach to the Mirror Symmetry Conjecture in a form
suitable to possibly non-Kähler compact complex manifolds whose canonical
bundle is trivial. We apply our methods by proving that the Iwasawa manifold
$X$, a well-known non-Kähler compact complex manifold of dimension $3$, is
its own mirror dual to the extent that its Gauduchon cone, replacing the
classical Kähler cone that is empty in this case, corresponds to what we call
the local universal family of essential deformations of $X$. These are obtained
by removing from the Kuranishi family the two "superfluous" dimensions of
complex parallelisable deformations that have a similar geometry to that of the
Iwasawa manifold. The remaining four dimensions are shown to have a clear
geometric meaning including in terms of the degeneration at $E_2$ of the
Frölicher spectral sequence. On the local moduli space of "essential" complex
structures, we obtain a canonical Hodge decomposition of weight $3$ and a
variation of Hodge structures, construct coordinates and Yukawa couplings while
implicitly proving a local Torelli theorem. On the metric side of the mirror,
we construct a variation of Hodge structures parametrised by a subset of the
complexified Gauduchon cone of the Iwasawa manifold using the sGG property of
all the small deformations of this manifold proved in earlier joint work of the
author with L. Ugarte. Finally, we define a mirror map linking the two
variations of Hodge structures and we highlight its properties.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,961 | Estimation and Inference for Moments of Ratios with Robustness against Large Trimming Bias | Empirical researchers often trim observations with small denominator A when
they estimate moments of the form E[B/A]. Large trimming is a common practice
to mitigate variance, but it incurs large trimming bias. This paper provides a
novel method of correcting large trimming bias. If a researcher is willing to
assume that the joint distribution between A and B is smooth, then a large
trimming bias may be estimated well. With the bias correction, we also develop
a valid and robust inference result for E[B/A].
| 0 | 0 | 0 | 1 | 0 | 0 |
18,962 | FPGA-based real-time 105-channel data acquisition platform for imaging system | In this paper, a real-time 105-channel data acquisition platform based on
FPGA for imaging will be implemented for mm-wave imaging systems. PC platform
is also realized for imaging results monitoring purpose. Mm-wave imaging
expands our vision by letting us see things under poor visibility conditions.
With this extended vision ability, a wide range of military imaging missions
would benefit, such as surveillance, precision targeting, navigation, and
rescue. Based on the previously designed imager modules, this project would go
on finishing the PCB design (both schematic and layout) of the following signal
processing system, consisting of Programmable Gain Amplifiers (PGA; 4 PGAs per ADC) and 16-channel Analog-to-Digital Converters (ADC; 7 ADCs in total).
Then the system verification would be performed on the Artix-7 35T Arty FPGA
with the development of proper control code to configure the ADCs and realize
the communication between the FPGA and the PC (through both UART and Ethernet).
For the verification part, a simple test on a breadboard with a simple analog
input (generated from a resistor divider) would first be performed. After the
PCB design is finished, the whole system would be tested again with a precise
reference and analog input.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,963 | Zn-induced in-gap electronic states in La214 probed by uniform magnetic susceptibility: relevance to the suppression of superconducting Tc | Substitution of isovalent non-magnetic defects, such as Zn, in CuO2 plane
strongly modifies the magnetic properties of strongly electron correlated hole
doped cuprate superconductors. The reason for enhanced uniform magnetic
susceptibility, χ, in Zn substituted cuprates is debatable. So far, the
observed magnetic behavior has been analyzed mainly in terms of two somewhat
contrasting scenarios, (a) that due to independent localized moments appearing
in the vicinity of Zn arising because of the strong electronic/magnetic
correlations present in the host compound and (b) that due to transfer of
quasiparticle spectral weight and creation of weakly localized low energy
electronic states associated with each Zn atom in place of an in-plane Cu. If
the second scenario is correct, one should expect a direct correspondence
between Zn induced suppression of superconducting transition temperature, Tc,
and the extent of the enhanced magnetic susceptibility at low temperature. In
this case, the low-T enhancement of χ would be due to weakly localized
quasiparticle states at low energy and these electronic states will be
precluded from taking part in Cooper pairing. We explore this second
possibility by analyzing the χ(T) data for La2-xSrxCu1-yZnyO4 with
different hole contents, p (= x), and Zn concentrations (y) in this paper.
Results of our analysis support this scenario.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,964 | Predicate Specialization for Definitional Higher-order Logic Programs | Higher-order logic programming is an interesting extension of traditional
logic programming that allows predicates to appear as arguments and variables
to be used where predicates typically occur. Higher-order characteristics are
indeed desirable but on the other hand they are also usually more expensive to
support. In this paper we propose a program specialization technique based on
partial evaluation that can be applied to a modest but useful class of
higher-order logic programs and can transform them into first-order programs
without introducing additional data structures. The resulting first-order
programs can be executed by conventional logic programming interpreters and
benefit from other optimizations that might be available. We provide an
implementation and experimental results that suggest the efficiency of the
transformation.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,965 | Metastability and bifurcation in superconducting nanorings | We describe an approach, based on direct numerical solution of the Usadel
equation, to finding stationary points of the free energy of superconducting
nanorings. We consider both uniform (equilibrium) solutions and the critical
droplets that mediate activated transitions between them. For the uniform
solutions, we compute the critical current as a function of the temperature,
thus obtaining a correction factor to Bardeen's 1962 interpolation formula. For
the droplets, we present a metastability chart that shows the activation energy
as a function of the temperature and current. A comparison of the activation
energy for a ring to experimental results for a wire connected to
superconducting leads reveals a discrepancy at large currents. We discuss
possible reasons for it. We also discuss the nature of the bifurcation point at
which the droplet merges with the uniform solution.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,966 | Origin of life in a digital microcosm | While all organisms on Earth descend from a common ancestor, there is no
consensus on whether the origin of this ancestral self-replicator was a one-off
event or whether it was only the final survivor of multiple origins. Here we
use the digital evolution system Avida to study the origin of self-replicating
computer programs. By using a computational system, we avoid many of the
uncertainties inherent in any biochemical system of self-replicators (while
running the risk of ignoring a fundamental aspect of biochemistry). We
generated the exhaustive set of minimal-genome self-replicators and analyzed
the network structure of this fitness landscape. We further examined the
evolvability of these self-replicators and found that the evolvability of a
self-replicator is dependent on its genomic architecture. We studied the
differential ability of replicators to take over the population when competed
against each other (akin to a primordial-soup model of biogenesis) and found
that the probability of a self-replicator out-competing the others is not
uniform. Instead, progenitor (most-recent common ancestor) genotypes are
clustered in a small region of the replicator space. Our results demonstrate
how computational systems can be used as test systems for hypotheses concerning
the origin of life.
| 1 | 1 | 0 | 0 | 0 | 0 |
18,967 | The equivalence of two tax processes | We introduce two models of taxation, the latent and natural tax processes,
which have both been used to represent loss-carry-forward taxation on the
capital of an insurance company. In the natural tax process, the tax rate is a
function of the current level of capital, whereas in the latent tax process,
the tax rate is a function of the capital that would have resulted if no tax
had been paid. Whereas up to now these two types of tax processes have been
treated separately, we show that, in fact, they are essentially equivalent.
This allows a unified treatment, translating results from one model to the
other. Significantly, we solve the question of existence and uniqueness for the
natural tax process, which is defined via an integral equation. Our results
clarify the existing literature on processes with tax.
| 0 | 0 | 0 | 0 | 0 | 1 |
18,968 | Space-Time Geostatistical Models with both Linear and Seasonal Structures in the Temporal Components | We provide a novel approach to model space-time random fields where the
temporal argument is decomposed into two parts. The former captures the linear
argument, which is related, for instance, to the annual evolution of the field.
The latter is instead a circular variable describing, for instance, monthly
observations. The basic intuition behind this construction is to consider a
random field defined over space (a compact set of the $d$-dimensional Euclidean
space) across time, which is considered as the product space $\mathbb{R} \times
\mathbb{S}^1$, with $\mathbb{S}^1$ being the unit circle. Under such framework,
we derive new parametric families of covariance functions. In particular, we
focus on two classes of parametric families. The former is parenthetical to the Gneiting class of covariance functions. The latter is instead obtained by
proposing a new Lagrangian framework for the space-time domain considered in
the manuscript. Our findings are illustrated through a real dataset of surface
air temperatures. We show that the incorporation of both temporal variables can
produce significant improvements in the predictive performances of the model.
We also discuss the extension of this approach for fields defined spatially on
a sphere, which allows one to model space-time phenomena over large portions of
planet Earth.
| 0 | 0 | 1 | 1 | 0 | 0 |
18,969 | Stability and Robust Regulation of Passive Linear Systems | We study the stability of coupled impedance passive regular linear systems
under power-preserving interconnections. We present new conditions for strong,
exponential, and non-uniform stability of the closed-loop system. We apply the
stability results to the construction of passive error feedback controllers for
robust output tracking and disturbance rejection for strongly stabilizable
passive systems. In the case of nonsmooth reference and disturbance signals we
present conditions for non-uniform rational and logarithmic rates of
convergence of the output. The results are illustrated with examples on
designing controllers for linear wave and heat equations, and on studying the
stability of a system of coupled partial differential equations.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,970 | Bayes Minimax Competitors of Preliminary Test Estimators in k Sample Problems | In this paper, we consider the estimation of a mean vector of a multivariate
normal population where the mean vector is suspected to be nearly equal to mean
vectors of $k-1$ other populations. As an alternative to the preliminary test
estimator based on the test statistic for testing hypothesis of equal means, we
derive empirical and hierarchical Bayes estimators which shrink the sample mean
vector toward a pooled mean estimator given under the hypothesis. The
minimaxity of these Bayesian estimators is shown, and their performances are
investigated by simulation.
| 0 | 0 | 1 | 1 | 0 | 0 |
18,971 | Non-orthogonal Multiple Access for High-reliable and Low-latency V2X Communications | In this paper, we consider a dense vehicular communication network where each
vehicle broadcasts its safety information to its neighborhood in each
transmission period. Such applications require low latency and high
reliability, and thus, we propose a non-orthogonal multiple access scheme to
reduce the latency and to improve the packet reception probability. In the
proposed scheme, the base station (BS) performs semi-persistent scheduling to optimize the
time scheduling and allocate frequency resources in a non-orthogonal manner
while the vehicles autonomously perform distributed power control. We formulate
the centralized scheduling and resource allocation problem as equivalent to a
multi-dimensional stable roommate matching problem, in which the users and
time/frequency resources are considered as disjoint sets of players to be
matched with each other. We then develop a novel rotation matching algorithm,
which converges to a q-exchange stable matching after a limited number of
iterations. Simulation results show that the proposed scheme outperforms the
traditional orthogonal multiple access scheme in terms of the latency and
reliability.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,972 | Cohomology monoids of monoids with coefficients in semimodules II | We relate the old and new cohomology monoids of an arbitrary monoid $M$ with
coefficients in semimodules over $M$, introduced in the author's previous
papers, to monoid and group extensions. More precisely, the old and new second
cohomology monoids describe Schreier extensions of semimodules by monoids, and
the new third cohomology monoid is related to a certain group extension
problem.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,973 | Lower Bounds for Higher-Order Convex Optimization | State-of-the-art methods in convex and non-convex optimization employ
higher-order derivative information, either implicitly or explicitly. We
explore the limitations of higher-order optimization and prove that even for
convex optimization, a polynomial dependence on the approximation guarantee and
higher-order smoothness parameters is necessary. As a special case, we show
Nesterov's accelerated cubic regularization method to be nearly tight.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,974 | Superconductivity in La1-xCexOBiSSe: carrier doping by mixed valence of Ce ions | We report the effects of Ce substitution on structural, electronic, and
magnetic properties of layered bismuth-chalcogenide La1-xCexOBiSSe (x = 0-0.9),
which are newly obtained in this study. Metallic conductivity was observed for
x > 0.1 because of electron carriers induced by mixed valence of Ce ions, as
revealed by bond valence sum calculation and magnetization measurements. Zero
resistivity and clear diamagnetic susceptibility were obtained for x = 0.2-0.6,
indicating the emergence of bulk superconductivity in these compounds.
A dome-shaped superconductivity phase diagram with the highest transition
temperature (Tc) of 3.1 K, which is slightly lower than that of F-doped
LaOBiSSe (Tc = 3.7 K), was established. The present study clearly shows that
the mixed valence of Ce ions can be utilized as an alternative approach for
electron-doping in layered bismuth-chalcogenides to induce superconductivity.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,975 | Gaussian Processes Over Graphs | We propose Gaussian processes for signals over graphs (GPG) using the apriori
knowledge that the target vectors lie over a graph. We incorporate this
information using a graph- Laplacian based regularization which enforces the
target vectors to have a specific profile in terms of graph Fourier transform
coeffcients, for example lowpass or bandpass graph signals. We discuss how the
regularization affects the mean and the variance in the prediction output. In
particular, we prove that the predictive variance of the GPG is strictly
smaller than the conventional Gaussian process (GP) for any non-trivial graph.
We validate our concepts by application to various real-world graph signals.
Our experiments show that the performance of the GPG is superior to GP for
small training data sizes and under noisy training.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,976 | Differences of Type I error rates for ANOVA and Multilevel-Linear-Models using SAS and SPSS for repeated measures designs | To derive recommendations on how to analyze longitudinal data, we examined
Type I error rates of Multilevel Linear Models (MLM) and repeated measures
Analysis of Variance (rANOVA) using SAS and SPSS. We performed a simulation with
the following specifications: To explore the effects of high numbers of
measurement occasions and small sample sizes on Type I error, measurement
occasions of m = 9 and 12 were investigated as well as sample sizes of n = 15,
20, 25 and 30. Effects of non-sphericity in the population on Type I error were
also inspected: 5,000 random samples were drawn from two populations containing
neither a within-subject nor a between-group effect. They were analyzed
including the most common options to correct rANOVA and MLM-results: The
Huynh-Feldt-correction for rANOVA (rANOVA-HF) and the Kenward-Roger-correction
for MLM (MLM-KR), which could help to correct progressive bias of MLM with an
unstructured covariance matrix (MLM-UN). Moreover, uncorrected rANOVA and MLM
assuming a compound symmetry covariance structure (MLM-CS) were also taken into
account. The results showed a progressive bias for MLM-UN for small samples
which was stronger in SPSS than in SAS. Moreover, an appropriate bias
correction for Type I error via rANOVA-HF and an insufficient correction by
MLM-UN-KR for n < 30 were found. These findings suggest MLM-CS or rANOVA if
sphericity holds and a correction of a violation via rANOVA-HF. If an analysis
requires MLM, SPSS yields more accurate Type I error rates for MLM-CS and SAS
yields more accurate Type I error rates for MLM-UN.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,977 | Efficient algorithms for Bayesian Nearest Neighbor Gaussian Processes | We consider alternate formulations of recently proposed hierarchical Nearest
Neighbor Gaussian Process (NNGP) models (Datta et al., 2016a) for improved
convergence, faster computing time, and more robust and reproducible Bayesian
inference. Algorithms are defined that improve CPU memory management and
exploit existing high-performance numerical linear algebra libraries.
Computational and inferential benefits are assessed for alternate NNGP
specifications using simulated datasets and remotely sensed light detection and
ranging (LiDAR) data collected over the US Forest Service Tanana Inventory Unit
(TIU) in a remote portion of Interior Alaska. The resulting data product is the
first statistically robust map of forest canopy for the TIU.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,978 | Variegation and space weathering on asteroid 21 Lutetia | During the flyby in 2010, the OSIRIS camera on-board Rosetta acquired
hundreds of high-resolution images of asteroid Lutetia's surface through a
range of narrow-band filters. While Lutetia appears very bland in the visible
wavelength range, Magrin et al. (2012) tentatively identified UV color
variations in the Baetica cluster, a group of relatively young craters close to
the north pole. As Lutetia remains a poorly understood asteroid, such color
variations may provide clues to the nature of its surface. We take the color
analysis one step further. First we orthorectify the images using a shape model
and improved camera pointing, then apply a variety of techniques (photometric
correction, principal component analysis) to the resulting color cubes. We
characterize variegation in the Baetica crater cluster at high spatial
resolution, identifying crater rays and small, fresh impact craters. We argue
that at least some of the color variation is due to space weathering, which
makes Lutetia's regolith redder and brighter.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,979 | Rokhlin dimension for compact quantum group actions | We show that, for a given compact or discrete quantum group $G$, the class of
actions of $G$ on C*-algebras is first-order axiomatizable in the logic for
metric structures. As an application, we extend the notion of Rokhlin property
for $G$-C*-algebras, introduced by Barlak, Szabó, and Voigt in the case when
$G$ is second countable and coexact, to an arbitrary compact quantum group $G$.
All the preservation and rigidity results for Rokhlin actions of second
countable coexact compact quantum groups obtained by Barlak, Szabó, and
Voigt are shown to hold in this general context. As a further application, we
extend the notion of equivariant order zero dimension for equivariant
*-homomorphisms, introduced in the classical setting by the first and third
authors, to actions of compact quantum groups. This allows us to define the
Rokhlin dimension of an action of a compact quantum group on a C*-algebra,
recovering the Rokhlin property as Rokhlin dimension zero. We conclude by
establishing a preservation result for finite nuclear dimension and finite
decomposition rank when passing to fixed point algebras and crossed products by
compact quantum group actions with finite Rokhlin dimension.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,980 | Parametric Identification Using Weighted Null-Space Fitting | In identification of dynamical systems, the prediction error method using a
quadratic cost function provides asymptotically efficient estimates under
Gaussian noise and additional mild assumptions, but in general it requires
solving a non-convex optimization problem. An alternative class of methods uses
a non-parametric model as intermediate step to obtain the model of interest.
Weighted null-space fitting (WNSF) belongs to this class. It is a weighted
least-squares method consisting of three steps. In the first step, a high-order
ARX model is estimated. In a second least-squares step, this high-order
estimate is reduced to a parametric estimate. In the third step, weighted least
squares is used to reduce the variance of the estimates. The method is flexible
in parametrization and suitable for both open- and closed-loop data. In this
paper, we show that WNSF provides estimates with the same asymptotic properties
as PEM with a quadratic cost function when the model orders are chosen
according to the true system. Also, simulation studies indicate that WNSF may
be competitive with state-of-the-art methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,981 | Learning Large-Scale Bayesian Networks with the sparsebn Package | Learning graphical models from data is an important problem with wide
applications, ranging from genomics to the social sciences. Nowadays datasets
often have upwards of thousands---sometimes tens or hundreds of thousands---of
variables and far fewer samples. To meet this challenge, we have developed a
new R package called sparsebn for learning the structure of large, sparse
graphical models with a focus on Bayesian networks. While there are many
existing software packages for this task, this package focuses on the unique
setting of learning large networks from high-dimensional data, possibly with
interventions. As such, the methods provided place a premium on scalability and
consistency in a high-dimensional setting. Furthermore, in the presence of
interventions, the methods implemented here achieve the goal of learning a
causal network from data. Additionally, the sparsebn package is fully
compatible with existing software packages for network analysis.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,982 | An Asymptotically Optimal Index Policy for Finite-Horizon Restless Bandits | We consider restless multi-armed bandit (RMAB) with a finite horizon and
multiple pulls per period. Leveraging the Lagrangian relaxation, we approximate
the problem with a collection of single arm problems. We then propose an
index-based policy that uses optimal solutions of the single arm problems to
index individual arms, and offer a proof that it is asymptotically optimal as
the number of arms tends to infinity. We also use simulation to show that this
index-based policy performs better than state-of-the-art heuristics in various
problem settings.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,983 | Effective holographic theory of charge density waves | We use Gauge/Gravity duality to write down an effective low energy
holographic theory of charge density waves. We consider a simple gravity model
which breaks translations spontaneously in the dual field theory in a
homogeneous manner, capturing the low energy dynamics of phonons coupled to
conserved currents. We first focus on the leading two-derivative action, which
leads to excited states with non-zero strain. We show that including subleading
quartic derivative terms leads to dynamical instabilities of AdS$_2$
translation invariant states and to stable phases breaking translations
spontaneously. We compute analytically the real part of the electric
conductivity. The model allows one to construct Lifshitz-like hyperscaling-violating quantum critical ground states breaking translations spontaneously.
At these critical points, the real part of the dc conductivity can be metallic
or insulating.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,984 | Interactive Certificates for Polynomial Matrices with Sub-Linear Communication | We develop and analyze new protocols to verify the correctness of various
computations on matrices over F[x], where F is a field. The properties we
verify concern an F[x]-module and therefore cannot simply rely on
previously-developed linear algebra certificates which work only for vector
spaces. Our protocols are interactive certificates, often randomized, and
featuring a constant number of rounds of communication between the prover and
verifier. We seek to minimize the communication cost so that the amount of data
sent during the protocol is significantly smaller than the size of the result
being verified, which can be useful when combining protocols or in some
multi-party settings. The main tools we use are reductions to existing linear
algebra certificates and a new protocol to verify that a given vector is in the
F[x]-linear span of a given matrix.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,985 | Cost Functions for Robot Motion Style | We focus on autonomously generating robot motion for day-to-day physical
tasks that is expressive of a certain style or emotion. Because we seek
generalization across task instances and task types, we propose to capture
style via cost functions that the robot can use to augment its nominal task
cost and task constraints in a trajectory optimization process. We compare two
approaches to representing such cost functions: a weighted linear combination
of hand-designed features, and a neural network parameterization operating on
raw trajectory input. For each cost type, we learn weights for each style from
user feedback. We contrast these approaches to a nominal motion across
different tasks and for different styles in a user study, and find that they
both perform on par with each other, and significantly outperform the baseline.
Each approach has its advantages: featurized costs require learning fewer
parameters and can perform better on some styles, but neural network
representations do not require expert knowledge to design features and could
even learn more complex, nuanced costs than an expert can easily design.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,986 | Revealing the basins of convergence in the planar equilateral restricted four-body problem | The planar equilateral restricted four-body problem where two of the
primaries have equal masses is used in order to determine the Newton-Raphson
basins of convergence associated with the equilibrium points. The parametric
variation of the position of the libration points is monitored when the value
of the mass parameter $m_3$ varies in predefined intervals. The regions on the
configuration $(x,y)$ plane occupied by the basins of attraction are revealed
using the multivariate version of the Newton-Raphson iterative scheme. The
correlations between the attracting domains of the equilibrium points and the
corresponding number of iterations needed for obtaining the desired accuracy
are also illustrated. We perform a thorough and systematic numerical
investigation by demonstrating how the dynamical parameter $m_3$ influences the
shape, the geometry and the degree of fractality of the converging regions. Our
numerical outcomes strongly indicate that the mass parameter is indeed one of
the most influential factors in this dynamical system.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,987 | The Eccentric Kozai-Lidov mechanism for Outer Test Particle | The secular approximation of hierarchical three-body systems has proven to be very useful in addressing many astrophysical systems, from
planets and stars to black holes. In such a system, two objects are on a tight
orbit, and the tertiary is on a much wider orbit. Here we study the dynamics of
a system by taking the tertiary mass to zero and solving the hierarchical three-body system up to the octupole level of approximation. We find rich dynamics that the outer orbit undergoes due to gravitational perturbations from the
inner binary. The nominal result of the precession of the nodes is mostly limited to the lowest order of approximation; however, when the octupole level of approximation is introduced, the system becomes chaotic, as expected, and the tertiary oscillates below and above 90°, similarly to the non-test-particle flip behavior (e.g., Naoz 2016). We provide the Hamiltonian of the system and
investigate the dynamics of the system from the quadrupole to the octupole
level of approximations. We also analyze the chaotic and quasi-periodic orbital
evolution by studying the surfaces of sections. Furthermore, including general
relativity, we showcase the long-term evolution of individual debris disk
particles under the influence of a far away interior eccentric planet. We show
that this dynamics can naturally result in retrograde objects and a puffy disk
after a long timescale evolution (few Gyr) for initially aligned configuration.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,988 | A more symmetric picture for Kasparov's KK-bifunctor | For C*-algebras $A$ and $B$, we generalize the notion of a quasihomomorphism
from $A$ to $B$, due to Cuntz, by considering quasihomomorphisms from some
C*-algebra $C$ to $B$ such that $C$ surjects onto $A$, and the two maps forming
a quasihomomorphism agree on the kernel of this surjection. Under an additional
assumption, the group of homotopy classes of such generalized
quasihomomorphisms coincides with $KK(A,B)$. This makes the definition of
Kasparov's bifunctor slightly more symmetric and gives more flexibility for
constructing elements of $KK$-groups. These generalized quasihomomorphisms can
be viewed as pairs of maps directly from $A$ (instead of various $C$'s), but
these maps need not be $*$-homomorphisms.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,989 | Spot dynamics in a reaction-diffusion model of plant root hair initiation | We study pattern formation in a 2-D reaction-diffusion (RD) sub-cellular
model characterizing the effect of a spatial gradient of a plant hormone
distribution on a family of G-proteins associated with root-hair (RH)
initiation in the plant cell Arabidopsis thaliana. The activation of these
G-proteins, known as the Rho of Plants (ROPs), by the plant hormone auxin, is
known to promote certain protuberances on root hair cells, which are crucial
for both anchorage and the uptake of nutrients from the soil. Our mathematical
model for the activation of ROPs by the auxin gradient is an extension of the
model of Payne and Grierson [PLoS ONE, 12(4), (2009)], and consists of a
two-component Schnakenberg-type RD system with spatially heterogeneous
coefficients on a 2-D domain. The nonlinear kinetics in this RD system model
the nonlinear interactions between the active and inactive forms of ROPs. By
using a singular perturbation analysis to study 2-D localized spatial patterns
of active ROPs, it is shown that the spatial variations in the nonlinear
reaction kinetics, due to the auxin gradient, lead to a slow spatial alignment
of the localized regions of active ROPs along the longitudinal midline of the
plant cell. Numerical bifurcation analysis, together with time-dependent
numerical simulations of the RD system are used to illustrate both 2-D
localized patterns in the model, and the spatial alignment of localized
structures.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,990 | Incorporating Covariates into Integrated Factor Analysis of Multi-View Data | In modern biomedical research, it is ubiquitous to have multiple data sets
measured on the same set of samples from different views (i.e., multi-view
data). For example, in genetic studies, multiple genomic data sets at different
molecular levels or from different cell types are measured for a common set of
individuals to investigate genetic regulation. Integration and reduction of
multi-view data have the potential to leverage information in different data
sets, and to reduce the magnitude and complexity of data for further
statistical analysis and interpretation. In this paper, we develop a novel
statistical model, called supervised integrated factor analysis (SIFA), for
integrative dimension reduction of multi-view data while incorporating
auxiliary covariates. The model decomposes data into joint and individual
factors, capturing the joint variation across multiple data sets and the
individual variation specific to each data set, respectively. Moreover, both joint
and individual factors are partially informed by auxiliary covariates via
nonparametric models. We devise a computationally efficient
Expectation-Maximization (EM) algorithm to fit the model under some
identifiability conditions. We apply the method to the Genotype-Tissue
Expression (GTEx) data, and provide new insights into the variation
decomposition of gene expression in multiple tissues. Extensive simulation
studies and an additional application to a pediatric growth study demonstrate
the advantage of the proposed method over competing methods.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,991 | Batch-normalized joint training for DNN-based distant speech recognition | Improving distant speech recognition is a crucial step towards flexible
human-machine interfaces. Current technology, however, still exhibits a lack of
robustness, especially when adverse acoustic conditions are met. Despite the
significant progress made in the last years on both speech enhancement and
speech recognition, one potential limitation of state-of-the-art technology
lies in composing modules that are not well matched because they are not
trained jointly. To address this concern, a promising approach consists of
concatenating a speech enhancement and a speech recognition deep neural network
and jointly updating their parameters as if they were within a single bigger
network. Unfortunately, joint training can be difficult because the output
distribution of the speech enhancement system may change substantially during
the optimization procedure. The speech recognition module would have to deal
with an input distribution that is non-stationary and unnormalized. To mitigate
this issue, we propose a joint training approach based on a fully
batch-normalized architecture. Experiments, conducted using different datasets,
tasks and acoustic conditions, revealed that the proposed framework
significantly outperforms other competitive solutions, especially in challenging
environments.
| 1 | 0 | 0 | 0 | 0 | 0 |
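The batch normalization that the abstract's joint training builds on can be summarized in a few lines: each activation is normalized over the mini-batch, then rescaled and shifted by learnable parameters. A minimal pure-Python sketch (the gamma, beta, and eps values are illustrative defaults, not taken from the paper):

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a list of scalar activations over the batch dimension."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    # zero-mean, unit-variance, then learnable scale and shift
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
print([round(x, 3) for x in out])  # [-1.342, -0.447, 0.447, 1.342]
```

Applied throughout both concatenated networks, this keeps the recognition module's input distribution stationary even as the enhancement module's parameters change during joint optimization.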
18,992 | Learning Word Embeddings from the Portuguese Twitter Stream: A Study of some Practical Aspects | This paper describes a preliminary study for producing and distributing a
large-scale database of embeddings from the Portuguese Twitter stream. We start
by experimenting with a relatively small sample and focusing on three
challenges: volume of training data, vocabulary size and intrinsic evaluation
metrics. Using a single GPU, we were able to scale up vocabulary size from 2048
words embedded and 500K training examples to 32768 words over 10M training
examples while keeping a stable validation loss and an approximately linear trend
in training time per epoch. We also observed that using less than 50% of the
available training examples for each vocabulary size might result in
overfitting. Results on intrinsic evaluation show promising performance for a
vocabulary size of 32768 words. Nevertheless, intrinsic evaluation metrics
suffer from over-sensitivity to their corresponding cosine similarity
thresholds, indicating that a wider range of metrics need to be developed to
track progress.
| 1 | 0 | 0 | 0 | 0 | 0 |
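The intrinsic evaluation metrics mentioned above typically score word pairs by cosine similarity against a threshold, which is why threshold sensitivity matters. A hypothetical sketch (the toy 3-d vectors and the 0.7 threshold are invented for illustration; real embeddings have hundreds of dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

emb = {  # hypothetical Portuguese word embeddings, made up for this sketch
    "bom":   [0.9, 0.1, 0.0],
    "ótimo": [0.8, 0.2, 0.1],
    "mau":   [-0.7, 0.1, 0.2],
}

threshold = 0.7  # evaluation results are over-sensitive to this choice
related = cosine(emb["bom"], emb["ótimo"]) > threshold    # True
unrelated = cosine(emb["bom"], emb["mau"]) > threshold    # False
print(related, unrelated)
```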
18,993 | A temperate exo-Earth around a quiet M dwarf at 3.4 parsecs | The combination of high-contrast imaging and high-dispersion spectroscopy,
which has successfully been used to detect the atmosphere of a giant planet, is
one of the most promising potential probes of the atmosphere of Earth-size
worlds. The forthcoming generation of extremely large telescopes (ELTs) may
obtain sufficient contrast with this technique to detect O$_2$ in the
atmosphere of those worlds that orbit low-mass M dwarfs. This is strong
motivation to carry out a census of planets around cool stars for which
habitable zones can be resolved by ELTs, i.e. for M dwarfs within $\sim$5
parsecs. Our HARPS survey has been a major contributor to that sample of nearby
planets. Here we report on our radial velocity observations of Ross 128
(Proxima Virginis, GJ447, HIP 57548), an M4 dwarf just 3.4 parsec away from our
Sun. This source hosts an exo-Earth with a projected mass $m \sin i = 1.35
M_\oplus$ and an orbital period of 9.9 days. Ross 128 b receives $\sim$1.38
times as much flux as Earth receives from the Sun, and its equilibrium temperature
ranges between 269 K for an Earth-like albedo and 213 K for a Venus-like
albedo. Recent studies place it close to the inner edge of the conventional
habitable zone. An 80-day long light curve from K2 campaign C01 demonstrates
that Ross 128 b does not transit. Together with the All Sky Automated Survey
(ASAS) photometry and spectroscopic activity indices, the K2 photometry shows
that Ross 128 rotates slowly and has weak magnetic activity. In a habitability
context, this makes survival of its atmosphere against erosion more likely.
Ross 128 b is the second closest known exo-Earth, after Proxima Centauri b (1.3
parsec), and the closest temperate planet known around a quiet star. The 15 mas
planet-star angular separation at maximum elongation will be resolved by ELTs
($>$ 3$\lambda/D$) in the optical bands of O$_2$.
| 0 | 1 | 0 | 0 | 0 | 0 |
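The quoted equilibrium temperatures follow from the stellar flux alone. The sketch below reproduces the 269 K and 213 K figures under assumed Bond albedos (0.367 for "Earth-like", 0.75 for "Venus-like"); these albedo values, and the 278.6 K zero-albedo equilibrium temperature at Earth's insolation, are illustrative assumptions, not numbers from the paper:

```python
T_EQ_EARTH_FLUX = 278.6  # K, zero-albedo equilibrium temperature at Earth's flux (assumed)
FLUX_RATIO = 1.38        # stellar flux relative to Earth's, from the abstract

def equilibrium_temperature(flux_ratio: float, bond_albedo: float) -> float:
    """T_eq = 278.6 K * S^(1/4) * (1 - A)^(1/4) for a fast-rotating planet."""
    return T_EQ_EARTH_FLUX * flux_ratio ** 0.25 * (1.0 - bond_albedo) ** 0.25

t_earth_like = equilibrium_temperature(FLUX_RATIO, 0.367)  # ~269 K
t_venus_like = equilibrium_temperature(FLUX_RATIO, 0.75)   # ~213 K
print(round(t_earth_like), round(t_venus_like))
```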
18,994 | Manifold Based Low-rank Regularization for Image Restoration and Semi-supervised Learning | Low-rank structures play an important role in recent advances on many problems
in image science and data science. As a natural extension of low-rank
structures for data with nonlinear structures, the concept of the
low-dimensional manifold structure has been considered in many data processing
problems. Inspired by this concept, we consider a manifold based low-rank
regularization as a linear approximation of manifold dimension. This
regularization is less restricted than the global low-rank regularization, and
thus enjoys more flexibility in handling data with nonlinear structures. As
applications, we apply the proposed regularization to classical inverse
problems in image sciences and data sciences including image inpainting, image
super-resolution, X-ray computer tomography (CT) image reconstruction and
semi-supervised learning. We conduct intensive numerical experiments in several
image restoration problems and a semi-supervised learning problem of
classifying handwritten digits using the MINST data. Our numerical tests
demonstrate the effectiveness of the proposed methods and illustrate that the
new regularization methods produce outstanding results by comparing with many
existing methods.
| 1 | 0 | 1 | 0 | 0 | 0 |
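Global low-rank regularization, which the manifold-based variant above generalizes, is commonly enforced through singular-value soft-thresholding, the proximal operator of the nuclear norm. A generic sketch (the test matrix and the threshold tau are illustrative choices, not the paper's model):

```python
import numpy as np

def svt(M, tau):
    """Shrink the singular values of M by tau, zeroing those below it."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

M = np.outer([1.0, 2.0], [3.0, 4.0]) + 0.01 * np.eye(2)  # nearly rank-1 matrix
low_rank = svt(M, tau=0.5)
print(np.linalg.matrix_rank(low_rank, tol=1e-6))  # → 1: small singular value removed
```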
18,995 | History-aware Autonomous Exploration in Confined Environments using MAVs | Many scenarios require a robot to be able to explore its 3D environment
online without human supervision. This is especially relevant for inspection
tasks and search and rescue missions. To solve this high-dimensional path
planning problem, sampling-based exploration algorithms have proven successful.
However, these do not necessarily scale well to larger environments or spaces
with narrow openings. This paper presents a 3D exploration planner based on the
principles of Next-Best Views (NBVs). In this approach, a Micro-Aerial Vehicle
(MAV) equipped with a limited field-of-view depth sensor randomly samples its
configuration space to find promising future viewpoints. In order to obtain
high sampling efficiency, our planner maintains and uses a history of visited
places, and locally optimizes the robot's orientation with respect to
unobserved space. We evaluate our method in several simulated scenarios, and
compare it against a state-of-the-art exploration algorithm. The experiments
show substantial improvements in exploration time ($2\times$ faster),
computation time, and path length, and advantages in handling difficult
situations such as escaping dead-ends (up to $20\times$ faster). Finally, we
validate the online capability of our algorithm on a computationally constrained
real-world MAV.
| 1 | 0 | 0 | 0 | 0 | 0 |
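The Next-Best-View principle can be stated compactly: sample candidate viewpoints, score each by how much unobserved space its sensor would cover, and move to the best one. A toy 2-D sketch (the grid, square sensor footprint, and scoring are drastic simplifications of the paper's 3-D frustum model, invented for illustration):

```python
import random

random.seed(0)
# right half of a 10x10 grid is still unobserved
unobserved = {(x, y) for x in range(10) for y in range(10) if x >= 5}

def gain(viewpoint, radius=3):
    """Number of unobserved cells within a square sensor footprint."""
    vx, vy = viewpoint
    return sum(1 for (x, y) in unobserved
               if abs(x - vx) <= radius and abs(y - vy) <= radius)

# randomly sample the configuration space, pick the most informative view
candidates = [(random.randrange(10), random.randrange(10)) for _ in range(20)]
best = max(candidates, key=gain)
print(best, gain(best))
```

The paper's planner additionally biases this sampling with a history of visited places and locally optimizes the robot's heading, which a uniform sampler like the one above lacks.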
18,996 | A factor-model approach for correlation scenarios and correlation stress-testing | In 2012, JPMorgan accumulated a USD 6.2 billion loss on a credit derivatives
portfolio, the so-called "London Whale", partly as a consequence of
de-correlations of non-perfectly correlated positions that were supposed to
hedge each other. Motivated by this case, we devise a factor model for
correlations that allows for scenario-based stress testing of correlations. We
derive a number of analytical results related to a portfolio of homogeneous
assets. Using the concept of Mahalanobis distance, we show how to identify
adverse scenarios of correlation risk. In addition, we demonstrate how
correlation and volatility stress tests can be combined. As an example, we
apply the factor-model approach to the "London Whale" portfolio and determine
the value-at-risk impact from correlation changes. Since our findings are
particularly relevant for large portfolios, where even small correlation
changes can have a large impact, a further application would be to stress test
portfolios of central counterparties, which are of systemically relevant size.
| 0 | 0 | 0 | 0 | 0 | 1 |
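The Mahalanobis distance mentioned in the abstract ranks correlation scenarios by how far they lie from the baseline, given the covariance of the correlation estimates. A minimal sketch with made-up numbers (the scenario vector, mean, and covariance are invented, not the paper's calibration):

```python
import math

def mahalanobis_2d(x, mu, cov):
    """Mahalanobis distance for a 2-D vector, inverting the 2x2 covariance by hand."""
    a, b = cov[0]
    c, d = cov[1]
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    diff = [x[0] - mu[0], x[1] - mu[1]]
    # quadratic form diff^T cov^{-1} diff
    q = sum(diff[i] * inv[i][j] * diff[j] for i in range(2) for j in range(2))
    return math.sqrt(q)

mu = [0.6, 0.6]                       # baseline correlations of two asset pairs
cov = [[0.01, 0.005], [0.005, 0.01]]  # sampling covariance of the estimates
stress = [0.2, 0.9]                   # de-correlation scenario, "London Whale"-style
print(round(mahalanobis_2d(stress, mu, cov), 2))  # ≈ 7.02: a very implausible move
```

Large distances flag scenarios that are statistically extreme relative to historically observed correlation moves, which is how adverse scenarios are identified.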
18,997 | Topology data analysis of critical transitions in financial networks | We develop a topology data analysis-based method to detect early signs for
critical transitions in financial data. From the time-series of multiple stock
prices, we build time-dependent correlation networks, which exhibit topological
structures. We compute the persistent homology associated to these structures
in order to track the changes in topology when approaching a critical
transition. As a case study, we investigate a portfolio of stocks during a
period prior to the US financial crisis of 2007-2008, and show the presence of
early signs of the critical transition.
| 0 | 1 | 1 | 0 | 0 | 0 |
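The first steps of such a pipeline, converting correlations into a metric and tracking topology along a filtration, can be sketched for dimension zero (connected components) with a union-find structure. The correlation values below are invented for illustration; real analyses also track one-dimensional loops with a persistent-homology library:

```python
import math

corr = {  # pairwise correlations between four hypothetical stocks
    (0, 1): 0.9, (0, 2): 0.2, (0, 3): 0.1,
    (1, 2): 0.3, (1, 3): 0.2, (2, 3): 0.8,
}

# standard correlation distance: d_ij = sqrt(2 * (1 - rho_ij))
edges = sorted((math.sqrt(2 * (1 - r)), i, j) for (i, j), r in corr.items())

parent = list(range(4))
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

merge_scales = []  # H0 "death" scales: thresholds at which components merge
for d, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        merge_scales.append(round(d, 3))

print(merge_scales)  # [0.447, 0.632, 1.183]: three merges leave one component
```

Shifts in such merge scales over a sliding window are the kind of topological change one would monitor when looking for early signs of a critical transition.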
18,998 | Construction of exact constants of motion and effective models for many-body localized systems | One of the defining features of many-body localization is the presence of
extensively many quasi-local conserved quantities. These constants of motion
constitute a corner-stone to an intuitive understanding of much of the
phenomenology of many-body localized systems arising from effective
Hamiltonians. They may be seen as local magnetization operators smeared out by
a quasi-local unitary. However, accurately identifying such constants of motion
remains a challenging problem. Current numerical constructions often capture
the conserved operators only approximately, restricting a conclusive
understanding of many-body localization. In this work, we use methods from the
theory of quantum many-body systems out of equilibrium to establish a new
approach for finding a complete set of exact constants of motion which are in
addition guaranteed to represent Pauli-$z$ operators. By this we are able to
construct and investigate the proposed effective Hamiltonian using exact
diagonalization. Hence, our work provides an important tool expected to further
boost inquiries into the breakdown of transport due to quenched disorder.
| 0 | 1 | 0 | 0 | 0 | 0 |
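The effective Hamiltonians referred to above are diagonal in the exact constants of motion (the Pauli-z "l-bits"), so their spectrum can be enumerated classically without diagonalizing a matrix. A toy sketch with assumed couplings (the field and coupling values are invented; in a real MBL system the couplings decay exponentially with distance):

```python
from itertools import product

h = [0.3, -0.5, 0.1]                           # effective fields (assumed values)
J = {(0, 1): 0.2, (0, 2): 0.05, (1, 2): -0.1}  # pairwise l-bit couplings (assumed)

def energy(spins):
    """Classical energy of one tau^z configuration: sum h_i s_i + sum J_ij s_i s_j."""
    e = sum(hi * s for hi, s in zip(h, spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

# all 2^3 eigenvalues come from enumerating spin configurations
spectrum = sorted(energy(s) for s in product((-1, 1), repeat=3))
print(len(spectrum))
```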
18,999 | Toric manifolds over cyclohedra | We study the action of the dihedral group on the (equivariant) cohomology of
the toric manifolds associated with cycle graphs.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,000 | Use of Genome Information-Based Potentials to Characterize Human Adaptation | As a living information and communications system, the genome encodes
patterns in single nucleotide polymorphisms (SNPs) reflecting human adaptation
that optimizes population survival in differing environments. This paper
mathematically models environmentally induced adaptive forces that quantify
changes in the distribution of SNP frequencies between populations. We make
direct connections between biophysical methods (e.g. minimizing genomic free
energy) and concepts in population genetics. Our unbiased computer program
scanned a large set of SNPs in the major histocompatibility complex region, and
flagged an altitude dependency on a SNP associated with response to oxygen
deprivation. The statistical power of our double-blind approach is demonstrated
in the flagging of mathematical functional correlations of SNP
information-based potentials in multiple populations with specific
environmental parameters. Furthermore, our approach provides insights for new
discoveries on the biology of common variants. This paper demonstrates the
power of biophysical modeling of population diversity for better understanding
genome-environment interactions in biological phenomena.
| 0 | 0 | 0 | 0 | 1 | 0 |