ID | TITLE | ABSTRACT | Computer Science | Physics | Mathematics | Statistics | Quantitative Biology | Quantitative Finance
---|---|---|---|---|---|---|---|---|
18,101 | A new, large-scale map of interstellar reddening derived from HI emission | We present a new map of interstellar reddening, covering the 39\% of the sky
with low {\rm HI} column densities ($N_{\rm HI} < 4\times10^{20}\,\rm cm^{-2}$
or $E(B-V)\approx 45\rm\, mmag$) at $16\overset{'}{.}1$ resolution, based on
all-sky observations of Galactic HI emission by the HI4PI Survey. In this low
column density regime, we derive a characteristic value of $N_{\rm HI}/E(B-V) =
8.8\times10^{21}\, \rm\, cm^{2}\, mag^{-1}$ for gas with $|v_{\rm LSR}| <
90\,\rm km\, s^{-1}$ and find no significant reddening associated with gas at
higher velocities. We compare our HI-based reddening map with the Schlegel,
Finkbeiner, and Davis (1998, SFD) reddening map and find them consistent to
within a scatter of $\simeq 5\,\rm mmag$. Further, the differences between our
map and the SFD map are in excellent agreement with the low resolution
($4\overset{\circ}{.}5$) corrections to the SFD map derived by Peek and Graves
(2010) based on observed reddening toward passive galaxies. We therefore argue
that our HI-based map provides the most accurate interstellar reddening
estimates in the low column density regime to date. Our reddening map is made
publicly available (this http URL).
| 0 | 1 | 0 | 0 | 0 | 0 |
18,102 | Seasonal modulation of seismicity: the competing/collaborative effect of the snow and ice load on the lithosphere | Seasonal patterns associated with stress modulation, as evidenced by
earthquake occurrence, have been detected in regions characterized by present
day mountain building and glacial retreat in the Northern Hemisphere. In the
Himalaya and the Alps, seismicity peaks in spring and summer; the opposite
behaviour is observed in the Apennines. This diametrical behaviour, confirmed
by recent strong earthquakes, correlates well with the dominant tectonic
regime: a peak in spring and summer in shortening areas, a peak in fall and
winter in extensional areas. The analysis of the seasonal effect is extended to
several shortening (e.g. the Zagros and Caucasus) and extensional regions, with
counter-examples drawn from regions where no seasonal modulation is expected
(e.g. the Tropical Atlantic Ridge). This study generalizes to different seismotectonic
settings the early observations made about short-term (seasonal) and long-term
(secular) modulation of seismicity and confirms, with some statistical
significance, that snow and ice thaw may cause crustal deformations that
modulate the occurrence of major earthquakes.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,103 | Existence and nonexistence of positive solutions to some fully nonlinear equation in one dimension | In this paper, we consider the existence (and nonexistence) of solutions to
\[
-\mathcal{M}_{\lambda,\Lambda}^\pm (u'') + V(x) u = f(u) \quad {\rm in} \
\mathbf{R}
\] where $\mathcal{M}_{\lambda,\Lambda}^+$ and
$\mathcal{M}_{\lambda,\Lambda}^-$ denote the Pucci operators with $0< \lambda
\leq \Lambda < \infty$, $V(x)$ is a bounded function, $f(s)$ is a continuous
function and its typical example is a power-type nonlinearity $f(s)
=|s|^{p-1}s$ $(p>1)$. In particular, we are interested in positive solutions
which decay at infinity, and the existence (and nonexistence) of such solutions
is proved.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,104 | Shiba Bound States across the mobility edge in doped InAs nanowires | We present a study of Andreev Quantum Dots (QDots) fabricated with
small-diameter (30 nm) Si-doped InAs nanowires where the Fermi level can be
tuned across a mobility edge separating localized states from delocalized
states. The transition to the insulating phase is identified by a drop in the
amplitude and width of the excited levels and is found to have remarkable
consequences on the spectrum of superconducting SubGap Resonances (SGRs). While
at deeply localized levels, only quasiparticle co-tunneling is observed, for
slightly delocalized levels, Shiba bound states form and a parity changing
quantum phase transition is identified by a crossing of the bound states at
zero energy. Finally, in the metallic regime, single Andreev resonances are
observed.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,105 | The Helsinki Neural Machine Translation System | We introduce the Helsinki Neural Machine Translation system (HNMT) and describe
how it is applied in the news translation task at WMT 2017, where it ranked first in
both the human and automatic evaluations for English--Finnish. We discuss the
success of English--Finnish translations and the overall advantage of NMT over
a strong SMT baseline. We also discuss our submissions for English--Latvian,
English--Chinese and Chinese--English.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,106 | Spatial distribution of nuclei in progressive nucleation: modeling and application | Phase transformations ruled by non-simultaneous nucleation and growth do not
lead to a random distribution of nuclei. Since nucleation is only allowed in the
untransformed portion of space, positions of nuclei are correlated. In this
article an analytical approach is presented for computing the pair-correlation
function of nuclei in progressive nucleation. This quantity is further employed
for characterizing the spatial distribution of nuclei through the nearest
neighbor distribution function. The modeling is developed for nucleation in 2D
space with a power growth law, and it is applied to describe electrochemical
nucleation where correlation effects are significant. Comparison with both
computer simulations and experimental data lends support to the model which
gives insights into the transition from Poissonian to correlated nearest
neighbor probability density.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,107 | Finite-time Guarantees for Byzantine-Resilient Distributed State Estimation with Noisy Measurements | This work considers resilient, cooperative state estimation in unreliable
multi-agent networks. A network of agents aims to collaboratively estimate the
value of an unknown vector parameter, while an {\em unknown} subset of agents
suffer Byzantine faults. Faulty agents malfunction arbitrarily and may send out
{\em highly unstructured} messages to other agents in the network. As opposed
to fault-free networks, reaching agreement in the presence of Byzantine faults
is far from trivial. In this paper, we propose a computationally-efficient
algorithm that is provably robust to Byzantine faults. At each iteration of the
algorithm, a good agent (1) performs a gradient descent update based on noisy
local measurements, (2) exchanges its update with other agents in its
neighborhood, and (3) robustly aggregates the received messages using
coordinate-wise trimmed means. Under mild technical assumptions, we establish
that good agents learn the true parameter asymptotically in the almost sure
sense. We further complement our analysis by proving a (high-probability) {\em
finite-time} convergence rate that encapsulates network characteristics.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,108 | Constraining Polarized Foregrounds for EOR Experiments II: Polarization Leakage Simulations in the Avoidance Scheme | A critical challenge in the observation of the redshifted 21-cm line is its
separation from bright Galactic and extragalactic foregrounds. In particular,
the instrumental leakage of polarized foregrounds, which undergo significant
Faraday rotation as they propagate through the interstellar medium, may
harmfully contaminate the 21-cm power spectrum. We develop a formalism to
describe the leakage due to instrumental widefield effects in visibility-based
power spectra measured with redundant arrays, extending the delay-spectrum
approach presented in Parsons et al. (2012). We construct polarized sky models
and propagate them through the instrument model to simulate realistic full-sky
observations with the Precision Array to Probe the Epoch of Reionization. We
find that the leakage due to a population of polarized point sources is
expected to be higher than diffuse Galactic polarization at any $k$ mode for a
30~m reference baseline. For the same reference baseline, a foreground-free
window at $k > 0.3 \, h$~Mpc$^{-1}$ can be defined in terms of leakage from
diffuse Galactic polarization even under the most pessimistic assumptions. If
measurements of polarized foreground power spectra or a model of polarized
foregrounds are given, our method is able to predict the polarization leakage
in actual 21-cm observations, potentially enabling its statistical subtraction
from the measured 21-cm power spectrum.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,109 | Collision Selective Visual Neural Network Inspired by LGMD2 Neurons in Juvenile Locusts | For autonomous robots in dynamic environments shared with humans, it is vital
to detect impending collisions quickly and robustly. The biological visual
systems evolved over millions of years may provide us efficient solutions for
collision detection in complex environments. In the visual brain of locusts,
two Lobula Giant Movement Detectors, i.e. LGMD1 and LGMD2, have been identified
which respond vigorously to looming objects with high firing rates. Compared to
LGMD1, LGMD2 matures early in juvenile locusts, with specific selectivity to
dark objects moving against a bright background in depth while not responding
to light objects embedded in a dark background - a situation similar to the one
ground vehicles and robots face. However, little work has been done on
modeling LGMD2, let alone its potential in robotics and other vision-based
applications. In this article, we propose a novel way of modeling the LGMD2 neuron,
with biased ON and OFF pathways splitting visual streams into parallel channels
encoding brightness increments and decrements separately to fulfill its
selectivity. Moreover, we apply a biophysical mechanism of spike frequency
adaptation to shape the looming selectivity in such a collision-detecting
neuron model. The proposed visual neural network has been tested with
systematic experiments, challenged against synthetic and real physical stimuli,
as well as image streams from the sensor of a miniature robot. The results
demonstrated that this framework is able to detect looming dark objects embedded
in bright backgrounds selectively, which makes it ideal for ground mobile
platforms. The robotic experiments also showed its robustness in collision
detection - it performed well for near range navigation in an arena with many
obstacles. Its enhanced collision selectivity to dark approaching objects
versus receding and translating ones has also been verified via systematic
experiments.
| 1 | 0 | 0 | 0 | 1 | 0 |
18,110 | Multifractal invariant measures in expanding piecewise linear coupled maps | We analyze invariant measures of two coupled piecewise linear and everywhere
expanding maps on the synchronization manifold. We observe that though the
individual maps have simple and smooth functions as their stationary densities,
they become multifractal as soon as two of them are coupled nonlinearly even
with a small coupling. For some maps, the multifractal spectrum seems to be
robust to the coupling or map parameters, while for some other maps there is a
substantial variation. The origin of the multifractal spectrum here is
intriguing as it does not seem to conform to the existing theory of
multifractal functions.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,111 | Virtually free finite-normal-subgroup-free groups are strongly verbally closed | Any virtually free group $H$ containing no non-trivial finite normal subgroup
(e.g., the infinite dihedral group) is a retract of any finitely generated
group containing $H$ as a verbally closed subgroup.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,112 | Stochastic Geometry-based Comparison of Secrecy Enhancement Techniques in D2D Networks | This letter presents a performance comparison of two popular secrecy
enhancement techniques in wireless networks: (i) creating guard zones by
restricting transmissions of legitimate transmitters whenever any eavesdropper
is detected in their vicinity, and (ii) adding artificial noise to the
confidential messages to make it difficult for the eavesdroppers to decode
them. Focusing on a noise-limited regime, we use tools from stochastic geometry
to derive the secrecy outage probability at the eavesdroppers as well as the
coverage probability at the legitimate users for both these techniques. Using
these results, we derive a threshold on the density of the eavesdroppers below
which no secrecy enhancing technique is required to ensure a target secrecy
outage probability. For eavesdropper densities above this threshold, we
concretely characterize the regimes in which each technique outperforms the
other. Our results demonstrate that the guard zone technique is better when the
distances between the transmitters and their legitimate receivers are higher
than a certain threshold.
| 1 | 0 | 1 | 0 | 0 | 0 |
18,113 | Linear convergence of SDCA in statistical estimation | In this paper, we consider stochastic dual coordinate ascent (SDCA) {\em without}
a strong convexity or convexity assumption. We show that SDCA converges
linearly under mild conditions termed restricted strong convexity. This covers
a wide array of popular statistical models including Lasso, group Lasso, and
logistic regression with $\ell_1$ regularization, corrected Lasso and linear
regression with SCAD regularizer. This significantly improves previous
convergence results on SDCA for problems that are not strongly convex. As a
by-product, we derive a dual-free form of SDCA that can handle general
regularization terms, which is of interest in its own right.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,114 | Plug-and-Play Unplugged: Optimization Free Reconstruction using Consensus Equilibrium | Regularized inversion methods for image reconstruction are used widely due to
their tractability and ability to combine complex physical sensor models with
useful regularity criteria. Such methods motivated the recently developed
Plug-and-Play prior method, which provides a framework to use advanced
denoising algorithms as regularizers in inversion. However, the need to
formulate regularized inversion as the solution to an optimization problem
limits the possible regularity conditions and physical sensor models.
In this paper, we introduce Consensus Equilibrium (CE), which generalizes
regularized inversion to include a much wider variety of both forward
components and prior components without the need for either to be expressed
with a cost function. CE is based on the solution of a set of equilibrium
equations that balance data fit and regularity. In this framework, the problem
of MAP estimation in regularized inversion is replaced by the problem of
solving these equilibrium equations, which can be approached in multiple ways.
The key contribution of CE is to provide a novel framework for fusing
multiple heterogeneous models of physical sensors or models learned from data.
We describe the derivation of the CE equations and prove that the solution of
the CE equations generalizes the standard MAP estimate under appropriate
circumstances.
We also discuss algorithms for solving the CE equations, including ADMM with
a novel form of preconditioning and Newton's method. We give examples to
illustrate consensus equilibrium and the convergence properties of these
algorithms and demonstrate this method on some toy problems and on a denoising
example in which we use an array of convolutional neural network denoisers,
none of which is tuned to match the noise level in a noisy image but which in
consensus can achieve a better result than any of them individually.
| 1 | 0 | 1 | 0 | 0 | 0 |
18,115 | Using angular pair upweighting to improve 3D clustering measurements | Three dimensional galaxy clustering measurements provide a wealth of
cosmological information. However, obtaining spectra of galaxies is expensive,
and surveys often only measure redshifts for a subsample of a target galaxy
population. Provided that the spectroscopic data is representative, we argue
that angular pair upweighting should be used in these situations to improve the
3D clustering measurements. We present a toy model showing mathematically how
such a weighting can improve measurements, and provide a practical example of
its application using mocks created for the Baryon Oscillation Spectroscopic
Survey (BOSS). Our analysis of mocks suggests that, if an angular clustering
measurement is available over twice the area covered spectroscopically,
weighting gives a $\sim$10-20% reduction of the variance of the monopole
correlation function on the BAO scale.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,116 | Maximal entries of elements in certain matrix monoids | Let $L_u=\begin{bmatrix}1 & 0\\u & 1\end{bmatrix}$ and $R_v=\begin{bmatrix}1
& v\\0 & 1\end{bmatrix}$ be matrices in $SL_2(\mathbb Z)$ with $u, v\geq 1$.
Since the monoid generated by $L_u$ and $R_v$ is free, we can associate a depth
to each element based on its product representation. In the cases where $u=v=2$
and $u=v=3$, Bromberg, Shpilrain, and Vdovina determined the depth $n$ matrices
containing the maximal entry for each $n\geq 1$. By using ideas from our
previous work on $(u,v)$-Calkin-Wilf trees, we extend their results to any $u,
v\geq 1$ and in the process we recover the Fibonacci and some Lucas sequences.
As a consequence we obtain bounds which guarantee collision resistance on a
family of hashing functions based on $L_u$ and $R_v$.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,117 | Generative Modeling of Multimodal Multi-Human Behavior | This work presents a methodology for modeling and predicting human behavior
in settings with N humans interacting in highly multimodal scenarios (i.e.
where there are many possible highly-distinct futures). A motivating example
includes robots interacting with humans in crowded environments, such as
self-driving cars operating alongside human-driven vehicles or human-robot
collaborative bin packing in a warehouse. Our approach to model human behavior
in such uncertain environments is to model humans in the scene as nodes in a
graphical model, with edges encoding relationships between them. For each
human, we learn a multimodal probability distribution over future actions from
a dataset of multi-human interactions. Learning such distributions is made
possible by recent advances in the theory of conditional variational
autoencoders and deep learning approximations of probabilistic graphical
models. Specifically, we learn action distributions conditioned on interaction
history, neighboring human behavior, and candidate future agent behavior in
order to take into account response dynamics. We demonstrate the performance of
such a modeling approach in modeling basketball player trajectories, a highly
multimodal, multi-human scenario which serves as a proxy for many robotic
applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,118 | Deep Sets | We study the problem of designing models for machine learning tasks defined
on \emph{sets}. In contrast to the traditional approach of operating on
fixed-dimensional vectors, we consider objective functions defined on sets that are
invariant to permutations. Such problems are widespread, ranging from
estimation of population statistics \cite{poczos13aistats}, to anomaly
detection in piezometer data of embankment dams \cite{Jung15Exploration}, to
cosmology \cite{Ntampaka16Dynamical,Ravanbakhsh16ICML1}. Our main theorem
characterizes the permutation invariant functions and provides a family of
functions to which any permutation invariant objective function must belong.
This family of functions has a special structure which enables us to design a
deep network architecture that can operate on sets and which can be deployed on
a variety of scenarios including both unsupervised and supervised learning
tasks. We also derive the necessary and sufficient conditions for permutation
equivariance in deep models. We demonstrate the applicability of our method on
population statistic estimation, point cloud classification, set expansion, and
outlier detection.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,119 | Assortment Optimization under a Single Transition Model | In this paper, we consider a Markov chain choice model with a single
transition. In this model, customers arrive at each product with a certain
probability. If the arrived product is unavailable, then the seller can
recommend a subset of available products to the customer and the customer will
purchase one of the recommended products or choose not to purchase with certain
transition probabilities. The distinguishing features of the model are that the
seller can control which products to recommend depending on the arrived product
and that each customer either purchases a product or leaves the market after
one transition.
We study the assortment optimization problem under this model. Particularly,
we show that this problem is generally NP-Hard even if each product could only
transit to at most two products. Despite the complexity of the problem, we
provide polynomial time algorithms for several special cases, such as when the
transition probabilities are homogeneous with respect to the starting point, or
when each product can only transit to one other product. We also provide a
tight performance bound for revenue-ordered assortments. In addition, we
propose a compact mixed integer program formulation that can solve large
instances of this problem. Through extensive numerical experiments, we show
that the proposed algorithms can solve the problem efficiently and that the
obtained assortments can significantly improve the seller's revenue compared
with the Markov chain choice model.
| 1 | 0 | 1 | 0 | 0 | 0 |
18,120 | Deconstructing Type III | SAS introduced Type III methods to address difficulties in dummy-variable
models for effects of multiple factors and covariates. Type III methods are
widely used in practice; they are the default method in many statistical
computing packages. Type III sums of squares (SSs) are defined by an algorithm,
and an explicit mathematical formulation does not seem to exist. For that
reason, their properties have not been rigorously proven. Some that are widely
believed to be true are not always true. An explicit formulation is derived in
this paper. It is used as a basis to prove fundamental properties of Type III
estimable functions and SSs. It is shown that, in any given setting, Type III
effects include all estimable ANOVA effects, and that if all of an ANOVA effect
is estimable then the Type III SS tests it exactly. The setting for these
results is general, comprising linear models for the mean vector of a response
that include arbitrary sets of effects of factors and covariates.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,121 | The Digital Flynn Effect: Complexity of Posts on Social Media Increases over Time | Parents and teachers often express concern about the extensive use of social
media by youngsters. Some of them see emoticons, undecipherable initialisms and
loose grammar typical of social media as evidence of language degradation. In
this paper, we use a simple measure of text complexity to investigate how the
complexity of public posts on a popular social networking site changes over
time. We analyze a unique dataset that contains texts posted by 942,336 users
from a large European city across nine years. We show that the chosen
complexity measure is correlated with the academic performance of users: users
from high-performing schools produce more complex texts than users from
low-performing schools. We also find that the complexity of posts increases with
age. Finally, we demonstrate that overall language complexity of posts on the
social networking site is constantly increasing. We call this phenomenon the
digital Flynn effect. Our results may suggest that the worries about language
degradation are not warranted.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,122 | Epistemic Modeling with Justifications | Existing logical models do not adequately represent epistemic situations with
fallible justifications, e.g., Russell's Prime Minister example, though such
scenarios have long been at the center of epistemic studies. We introduce
justification epistemic models, JEM, which can handle such scenarios. JEM makes
justifications prime objects and draws a distinction between accepted and
knowledge-producing justifications; belief and knowledge become derived
notions. Furthermore, Kripke models can be viewed as special cases of JEMs with
additional assumptions of evidence insensitivity and common knowledge of the
model. We argue that JEM can be applied to a range of epistemic scenarios in
CS, AI, Game Theory, etc.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,123 | Emergent universal critical behavior of the 2D $N$-color Ashkin-Teller model in the presence of correlated disorder | We study the critical behavior of the 2D $N$-color Ashkin-Teller model in the
presence of random bond disorder whose correlations decay with the distance
$r$ as a power-law $r^{-a}$. We consider the case when the spins of different
colors sitting at the same site are coupled by the same bond and map this
problem onto the 2D system of $N/2$ flavors of interacting Dirac fermions in
the presence of correlated disorder. Using renormalization group we show that
for $N=2$, a "weakly universal" scaling behavior at the continuous transition
becomes universal with new critical exponents. For $N>2$, the first-order phase
transition is rounded by the correlated disorder and turns into a continuous
one.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,124 | Transport signatures of topological superconductivity in a proximity-coupled nanowire | We study the conductance of a junction between the normal and superconducting
segments of a nanowire, both of which are subjected to spin-orbit coupling and
an external magnetic field. We directly compare the transport properties of the
nanowire assuming two different models for the superconducting segment: one
where we put superconductivity by hand into the wire, and one where
superconductivity is induced through a tunneling junction with a bulk s-wave
superconductor. While these two models are equivalent at low energies and at
weak coupling between the nanowire and the superconductor, we show that there
are several interesting qualitative differences away from these two limits. In
particular, the tunneling model introduces an additional conductance peak at
the energy corresponding to the bulk gap of the parent superconductor. By
employing a combination of analytical methods at zero temperature and numerical
methods at finite temperature, we show that the tunneling model of the
proximity effect reproduces many more of the qualitative features that are seen
experimentally in such a nanowire system.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,125 | An Empirical Study of Mini-Batch Creation Strategies for Neural Machine Translation | Training of neural machine translation (NMT) models usually uses mini-batches
for efficiency purposes. During the mini-batched training process, it is
necessary to pad shorter sentences in a mini-batch to be equal in length to the
longest sentence therein for efficient computation. Previous work has noted
that sorting the corpus based on the sentence length before making mini-batches
reduces the amount of padding and increases the processing speed. However,
despite the fact that mini-batch creation is an essential step in NMT training,
widely used NMT toolkits implement disparate strategies for doing so, which
have not been empirically validated or compared. This work investigates
mini-batch creation strategies with experiments over two different datasets.
Our results suggest that the choice of a mini-batch creation strategy has a
large effect on NMT training and some length-based sorting strategies do not
always work well compared with simple shuffling.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,126 | Network analyses of 4D genome datasets automate detection of community-scale gene structure and plasticity | Chromosome conformation capture and Hi-C technologies provide gene-gene
proximity datasets of stationary cells, revealing chromosome territories,
topologically associating domains, and chromosome topology. Imaging of tagged
DNA sequences in live cells through the lac operator reporter system provides
dynamic datasets of chromosomal loci. Chromosome modeling explores the
mechanisms underlying 3D genome structure and dynamics. Here, we automate 4D
genome dataset analysis with network-based tools as an alternative to gene-gene
proximity statistics and visual structure determination. Temporal network
models and community detection algorithms are applied to 4D modeling of G1 in
budding yeast with transient crosslinking of $5\,\rm kb$ domains in the nucleolus,
analyzing datasets from four decades of transient binding timescales. Network
tools detect and track transient gene communities (clusters) within the
nucleolus, their size, number, persistence time, and frequency of gene
exchanges. An optimal, weak binding affinity is revealed that maximizes
community-scale plasticity whereby large communities persist, frequently
exchanging genes.
| 0 | 0 | 0 | 0 | 1 | 0 |
18,127 | Predicate Pairing for Program Verification | It is well-known that the verification of partial correctness properties of
imperative programs can be reduced to the satisfiability problem for
constrained Horn clauses (CHCs). However, state-of-the-art solvers for CHCs
(CHC solvers) based on predicate abstraction are sometimes unable to verify
satisfiability because they look for models that are definable in a given class
A of constraints, called A-definable models. We introduce a transformation
technique, called Predicate Pairing (PP), which is able, in many interesting
cases, to transform a set of clauses into an equisatisfiable set whose
satisfiability can be proved by finding an A-definable model, and hence can be
effectively verified by CHC solvers. We prove that, under very general
conditions on A, the unfold/fold transformation rules preserve the existence of
an A-definable model, i.e., if the original clauses have an A-definable model,
then the transformed clauses have an A-definable model. The converse does not
hold in general, and we provide suitable conditions under which the transformed
clauses have an A-definable model iff the original ones have an A-definable
model. Then, we present the PP strategy which guides the application of the
transformation rules with the objective of deriving a set of clauses whose
satisfiability can be proved by looking for A-definable models. PP introduces a
new predicate defined by the conjunction of two predicates together with some
constraints. We show through some examples that an A-definable model may exist
for the new predicate even if it does not exist for its defining atomic
conjuncts. We also present some case studies showing that PP plays a crucial
role in the verification of relational properties of programs (e.g., program
equivalence and non-interference). Finally, we perform an experimental
evaluation to assess the effectiveness of PP in increasing the power of CHC
solving.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,128 | Entanglement entropy and computational complexity of the Anderson impurity model out of equilibrium I: quench dynamics | We study the growth of entanglement entropy in density matrix renormalization
group calculations of the real-time quench dynamics of the Anderson impurity
model. We find that with an appropriate choice of basis, the entropy growth is
logarithmic in both the interacting and noninteracting single-impurity models.
The logarithmic entropy growth is understood from a noninteracting chain model
as a critical behavior separating regimes of linear growth and saturation of
entropy, corresponding respectively to an overlapping and gapped energy spectra
of the set of bath states. We find that with an appropriate choice of basis
(energy-ordered bath orbitals), logarithmic entropy growth is the generic
behavior of quenched impurity models. A noninteracting calculation of a
double-impurity Anderson model supports the conclusion in the multi-impurity
case. The logarithmic growth of entanglement entropy enables studies of quench
dynamics to very long times.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,129 | Be Careful What You Backpropagate: A Case For Linear Output Activations & Gradient Boosting | In this work, we show that saturating output activation functions, such as
the softmax, impede learning on a number of standard classification tasks.
Moreover, we present results showing that the utility of softmax does not stem
from the normalization, as some have speculated. In fact, the normalization
makes things worse. Rather, the advantage is in the exponentiation of error
gradients. This exponential gradient boosting is shown to speed up convergence
and improve generalization. To this end, we demonstrate faster convergence and
better performance on diverse classification tasks: image classification using
CIFAR-10 and ImageNet, and semantic segmentation using PASCAL VOC 2012. In the
latter case, using the state-of-the-art neural network architecture, the model
converged 33% faster with our method (roughly two days of training less) than
with the standard softmax activation, and with a slightly better performance to
boot.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,130 | A model theoretic Rieffel's theorem of quantum 2-torus | We defined a notion of quantum 2-torus $T_\theta$ in "Masanori Itai and Boris
Zilber, Notes on a model theory of quantum 2-torus $T_q^2$ for generic $q$,
arXiv:1503.06045v1 [math.LO]" and studied its model-theoretic properties. In this
note we associate quantum 2-tori $T_\theta$ with the structure over ${\mathbb
C}_\theta = ({\mathbb C}, +, \cdot, y = x^\theta),$ where $\theta \in {\mathbb
R} \setminus {\mathbb Q}$, and introduce the notion of geometric isomorphisms
between such quantum 2-tori.
We show that this notion is closely connected with the fundamental notion of
Morita equivalence of non-commutative geometry. Namely, we prove that the
quantum 2-tori $T_{\theta_1}$ and $T_{\theta_2}$ are Morita equivalent if and
only if $\theta_2 = {\displaystyle \frac{a \theta_1 + b}{c \theta_1 + d}}$ for
some $ \left( \begin{array}{cc} a & b \\ c & d \end{array} \right)
\in {\rm GL}_2({\mathbb Z})$ with $|ad - bc| = 1$. This is our version of
Rieffel's Theorem in "M. A. Rieffel and A. Schwarz, Morita equivalence of
multidimensional noncommutative tori, Internat. J. Math. 10, 2 (1999) 289-299"
which characterises Morita equivalence of quantum tori in the same terms.
The result in essence confirms that the representation $T_\theta$ in terms of
model-theoretic geometry is adequate to its original definition in
terms of non-commutative geometry.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,131 | Introducing the Simulated Flying Shapes and Simulated Planar Manipulator Datasets | We release two artificial datasets, Simulated Flying Shapes and Simulated
Planar Manipulator, that allow testing of the learning ability of video
processing systems. In particular, the datasets are meant as tools that allow
one to easily assess the sanity of deep neural network models that aim to
encode, reconstruct or predict video frame sequences. The datasets each consist
of 90000 videos.
The Simulated Flying Shapes dataset comprises scenes showing two objects of
equal shape (rectangle, triangle and circle) and size in which one object
approaches its counterpart. The Simulated Planar Manipulator shows a 3-DOF
planar manipulator that executes a pick-and-place task in which it has to place
a size-varying circle on a squared platform. Different from other widely used
datasets such as moving MNIST [1], [2], the two presented datasets involve
goal-oriented tasks (e.g. the manipulator grasping an object and placing it on
a platform), rather than showing random movements. This makes our datasets more
suitable for testing prediction capabilities and the learning of sophisticated
motions by a machine learning model. This technical document aims at providing
an introduction into the usage of both datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,132 | Couplings and quantitative contraction rates for Langevin dynamics | We introduce a new probabilistic approach to quantify convergence to
equilibrium for (kinetic) Langevin processes. In contrast to previous analytic
approaches that focus on the associated kinetic Fokker-Planck equation, our
approach is based on a specific combination of reflection and synchronous
coupling of two solutions of the Langevin equation. It yields contractions in a
particular Wasserstein distance, and it provides rather precise bounds for
convergence to equilibrium at the borderline between the overdamped and the
underdamped regime. In particular, we are able to recover kinetic behavior in
terms of explicit lower bounds for the contraction rate. For example, for a
rescaled double-well potential with local minima at distance $a$, we obtain a
lower bound for the contraction rate of order $\Omega (a^{-1})$ provided the
friction coefficient is of order $\Theta (a^{-1})$.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,133 | Stacked transfer learning for tropical cyclone intensity prediction | Tropical cyclone wind-intensity prediction is a challenging task considering
drastic changes in climate patterns over the last few decades. In order to develop
robust prediction models, one needs to consider different characteristics of
cyclones in terms of spatial and temporal characteristics. Transfer learning
incorporates knowledge from a related source dataset to complement a target
dataset, especially in cases where there is a lack of data. Stacking is a form of
ensemble learning focused on improving generalization that has recently been
used for transfer learning problems; this is referred to as transfer stacking.
In this paper, we employ transfer stacking as a means of studying the effects
of cyclones whereby we evaluate if cyclones in different geographic locations
can be helpful for improving generalization performance. Moreover, we use
conventional neural networks for evaluating the effects of cyclone duration
on prediction performance. Therefore, we develop an effective strategy that
evaluates the relationships between different types of cyclones through
transfer learning and conventional learning methods via neural networks.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,134 | Generalized Gray codes with prescribed ends of small dimensions | Given pairwise distinct vertices $\{\alpha_i , \beta_i\}^k_{i=1}$ of the
$n$-dimensional hypercube $Q_n$ such that the distance of $\alpha_i$ and
$\beta_i$ is odd, are there paths $P_i$ between $\alpha_i$ and $\beta_i$ such
that $\{V (P_i)\}^k_{i=1}$ partitions $V(Q_n)$? A positive solution for every
$n\ge1$ and $k=1$ is known as a Gray code of dimension $n$. In this paper we
settle this problem for small values of $n$.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,135 | Elementary-base cirquent calculus I: Parallel and choice connectives | Cirquent calculus is a proof system manipulating circuit-style constructs
rather than formulas. Using it, this article constructs a sound and complete
axiomatization CL16 of the propositional fragment of computability logic (the
game-semantically conceived logic of computational problems - see
this http URL) whose logical vocabulary consists
of negation and parallel and choice connectives, and whose atoms represent
elementary, i.e. moveless, games.
| 1 | 0 | 1 | 0 | 0 | 0 |
18,136 | Targeted and Imaging-guided In Vivo Photodynamic Therapy of Tumors Using Dual-functional, Aggregation-induced Emission Nanoparticles | Dual-functional nanoparticles, with the property of aggregation-induced
emission and the capability of generating reactive oxygen species, were used to achieve
passive/active targeting of tumor. Good contrast in in vivo imaging and obvious
therapeutic efficiency were realized with a low dose of AIE nanoparticles as
well as a low power density of light, resulting in negligible side effects.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,137 | TOSC: an algorithm for the tomography of spotted transit chords | Photometric observations of planetary transits may show localized bumps,
called transit anomalies, due to the possible crossing of photospheric
starspots. The aim of this work is to analyze the transit anomalies and derive
the temperature profile inside the transit belt along the transit direction. We
develop the algorithm TOSC, a tomographic inverse-approach tool which, by means
of simple algebra, reconstructs the flux distribution along the transit belt.
We test TOSC against some simulated scenarios. We find that TOSC provides
robust results for light curves with photometric accuracies better than 1~mmag,
returning the spot-photosphere temperature contrast with an accuracy better
than 100~K. TOSC is also robust against the presence of unocculted spots,
provided that the apparent planetary radius given by the fit of the transit
light curve is used in place of the true radius. The analysis of real data with
TOSC returns results consistent with previous studies.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,138 | Bayesian uncertainty quantification for epidemic spread on networks | While there exist a number of mathematical approaches to modeling the spread
of disease on a network, analyzing such systems in the presence of uncertainty
introduces significant complexity. In scenarios where system parameters must be
inferred from limited observations, general approaches to uncertainty
quantification can generate approximate distributions of the unknown
parameters, but these methods often become computationally expensive if the
underlying disease model is complex. In this paper, we apply the recent
massively parallelizable Bayesian uncertainty quantification framework $\Pi4U$
to a model of a disease spreading on a network of communities, showing that the
method can accurately and tractably recover system parameters and select
optimal models in this setting.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,139 | Switching divergences for spectral learning in blind speech dereverberation | When recorded in an enclosed room, a sound signal will most certainly get
affected by reverberation. This not only undermines audio quality, but also
poses a problem for many human-machine interaction technologies that use speech
as their input. In this work, a new blind, two-stage dereverberation approach
based on a generalized $\beta$-divergence as a fidelity term over a non-negative
representation is proposed. The first stage consists of learning the spectral
structure of the signal solely from the observed spectrogram, while the second
stage is devoted to model reverberation. Both steps are taken by minimizing a
cost function in which the aim is placed either on constructing a dictionary or
on obtaining a good representation, by changing the divergence involved. In addition, an
approach for finding an optimal fidelity parameter for dictionary learning is
proposed. An algorithm for implementing the proposed method is described and
tested against state-of-the-art methods. Results show improvements for both
artificial reverberation and real recordings.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,140 | The correlation between the sizes of globular cluster systems and their host dark matter haloes | The sizes of entire systems of globular clusters (GCs) depend on the
formation and destruction histories of the GCs themselves, but also on the
assembly, merger and accretion history of the dark matter (DM) haloes that they
inhabit. Recent work has shown a linear relation between total mass of globular
clusters in the globular cluster system and the mass of its host dark matter
halo, calibrated from weak lensing. Here we extend this to GC system sizes, by
studying the radial density profiles of GCs around galaxies in nearby galaxy
groups. We find that radial density profiles of the GC systems are well fit
with a de Vaucouleurs profile. Combining our results with those from the
literature, we find a tight relationship ($\sim 0.2$ dex scatter) between the
effective radius of the GC system and the virial radius (or mass) of its host
DM halo. The steep non-linear dependence of this relationship ($R_{e, GCS}
\propto R_{200}^{2.5 - 3}$) is currently not well understood, but is an
important clue regarding the assembly history of DM haloes and of the GC
systems that they host.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,141 | Variation of field enhancement factor near the emitter tip | The field enhancement factor at the emitter tip and its variation in a close
neighbourhood determines the emitter current in a Fowler-Nordheim like
formulation. For an axially symmetric emitter with a smooth tip, it is shown
that the variation can be accounted for by a $\cos{\tilde{\theta}}$ factor in
appropriately defined normalized co-ordinates. This is shown analytically for a
hemi-ellipsoidal emitter and confirmed numerically for other emitter shapes
with locally quadratic tips.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,142 | Benchmark Environments for Multitask Learning in Continuous Domains | As demand drives systems to generalize to various domains and problems, the
study of multitask, transfer and lifelong learning has become an increasingly
important pursuit. In discrete domains, performance on the Atari game suite has
emerged as the de facto benchmark for assessing multitask learning. However, in
continuous domains there is a lack of agreement on standard multitask
evaluation environments which makes it difficult to compare different
approaches fairly. In this work, we describe a benchmark set of tasks that we
have developed in an extendable framework based on OpenAI Gym. We run a simple
baseline using Trust Region Policy Optimization and release the framework
publicly to be expanded and used for the systematic comparison of multitask,
transfer, and lifelong learning in continuous domains.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,143 | A Multiple Linear Regression Approach For Estimating the Market Value of Football Players in Forward Position | In this paper, market values of the football players in the forward positions
are estimated using multiple linear regression by including the physical and
performance factors in 2017-2018 season. Players from 4 major leagues of Europe
are examined, and by applying the test for homoscedasticity, a reasonable
regression model within the 0.10 significance level is built, and the most and
least influential factors are explained in detail.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,144 | Application of the Huang-Hilbert transform and natural time to the analysis of Seismic Electric Signal activities | The Huang-Hilbert transform is applied to Seismic Electric Signal (SES)
activities in order to decompose them into a number of Intrinsic Mode Functions
(IMFs) and study which of these functions better represent the SES. The results
are compared to those obtained from the analysis in a new time domain termed
natural time after having subtracted the magnetotelluric background from the
original signal. It is shown that the instantaneous amplitudes of the IMFs can
be used for the distinction of SES from artificial noises when combined with
the natural time analysis.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,145 | The Hurwitz Subgroups of $E_6(2)$ | We prove that the exceptional group $E_6(2)$ is not a Hurwitz group. In the
course of proving this, we complete the classification up to conjugacy of all
Hurwitz subgroups of $E_6(2)$, in particular, those isomorphic to $L_2(8)$ and
$L_3(2)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,146 | Three IQs of AI Systems and their Testing Methods | The rapid development of artificial intelligence has brought about the artificial
intelligence threat theory, as well as the problem of how to evaluate the
intelligence level of intelligent products. Both require a quantitative
method to evaluate the intelligence level of intelligence systems, including
human intelligence. Based on the standard intelligence system and the extended
Von Neumann architecture, this paper proposes General IQ, Service IQ and Value
IQ evaluation methods for intelligence systems, depending on different
evaluation purposes. Among them, the General IQ of intelligence systems is to
answer the question of whether the artificial intelligence can surpass the
human intelligence, which is reflected in putting the intelligence systems on
an equal status and conducting the unified evaluation. The Service IQ and Value
IQ of intelligence systems are used to answer the question of how the
intelligent products can better serve the human, reflecting the intelligence
and required cost of each intelligence system as a product in the process of
serving humans.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,147 | Polynomial functors in manifold calculus | Let M be a smooth manifold, and let O(M) be the poset of open subsets of M.
Manifold calculus, due to Goodwillie and Weiss, is a calculus of functors
suitable for studying contravariant functors (cofunctors) F: O(M)--> Top from
O(M) to the category of spaces. Weiss showed that polynomial cofunctors of
degree <= k are determined by their values on O_k(M), where O_k(M) is the full
subposet of O(M) whose objects are open subsets diffeomorphic to the disjoint
union of at most k balls. Afterwards Pryor showed that one can replace O_k(M)
by more general subposets and still recover the same notion of polynomial
cofunctor. In this paper, we generalize these results to cofunctors from O(M)
to any simplicial model category C. If conf(k, M) stands for the unordered
configuration space of k points in M, we also show that the category of
homogeneous cofunctors O(M) --> C of degree k is weakly equivalent to the
category of linear cofunctors O(conf(k, M)) --> C provided that C has a zero
object. Using a completely different approach, we also show that if C is a
general model category and F: O_k(M) --> C is an isotopy cofunctor, then the
homotopy right Kan extension of F along the inclusion O_k(M) --> O(M) is also
an isotopy cofunctor.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,148 | Tikhonov Regularization for Long Short-Term Memory Networks | It is a well-known fact that adding noise to the input data often improves
network performance. While the dropout technique may cause memory loss when
applied to recurrent connections, Tikhonov regularization, which can be
regarded as training with additive noise, avoids this issue naturally,
though it implies regularizer derivation for different architectures. In case
of feedforward neural networks this is straightforward, while for networks with
recurrent connections and complicated layers it leads to some difficulties. In
this paper, a Tikhonov regularizer is derived for Long Short-Term Memory (LSTM)
networks. Although it is independent of time for simplicity, it considers
interaction between weights of the LSTM unit, which in theory makes it possible
to regularize the unit with complicated dependences by using only one parameter
that measures the input data perturbation. The regularizer that is proposed in
this paper has three parameters: one to control the regularization process, and
the other two to maintain computational stability while the network is being trained.
The theory developed in this paper can be applied to get such regularizers for
different recurrent neural networks with Hadamard products and Lipschitz
continuous functions.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,149 | On the shape operator of relatively parallel hypersurfaces in the $n$-dimensional relative differential geometry | We deal with hypersurfaces in the framework of the $n$-dimensional relative
differential geometry. We consider a hypersurface $\varPhi$ of
$\mathbb{R}^{n+1}$ with position vector field $\mathbf{x}$, which is relatively
normalized by a relative normalization $\mathbf{y}$. Then $\mathbf{y}$ is also
a relative normalization of every member of the one-parameter family
$\mathcal{F}$ of hypersurfaces $\varPhi_\mu$ with position vector field
$$\mathbf{x}_\mu = \mathbf{x} + \mu \, \mathbf{y},$$ where $\mu$ is a real
constant. We call every hypersurface $\varPhi_\mu \in \mathcal{F}$ relatively
parallel to $\varPhi$ at the "relative distance" $\mu$. In this paper we study
(a) the shape (or Weingarten) operator,
(b) the relative principal curvatures,
(c) the relative mean curvature functions and
(d) the affine normalization
of a relatively parallel hypersurface $\left( \varPhi_\mu,\mathbf{y}\right)$
to $\left(\varPhi,\mathbf{y}\right)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,150 | Soft Pneumatic Gelatin Actuator for Edible Robotics | We present a fully edible pneumatic actuator based on gelatin-glycerol
composite. The actuator is monolithic, fabricated via a molding process, and
measures 90 mm in length, 20 mm in width, and 17 mm in thickness. Thanks to the
composite mechanical characteristics similar to those of silicone elastomers,
the actuator exhibits a bending angle of 170.3 ° and a blocked force of
0.34 N at the applied pressure of 25 kPa. These values are comparable to
elastomer based pneumatic actuators. As a validation example, two actuators are
integrated to form a gripper capable of handling various objects, highlighting
the high performance and applicability of the edible actuator. These edible
actuators, combined with other recent edible materials and electronics, could
lay the foundation for a new type of edible robots.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,151 | Deep learning for extracting protein-protein interactions from biomedical literature | State-of-the-art methods for protein-protein interaction (PPI) extraction are
primarily feature-based or kernel-based by leveraging lexical and syntactic
information. But how to incorporate such knowledge in the recent deep learning
methods remains an open question. In this paper, we propose a multichannel
dependency-based convolutional neural network model (McDepCNN). It applies one
channel to the embedding vector of each word in the sentence, and another
channel to the embedding vector of the head of the corresponding word.
Therefore, the model can use richer information obtained from different
channels. Experiments on two public benchmarking datasets, AIMed and BioInfer,
demonstrate that McDepCNN compares favorably to the state-of-the-art
rich-feature and single-kernel based methods. In addition, McDepCNN achieves
24.4% relative improvement in F1-score over the state-of-the-art methods on
cross-corpus evaluation and 12% improvement in F1-score over kernel-based
methods on "difficult" instances. These results suggest that McDepCNN
generalizes more easily over different corpora, and is capable of capturing
long distance features in the sentences.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,152 | Hierarchical VampPrior Variational Fair Auto-Encoder | Decision making is a process that is extremely prone to different biases. In
this paper we consider learning fair representations that aim at removing
nuisance (sensitive) information from the decision process. For this purpose,
we propose to use deep generative modeling and adapt a hierarchical Variational
Auto-Encoder to learn these fair representations. Moreover, we utilize the
mutual information as a useful regularizer for enforcing fairness of a
representation. In experiments on two benchmark datasets and two scenarios
where the sensitive variables are fully and partially observable, we show that
the proposed approach either outperforms or performs on par with the current
best model.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,153 | MHD Models of Gamma-ray Emission in WR 11 | Recent reports claiming tentative association of the massive star binary
system gamma^2 Velorum (WR 11) with a high-energy gamma-ray source observed by
Fermi-LAT contrast the so-far exclusive role of Eta Carinae as the hitherto
only detected gamma-ray emitter in the source class of particle-accelerating
colliding-wind binary systems. We aim to shed light on this claim of
association by providing dedicated model predictions for the nonthermal photon
emission spectrum of WR 11. We use three-dimensional magneto-hydrodynamic
modeling to trace the structure and conditions of the wind-collision region of
WR 11 throughout its 78.5 day orbit, including the important effect of
radiative braking in the stellar winds. A transport equation is then solved in
the wind-collision region to determine the population of relativistic electrons
and protons which are subsequently used to compute nonthermal photon emission
components. We find that - if WR 11 is indeed confirmed as the object
responsible for the observed gamma-ray emission - its radiation will unavoidably
be of hadronic origin, owing to the strong radiation fields in the binary system
which inhibit the acceleration of electrons to energies sufficiently high for
observable inverse Compton radiation. Different conditions in the wind-collision
region near the apastron and periastron configurations lead to significant
variability on orbital time scales. The bulk of the hadronic gamma-ray emission
originates at a 400 solar radii wide region at the apex.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,154 | Caulking the Leakage Effect in MEEG Source Connectivity Analysis | Simplistic estimation of neural connectivity in MEEG sensor space is
impossible due to volume conduction. The only viable alternative is to carry
out connectivity estimation in source space. Among the neuroscience community
this is claimed to be impossible or misleading due to Leakage: linear mixing of
the reconstructed sources. To address this problem, we propose a novel
solution method that caulks the Leakage in MEEG source activity and
connectivity estimates: BC-VARETA. It is based on a joint estimation of source
activity and connectivity in the frequency domain representation of MEEG time
series. To achieve this, we go beyond current methods that assume a fixed
gaussian graphical model for source connectivity. In contrast we estimate this
graphical model in a Bayesian framework by placing priors on it, which allows
for highly optimized computations of the connectivity, via a new procedure
based on the local quadratic approximation under quite general prior models. A
further contribution of this paper is the rigorous definition of leakage via
the Spatial Dispersion Measure and Earth Movers Distance based on the geodesic
distances over the cortical manifold. Both measures are extended for the first
time to quantify Connectivity Leakage by defining them on the cartesian product
of cortical manifolds. Using these measures, we show that BC-VARETA outperforms
most state of the art inverse solvers by several orders of magnitude.
| 0 | 0 | 0 | 0 | 1 | 0 |
18,155 | New ADS Functionality for the Curator | In this paper we provide an update concerning the operations of the NASA
Astrophysics Data System (ADS), its services and user interface, and the
content currently indexed in its database. As the primary information system
used by researchers in Astronomy, the ADS aims to provide a comprehensive index
of all scholarly resources appearing in the literature. With the current effort
in our community to support data and software citations, we discuss what steps
the ADS is taking to provide the needed infrastructure in collaboration with
publishers and data providers. A new API provides access to the ADS search
interface, metrics, and libraries allowing users to programmatically automate
discovery and curation tasks. The new ADS interface supports a greater
integration of content and services with a variety of partners, including ORCID
claiming, indexing of SIMBAD objects, and article graphics from a variety of
publishers. Finally, we highlight how librarians can facilitate the ingest of
gray literature that they curate into our system.
| 1 | 1 | 0 | 0 | 0 | 0 |
18,156 | Survey of reasoning using Neural networks | Reasoning and inference require processing as well as memory skills in humans.
Neural networks are able to process tasks like image recognition (better than
humans) but in memory aspects are still limited (by attention mechanism, size).
Recurrent Neural Networks (RNNs) and their modified version, LSTMs, are able to
handle small memory contexts, but as the context grows beyond a threshold, they
become difficult to use. The solution is to use large external memory. Still,
this poses many challenges, such as how to train neural networks for discrete
memory representation and how to describe long-term dependencies in sequential data.
Most prominent neural architectures for such tasks are Memory networks:
inference components combined with long term memory and Neural Turing Machines:
neural networks using external memory resources. Also, additional techniques
like attention mechanism, end to end gradient descent on discrete memory
representation are needed to support these solutions. Preliminary results of
above neural architectures on simple algorithms (sorting, copying) and Question
Answering (based on story, dialogs) application are comparable with the state
of the art. In this paper, I explain these architectures (in general), the
additional techniques used and the results of their application.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,157 | Recurrent Neural Network-based Model Predictive Control for Continuous Pharmaceutical Manufacturing | The pharmaceutical industry has witnessed exponential growth in transforming
operations towards continuous manufacturing to effectively achieve increased
profitability, reduced waste, and extended product range. Model Predictive
Control (MPC) can be applied for enabling this vision, in providing superior
regulation of critical quality attributes. For MPC, obtaining a workable model
is of fundamental importance, especially in the presence of complex reaction
kinetics and process dynamics. Whilst physics-based models are desirable, it is
not always practical to obtain one effective and fit-for-purpose model.
Instead, within industry, data-driven system-identification approaches have
been found to be useful and widely deployed in MPC solutions. In this work, we
demonstrated the applicability of Recurrent Neural Networks (RNNs) for MPC
applications in continuous pharmaceutical manufacturing. We have shown that
RNNs are especially well-suited for modeling dynamical systems due to their
mathematical structure, and that satisfactory closed-loop control performance can
be achieved with MPC in continuous pharmaceutical manufacturing.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,158 | A unitary "quantization commutes with reduction" map for the adjoint action of a compact Lie group | Let $K$ be a simply connected compact Lie group and $T^{\ast}(K)$ its
cotangent bundle. We consider the problem of "quantization commutes with
reduction" for the adjoint action of $K$ on $T^{\ast}(K).$ We quantize both
$T^{\ast}(K)$ and the reduced phase space using geometric quantization with
half-forms. We then construct a geometrically natural map from the space of
invariant elements in the quantization of $T^{\ast}(K)$ to the quantization of
the reduced phase space. We show that this map is a constant multiple of a
unitary map.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,159 | Electron Paramagnetic Resonance Spectroscopy of Er$^{3+}$:Y$_2$SiO$_5$ Using Josephson Bifurcation Amplifier: Observation of Hyperfine and Quadrupole Structures | We performed magnetic field and frequency tunable electron paramagnetic
resonance spectroscopy of an Er$^{3+}$ doped Y$_2$SiO$_5$ crystal by observing
the change in flux induced on a direct current-superconducting quantum
interference device (dc-SQUID) loop of a tunable Josephson bifurcation
amplifier. The observed spectra show multiple transitions which agree well with
the simulated energy levels, taking into account the hyperfine and quadrupole
interactions of $^{167}$Er. The sensing volume is about 0.15 pl, and our
inferred measurement sensitivity (limited by external flux noise) is
approximately $1.5\times10^4$ electron spins for a 1 s measurement. The
sensitivity value is two orders of magnitude better than similar schemes using
dc-SQUID switching readout.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,160 | Step Detection Algorithm For Accurate Distance Estimation Using Dynamic Step Length | In this paper, a new Smartphone-sensor-based algorithm is proposed for
accurate distance estimation. The algorithm consists of two phases: the first
phase detects the peaks from the Smartphone accelerometer sensor, and the
second detects the step length, which varies from step to step. The
proposed algorithm is tested and implemented in a real environment and shows
promising results. Unlike conventional approaches, the error of the proposed
algorithm is fixed and is not affected by long distances.
Keywords: distance estimation, peaks, step length, accelerometer.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,161 | Digging Into Self-Supervised Monocular Depth Estimation | Depth-sensing is important for both navigation and scene understanding.
However, procuring RGB images with corresponding depth data for training deep
models is challenging; large-scale, varied datasets with ground-truth training
data are scarce. Consequently, several recent methods have proposed treating
the training of monocular color-to-depth estimation networks as an image
reconstruction problem, thus forgoing the need for ground truth depth.
There are multiple concepts and design decisions for these networks that seem
sensible, but give mixed or surprising results when tested. For example,
binocular stereo as the source of self-supervision seems cumbersome and hard to
scale, yet results are less blurry compared to training with monocular videos.
Such decisions also interplay with questions about architectures, loss
functions, image scales, and motion handling. In this paper, we propose a
simple yet effective model, with several general architectural and loss
innovations, that surpasses all other self-supervised depth estimation
approaches on KITTI.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,162 | The Causality/Repair Connection in Databases: Causality-Programs | In this work, answer-set programs that specify repairs of databases are used
as a basis for solving computational and reasoning problems about causes for
query answers from databases.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,163 | Two-temperature logistic regression based on the Tsallis divergence | We develop a variant of multiclass logistic regression that achieves three
properties: i) We minimize a non-convex surrogate loss which makes the method
robust to outliers, ii) our method allows transitioning between non-convex and
convex losses by the choice of the parameters, iii) the surrogate loss is Bayes
consistent, even in the non-convex case. The algorithm has one weight vector
per class and the surrogate loss is a function of the linear activations (one
per class). The surrogate loss of an example with linear activation vector
$\mathbf{a}$ and class $c$ has the form $-\log_{t_1} \exp_{t_2} (a_c -
G_{t_2}(\mathbf{a}))$ where the two temperatures $t_1$ and $t_2$ "temper" the
$\log$ and $\exp$, respectively, and $G_{t_2}$ is a generalization of the
log-partition function. We motivate this loss using the Tsallis divergence. As
the temperature of the logarithm becomes smaller than the temperature of the
exponential, the surrogate loss becomes "more quasi-convex". Various tunings of
the temperatures recover previous methods and tuning the degree of
non-convexity is crucial in the experiments. The choice $t_1<1$ and $t_2>1$
performs best experimentally. We explain this by showing that $t_1 < 1$ caps
the surrogate loss and $t_2 >1$ makes the predictive distribution have a heavy
tail.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,164 | Bootstrapped synthetic likelihood | Approximate Bayesian computation (ABC) and synthetic likelihood (SL)
techniques have enabled the use of Bayesian inference for models that may be
simulated, but for which the likelihood cannot be evaluated pointwise at values
of an unknown parameter $\theta$. The main idea in ABC and SL is to, for
different values of $\theta$ (usually chosen using a Monte Carlo algorithm),
build estimates of the likelihood based on simulations from the model
conditional on $\theta$. The quality of these estimates determines the
efficiency of an ABC/SL algorithm. In standard ABC/SL, the only means to
improve an estimated likelihood at $\theta$ is to simulate more times from the
model conditional on $\theta$, which is infeasible in cases where the simulator
is computationally expensive. In this paper we describe how to use
bootstrapping as a means for improving SL estimates whilst using fewer
simulations from the model, and also investigate its use in ABC. Further, we
investigate the use of the bag of little bootstraps as a means for applying
this approach to large datasets, yielding Monte Carlo algorithms that
accurately approximate posterior distributions whilst only simulating
subsamples of the full data. Examples of the approach applied to i.i.d.,
temporal and spatial data are given.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,165 | The Weinstein conjecture for iterated planar contact structures | In this paper, we introduce the notions of an iterated planar Lefschetz
fibration and an iterated planar open book decomposition and prove the
Weinstein conjecture for contact manifolds supporting an open book that has
iterated planar pages. For $n\geq 1$, we show that a $(2n+1)$-dimensional
contact manifold $M$ supporting an iterated planar open book decomposition
satisfies the Weinstein conjecture.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,166 | Privacy-Preserving Deep Learning via Weight Transmission | This paper considers the scenario that multiple data owners wish to apply a
machine learning method over the combined dataset of all owners to obtain the
best possible learning output but do not want to share the local datasets owing
to privacy concerns. We design systems for the scenario that the stochastic
gradient descent (SGD) algorithm is used as the machine learning method because
SGD (or its variants) is at the heart of recent deep learning techniques over
neural networks. Our systems differ from existing systems in the following
features: {\bf (1)} any activation function can be used, meaning that no
privacy-preserving-friendly approximation is required; {\bf (2)} gradients
computed by SGD are not shared but the weight parameters are shared instead;
and {\bf (3)} robustness against colluding parties even in the extreme case
that only one honest party exists. We prove that our systems, while
privacy-preserving, achieve the same learning accuracy as SGD and hence retain
the merit of deep learning with respect to accuracy. Finally, we conduct
several experiments using benchmark datasets, and show that our systems
outperform previous systems in terms of learning accuracy.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,167 | CORRECT: Code Reviewer Recommendation in GitHub Based on Cross-Project and Technology Experience | Peer code review locates common coding rule violations and simple logical
errors in the early phases of software development, and thus reduces overall
cost. However, in GitHub, identifying an appropriate code reviewer for a pull
request is a non-trivial task given that reliable information for reviewer
identification is often not readily available. In this paper, we propose a code
reviewer recommendation technique that considers not only the relevant
cross-project work history (e.g., external library experience) but also the
experience of a developer in certain specialized technologies associated with a
pull request for determining her expertise as a potential code reviewer. We
first motivate our technique using an exploratory study with 10 commercial
projects and 10 associated libraries external to those projects. Experiments
using 17,115 pull requests from 10 commercial projects and six open source
projects show that our technique provides 85%--92% recommendation accuracy,
about 86% precision and 79%--81% recall in code reviewer recommendation, which
are highly promising. Comparison with the state-of-the-art technique also
validates the empirical findings and the superiority of our recommendation
technique.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,168 | Charged Perfect Fluid Distribution for Cosmological Universe Interacting With Massive Scalar Field in Brans-Dicke Theory | Considering a spherically-symmetric non-static cosmological flat model of
Robertson-Walker universe we have investigated the problem of perfect fluid
distribution interacting with the gravitational field in presence of massive
scalar field and electromagnetic field in B-D theory. Exact solutions have been
obtained by using a general approach of solving the partial differential
equations and it has been observed that the electromagnetic field cannot
survive for the cosmological flat model due to the influence of the massive
scalar field.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,169 | Finite homogeneous geometries | This paper reproduces the text of a part of the Author's DPhil thesis. It
gives a proof of the classification of non-trivial, finite homogeneous
geometries of sufficiently high dimension which does not depend on the
classification of the finite simple groups.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,170 | Weakly-Private Information Retrieval | Private information retrieval (PIR) protocols make it possible to retrieve a
file from a database without disclosing any information about the identity of
the file being retrieved. These protocols have been rigorously explored from an
information-theoretic perspective in recent years. While existing protocols
strictly impose that no information is leaked on the file's identity, this work
initiates the study of the tradeoffs that can be achieved by relaxing the
requirement of perfect privacy. In case the user is willing to leak some
information on the identity of the retrieved file, we study how the PIR rate,
as well as the upload cost and access complexity, can be improved. For the
particular case of replicated servers, we propose two weakly-private
information retrieval schemes based on two recent PIR protocols and a family of
schemes based on partitioning. Lastly, we compare the performance of the
proposed schemes.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,171 | Concentrated Differentially Private Gradient Descent with Adaptive per-Iteration Privacy Budget | Iterative algorithms, like gradient descent, are common tools for solving a
variety of problems, such as model fitting. For this reason, there is interest
in creating differentially private versions of them. However, their conversion
to differentially private algorithms is often naive. For instance, a fixed
number of iterations are chosen, the privacy budget is split evenly among them,
and at each iteration, parameters are updated with a noisy gradient. In this
paper, we show that gradient-based algorithms can be improved by a more careful
allocation of privacy budget per iteration. Intuitively, at the beginning of
the optimization, gradients are expected to be large, so that they do not need
to be measured as accurately. However, as the parameters approach their optimal
values, the gradients decrease and hence need to be measured more accurately.
We add a basic line-search capability that helps the algorithm decide when more
accurate gradient measurements are necessary. Our gradient descent algorithm
works with the recently introduced zCDP version of differential privacy. It
outperforms prior algorithms for model fitting and is competitive with the
state-of-the-art for $(\epsilon,\delta)$-differential privacy, a strictly
weaker definition than zCDP.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,172 | "The universal meaning of the quantum of action", by Jun Ishiwara | Commented translation of the paper "Universelle Bedeutung des
Wirkungsquantums", published by Jun Ishiwara in German in the Proceedings of
Tokyo Mathematico-Physical Society 8 106-116 (1915). In his work, Ishiwara,
tenured at Sendai University, Japan, proposed - simultaneously with Arnold
Sommerfeld, William Wilson and Niels Bohr in Europe - the phase-space-integral
quantization, a rule that would be incorporated into the old-quantum-mechanics
formalism.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,173 | Barrier to recombination of oppositely charged large polarons | Electronic charge carriers in ionic materials can self-trap to form large
polarons. Interference between the ionic displacements associated with
oppositely charged large polarons increases as they approach one another.
Initially this interference produces an attractive potential that fosters their
merger. However, for small enough separations this interference generates a
repulsive interaction between oppositely charged large polarons. In suitable
circumstances this repulsion can overwhelm their direct Coulomb attraction.
Then the resulting net repulsion between oppositely charged large polarons
constitutes a potential barrier which impedes their recombination.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,174 | Metric Reduction and Generalized Holomorphic Structures | In this paper, metric reduction in generalized geometry is investigated. We
show how the Bismut connections on the quotient manifold are obtained from
those on the original manifold. The result facilitates the analysis of
generalized K\"ahler reduction, which motivates the concept of metric
generalized principal bundles and our approach to construct a family of
generalized holomorphic line bundles over $\mathbb{C}P^2$ equipped with some
non-trivial generalized K\"ahler structures.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,175 | Learning of Optimal Forecast Aggregation in Partial Evidence Environments | We consider the forecast aggregation problem in repeated settings, where the
forecasts are done on a binary event. At each period multiple experts provide
forecasts about an event. The goal of the aggregator is to aggregate those
forecasts into a subjective accurate forecast. We assume that experts are
Bayesian; namely they share a common prior, each expert is exposed to some
evidence, and each expert applies Bayes rule to deduce his forecast. The
aggregator is ignorant with respect to the information structure (i.e.,
distribution over evidence) according to which experts make their prediction.
The aggregator observes the experts' forecasts only. At the end of each period
the actual state is realized. We focus on the question whether the aggregator
can learn to aggregate optimally the forecasts of the experts, where the
optimal aggregation is the Bayesian aggregation that takes into account all the
information (evidence) in the system.
We consider the class of partial evidence information structures, where each
expert is exposed to a different subset of conditionally independent signals.
Our main results are positive; We show that optimal aggregation can be learned
in polynomial time in a quite wide range of instances of the partial evidence
environments. We provide a tight characterization of the instances where
learning is possible and impossible.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,176 | Distributed Unknown-Input-Observers for Cyber Attack Detection and Isolation in Formation Flying UAVs | In this paper, cyber attack detection and isolation is studied on a network
of UAVs in a formation flying setup. As the UAVs communicate to reach consensus
on their states while making the formation, the communication network among the
UAVs makes them vulnerable to a potential attack from malicious adversaries.
Two types of attacks pertinent to a network of UAVs have been considered: a
node attack on the UAVs and a deception attack on the communication between the
UAVs. UAV formation control is presented using a consensus algorithm to reach a
pre-specified formation. Node and communication-path deception cyber attacks on
the UAV network are considered, with their respective models, in the formation
setup. To detect these cyber attacks, a distributed fault detection scheme
based on a bank of Unknown Input Observers (UIOs) is proposed to detect and
identify the compromised UAV in the formation. A rule based on the residuals
generated by the bank of UIOs is used to detect attacks and identify the
compromised UAV in the formation. Further, an algorithm is developed to remove
the faulty UAV from the network once an attack is detected, isolating the
compromised UAV while maintaining formation flight with a missing UAV node.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,177 | Exponential Decay of the lengths of Spectral Gaps for Extended Harper's Model with Liouvillean Frequency | In this paper, we study the non-self dual extended Harper's model with
Liouvillean frequency. By establishing quantitative reducibility results
together with the averaging method, we prove that the lengths of spectral gaps
decay exponentially.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,178 | Topological Landau-Zener Bloch Oscillations in Photonic Floquet Lieb Lattices | The Lieb Lattice exhibits intriguing properties that are of general interest
in both the fundamental physics and practical applications. Here, we
investigate the topological Landau-Zener Bloch oscillation in a photonic
Floquet Lieb lattice, where dimerized helical waveguides are constructed to
realize a synthetic spin-orbit interaction through the Floquet mechanism,
allowing us to study the impact of the topological transition from trivial gaps
to non-trivial ones. The compact localized states of the flat bands, supported
by the local symmetry of the Lieb lattice, are associated with the other bands
through a topological invariant, the Chern number, and become involved in
Landau-Zener transitions during Bloch oscillation. Importantly, the non-trivial
geometrical phases after
topological transitions will be taken into account for constructive and
destructive interferences of wave functions. The numerical calculations of
continuum photonic medium demonstrate reasonable agreements with theoretical
tight-binding model. Our results provide an ongoing effort to realize designed
quantum materials with tailored properties.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,179 | Assessing the impact of bulk and shear viscosities on large scale structure formation | The effects of both bulk and shear viscosities on the perturbations relevant
for structure formation in late-time cosmology are analyzed. It is
shown that shear viscosity can be as effective as the bulk viscosity on
suppressing the growth of perturbations and delaying the nonlinear regime. A
statistical analysis of the shear and bulk viscous effects is performed and
some constraints on these viscous effects are given.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,180 | Correlations in eigenfunctions of quantum chaotic systems with sparse Hamiltonian matrices | In most realistic models for quantum chaotic systems, the Hamiltonian
matrices in unperturbed bases have a sparse structure. We study correlations in
eigenfunctions of such systems and derive explicit expressions for some of the
correlation functions with respect to energy. The analytical results are tested
in several models by numerical simulations. An application is given for a
relation between transition probabilities.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,181 | Fast Trajectory Optimization for Legged Robots using Vertex-based ZMP Constraints | This paper combines the fast Zero-Moment-Point (ZMP) approaches that work
well in practice with the broader range of capabilities of a Trajectory
Optimization formulation, by optimizing over body motion, footholds and Center
of Pressure simultaneously. We introduce a vertex-based representation of the
support-area constraint, which can treat arbitrarily oriented point-, line-,
and area-contacts uniformly. This generalization allows us to create motions
such as quadrupedal walking, trotting, bounding, pacing, combinations and
transitions between these, limping, bipedal walking and push-recovery all with
the same approach. This formulation constitutes a minimal representation of the
physical laws (unilateral contact forces) and kinematic restrictions (range of
motion) in legged locomotion, which allows us to generate various motion in
less than a second. We demonstrate the feasibility of the generated motions on
a real quadruped robot.
| 1 | 0 | 1 | 0 | 0 | 0 |
18,182 | Electric Field Properties inside Central Gap of Dipole Micro/Nano Antennas Operating at 30 THz | This work investigates the influence of geometric variations in dipole
micro/nano antennas, regarding their implications on the characteristics of the
electric field inside the gap space of antenna monopoles. The gap is the
interface for a Metal-Insulator-Metal (MIM) rectifier diode, and it needs to be
carefully optimized, in order to allow better electric current generation by
tunneling current mechanisms. The arrangement (antenna + diode or rectenna) was
designed to operate around 30 Terahertz (THz).
| 0 | 1 | 0 | 0 | 0 | 0 |
18,183 | Towards Understanding the Invertibility of Convolutional Neural Networks | Several recent works have empirically observed that Convolutional Neural Nets
(CNNs) are (approximately) invertible. To understand this approximate
invertibility phenomenon and how to leverage it more effectively, we focus on a
theoretical explanation and develop a mathematical model of sparse signal
recovery that is consistent with CNNs with random weights. We give an exact
connection to a particular model of model-based compressive sensing (and its
recovery algorithms) and random-weight CNNs. We show empirically that several
learned networks are consistent with our mathematical analysis and then
demonstrate that with such a simple theoretical framework, we can obtain
reasonable reconstruction results on real images. We also discuss gaps
between our model assumptions and the CNN trained for classification in
practical scenarios.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,184 | An effective algorithm for hyperparameter optimization of neural networks | A major challenge in designing neural network (NN) systems is to determine
the best structure and parameters for the network given the data for the
machine learning problem at hand. Examples of parameters are the number of
layers and nodes, the learning rates, and the dropout rates. Typically, these
parameters are chosen based on heuristic rules and manually fine-tuned, which
may be very time-consuming, because evaluating the performance of a single
parametrization of the NN may require several hours. This paper addresses the
problem of choosing appropriate parameters for the NN by formulating it as a
box-constrained mathematical optimization problem, and applying a
derivative-free optimization tool that automatically and effectively searches
the parameter space. The optimization tool employs a radial basis function
model of the objective function (the prediction accuracy of the NN) to
accelerate the discovery of configurations yielding high accuracy. Candidate
configurations explored by the algorithm are trained to a small number of
epochs, and only the most promising candidates receive full training. The
performance of the proposed methodology is assessed on benchmark sets and in
the context of predicting drug-drug interactions, showing promising results.
The optimization tool used in this paper is open-source.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,185 | On estimation in varying coefficient models for sparse and irregularly sampled functional data | In this paper, we study a smoothness regularization method for a varying
coefficient model based on sparse and irregularly sampled functional data which
is contaminated with some measurement errors. We estimate the one-dimensional
covariance and cross-covariance functions of the underlying stochastic
processes based on a reproducing kernel Hilbert space approach. We then obtain
least squares estimates of the coefficient functions. Simulation studies
demonstrate that the proposed method has good performance. We illustrate our
method by an analysis of longitudinal primary biliary liver cirrhosis data.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,186 | Large-degree asymptotics of rational Painleve-IV functions associated to generalized Hermite polynomials | The Painleve-IV equation has three families of rational solutions generated
by the generalized Hermite polynomials. Each family is indexed by two positive
integers m and n. These functions have applications to nonlinear wave
equations, random matrices, fluid dynamics, and quantum mechanics. Numerical
studies suggest the zeros and poles form a deformed n by m rectangular grid.
Properly scaled, the zeros and poles appear to densely fill certain curvilinear
rectangles as m and n tend to infinity with r=m/n fixed. Generalizing a method
of Bertola and Bothner used to study rational Painleve-II functions, we express
the generalized Hermite rational Painleve-IV functions in terms of certain
orthogonal polynomials on the unit circle. Using the Deift-Zhou nonlinear
steepest-descent method, we asymptotically analyze the associated
Riemann-Hilbert problem in the limit as n tends to infinity with m=r*n for r
fixed. We obtain an explicit characterization of the boundary curve and
determine the leading-order asymptotic expansion of the functions in the
pole-free region.
| 0 | 1 | 1 | 0 | 0 | 0 |
18,187 | Lattice Boltzmann study of chemically-driven self-propelled droplets | We numerically study the behavior of self-propelled liquid droplets whose
motion is triggered by a Marangoni-like flow. This latter is generated by
variations of surfactant concentration which affect the droplet surface tension
promoting its motion. In the present paper a model for droplets with a third
amphiphilic component is adopted. The dynamics is described by Navier-Stokes
and convection-diffusion equations, solved by lattice Boltzmann method coupled
with finite-difference schemes. We focus on two cases. First the study of
self-propulsion of an isolated droplet is carried on and, then, the interaction
of two self-propelled droplets is investigated. In both cases, when the
surfactant migrates towards the interface, a quadrupolar vortex of the velocity
field forms inside the droplet and causes the motion. A weaker dipolar field
emerges instead when the surfactant is mainly diluted in the bulk. The dynamics
of two interacting droplets is more complex and strongly depends on their
reciprocal distance. If, in a head-on collision, droplets are close enough, the
velocity field initially attracts them until a motionless steady state is
achieved. If the droplets are vertically shifted, the hydrodynamic field leads
to an initial reciprocal attraction followed by a scattering along opposite
directions. This hydrodynamic interaction acts over a separation of a few
droplet radii; beyond that it becomes negligible and droplet motion is driven
only by the Marangoni effect. Finally, if one of the droplets is passive, it is
generally advected by the fluid flow generated by the active one.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,188 | RDMAvisor: Toward Deploying Scalable and Simple RDMA as a Service in Datacenters | RDMA is increasingly adopted by cloud computing platforms to provide low CPU
overhead, low latency, high throughput network services. On the other hand,
however, it is still challenging for developers to realize fast deployment of
RDMA-aware applications in the datacenter, since the performance is highly
related to many lowlevel details of RDMA operations. To address this problem,
we present a simple and scalable RDMA as Service (RaaS) to mitigate the impact
of RDMA operational details. RaaS provides careful message buffer management to
improve CPU/memory utilization and improve the scalability of RDMA operations.
These optimized designs lead to simple and flexible programming model for
common and knowledgeable users. We have implemented a prototype of RaaS, named
RDMAvisor, and evaluated its performance on a cluster with a large number of
connections. Our experiment results demonstrate that RDMAvisor achieves high
throughput for thousands of connections and maintains low CPU and memory
overhead through adaptive RDMA transport selection.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,189 | Towards a better understanding of the matrix product function approximation algorithm in application to quantum physics | We recently introduced a method to approximate functions of Hermitian Matrix
Product Operators or Tensor Trains that are of the form $\mathsf{Tr} f(A)$.
Functions of this type occur in several applications, most notably in quantum
physics. In this work we aim at extending the theoretical understanding of our
method by showing several properties of our algorithm that can be used to
detect and correct errors in its results. Most importantly, we show that there
exists a more computationally efficient version of our algorithm for certain
inputs. To illustrate the usefulness of our finding, we prove that several
classes of spin Hamiltonians in quantum physics fall into this input category.
We finally support our findings with numerical results obtained for an example
from quantum physics.
| 1 | 1 | 0 | 0 | 0 | 0 |
18,190 | Explicit polynomial sequences with maximal spaces of partial derivatives and a question of K. Mulmuley | We answer a question of K. Mulmuley: In [Efremenko-Landsberg-Schenck-Weyman]
it was shown that the method of shifted partial derivatives cannot be used to
separate the padded permanent from the determinant. Mulmuley asked if this
"no-go" result could be extended to a model without padding. We prove this is
indeed the case using the iterated matrix multiplication polynomial. We also
provide several examples of polynomials with maximal space of partial
derivatives, including the complete symmetric polynomials. We apply Koszul
flattenings to these polynomials to have the first explicit sequence of
polynomials with symmetric border rank lower bounds higher than the bounds
attainable via partial derivatives.
| 1 | 0 | 1 | 0 | 0 | 0 |
18,191 | Affine processes under parameter uncertainty | We develop a one-dimensional notion of affine processes under parameter
uncertainty, which we call non-linear affine processes. This is done as
follows: given a set of parameters for the process, we construct a
corresponding non-linear expectation on the path space of continuous processes.
By a general dynamic programming principle we link this non-linear expectation
to a variational form of the Kolmogorov equation, where the generator of a
single affine process is replaced by the supremum over all corresponding
generators of affine processes with parameters in the parameter set. This
non-linear affine process yields a tractable model for Knightian uncertainty,
especially for modelling interest rates under ambiguity.
We then develop an appropriate Ito-formula, the respective term-structure
equations and study the non-linear versions of the Vasicek and the
Cox-Ingersoll-Ross (CIR) model. Thereafter we introduce the non-linear
Vasicek-CIR model. This model is particularly suitable for modelling interest
rates when one does not want to restrict the state space a priori and hence the
approach solves this modelling issue arising with negative interest rates.
| 0 | 0 | 0 | 0 | 0 | 1 |
18,192 | Plane graphs without 4- and 5-cycles and without ext-triangular 7-cycles are 3-colorable | Listed as No. 53 among the one hundred famous unsolved problems in [J. A.
Bondy, U. S. R. Murty, Graph Theory, Springer, Berlin, 2008] is Steinberg's
conjecture, which states that every planar graph without 4- and 5-cycles is
3-colorable. In this paper, we show that plane graphs without 4- and 5-cycles
are 3-colorable if they have no ext-triangular 7-cycles. This implies that (1)
planar graphs without 4-, 5-, 7-cycles are 3-colorable, and (2) planar graphs
without 4-, 5-, 8-cycles are 3-colorable, which cover a number of known results
in the literature motivated by Steinberg's conjecture.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,193 | Temporal Logic Task Planning and Intermittent Connectivity Control of Mobile Robot Networks | In this paper, we develop a distributed intermittent communication and task
planning framework for mobile robot teams. The goal of the robots is to
accomplish complex tasks, captured by local Linear Temporal Logic formulas, and
share the collected information with all other robots and possibly also with a
user. Specifically, we consider situations where the robot communication
capabilities are not sufficient to form reliable and connected networks while
the robots move to accomplish their tasks. In this case, intermittent
communication protocols are necessary that allow the robots to temporarily
disconnect from the network in order to accomplish their tasks free of
communication constraints. We assume that the robots can only communicate with
each other when they meet at common locations in space. Our distributed control
framework jointly determines local plans that allow all robots to fulfill their
assigned temporal tasks, sequences of communication events that guarantee
information exchange infinitely often, and optimal communication locations that
minimize a desired distance metric. Simulation results verify the efficacy of
the proposed controllers.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,194 | Second-order analysis in second-order cone programming | The paper conducts a second-order variational analysis for an important class
of nonpolyhedral conic programs generated by the so-called
second-order/Lorentz/ice-cream cone $Q$. On the one hand, we prove that the
indicator function of $Q$ is always twice epi-differentiable and apply this
result to characterizing the uniqueness of Lagrange multipliers at stationary
points together with an error bound estimate in the general second-order cone
setting involving ${\cal C}^2$-smooth data. On the other hand, we precisely
calculate the graphical derivative of the normal cone mapping to $Q$ under the
weakest metric subregularity constraint qualification and then give an
application of the latter result to a complete characterization of isolated
calmness for perturbed variational systems associated with second-order cone
programs. The obtained results seem to be the first in the literature in these
directions for nonpolyhedral problems without imposing any nondegeneracy
assumptions.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,195 | Understanding Deep Learning Performance through an Examination of Test Set Difficulty: A Psychometric Case Study | Interpreting the performance of deep learning models beyond test set accuracy
is challenging. Characteristics of individual data points are often not
considered during evaluation, and each data point is treated equally. We
examine the impact of a test set question's difficulty to determine if there is
a relationship between difficulty and performance. We model difficulty using
well-studied psychometric methods on human response patterns. Experiments on
Natural Language Inference (NLI) and Sentiment Analysis (SA) show that the
likelihood of answering a question correctly is impacted by the question's
difficulty. As DNNs are trained with more data, easy examples are learned more
quickly than hard examples.
| 1 | 0 | 0 | 0 | 0 | 0 |
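A minimal sketch of the psychometric idea above (assumptions, not the paper's exact pipeline): estimate a crude item difficulty from human response patterns as the negative logit of the proportion answering correctly, a first-order Rasch-style approximation; one can then check whether model accuracy drops on higher-difficulty items. The toy data are invented for illustration.

```python
# Sketch: crude item difficulty from a binary response matrix,
# difficulty_j = -logit(p_j), where p_j is the fraction answering item j right.
import math

def item_difficulties(responses):
    """responses[i][j] = 1 if respondent i answered item j correctly."""
    n_items = len(responses[0])
    diffs = []
    for j in range(n_items):
        p = sum(r[j] for r in responses) / len(responses)
        p = min(max(p, 1e-6), 1 - 1e-6)       # clamp to avoid infinite logits
        diffs.append(-math.log(p / (1 - p)))  # harder items -> larger values
    return diffs

# toy data: 4 respondents, 3 items (item 2 is hardest)
humans = [[1, 1, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]]
d = item_difficulties(humans)
assert d[0] < d[1] < d[2]
```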
18,196 | The homotopy theory of coalgebras over simplicial comonads | We apply the Acyclicity Theorem of Hess, Kerdziorek, Riehl, and Shipley
(recently corrected by Garner, Kedziorek, and Riehl) to establishing the
existence of model category structure on categories of coalgebras over comonads
arising from simplicial adjunctions, under mild conditions on the adjunction
and the associated comonad. We study three concrete examples of such
adjunctions where the left adjoint is comonadic and show that in each case the
component of the derived counit of the comparison adjunction at any fibrant
object is an isomorphism, while the component of the derived unit at any
1-connected object is a weak equivalence. To prove this last result, we explain
how to construct explicit fibrant replacements for 1-connected coalgebras in
the image of the canonical comparison functor from the Postnikov decompositions
of their underlying simplicial sets. We also show in one case that the derived
unit is precisely the Bousfield-Kan completion map.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,197 | Spatio-temporal analysis of regional unemployment rates: A comparison of model based approaches | This study aims to analyze the methodologies that can be used to estimate the
total number of unemployed, as well as the unemployment rates for 28 regions of
Portugal, designated as NUTS III regions, using model based approaches as
compared to the direct estimation methods currently employed by INE (National
Statistical Institute of Portugal). Model based methods, often known as small
area estimation methods (Rao, 2003), "borrow strength" from neighbouring
regions and in doing so, aim to compensate for the small sample sizes often
observed in these areas. Consequently, it is generally accepted that model
based methods tend to produce estimates with less variation. Another
benefit of employing model based methods is the possibility of including
auxiliary information in the form of variables of interest and latent random
structures. This study focuses on the application of Bayesian hierarchical
models to the Portuguese Labor Force Survey data from the 1st quarter of 2011
to the 4th quarter of 2013. Three different data modeling strategies are
considered and compared: Modeling of the total unemployed through Poisson,
Binomial and Negative Binomial models; modeling of rates using a Beta model;
and modeling of the three states of the labor market (employed, unemployed and
inactive) by a Multinomial model. The implementation of these models is based
on the \textit{Integrated Nested Laplace Approximation} (INLA) approach, except
for the Multinomial model, which is implemented using Markov chain Monte
Carlo (MCMC). Finally, a comparison of the performance of these
models, as well as the comparison of the results with those obtained by direct
estimation methods at NUTS III level are given.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,198 | Accelerating solutions of one-dimensional unsteady PDEs with GPU-based swept time-space decomposition | The expedient design of precision components in aerospace and other high-tech
industries requires simulations of physical phenomena often described by
partial differential equations (PDEs) without exact solutions. Modern design
problems require simulations with a level of resolution difficult to achieve in
reasonable amounts of time---even in effectively parallelized solvers. Though
the scale of the problem relative to available computing power is the greatest
impediment to accelerating these applications, significant performance gains
can be achieved through careful attention to the details of memory
communication and access. The swept time-space decomposition rule reduces
communication between sub-domains by exhausting the domain of influence before
communicating boundary values. Here we present a GPU implementation of the
swept rule, which modifies the algorithm for improved performance on this
processing architecture by prioritizing use of private (shared) memory,
avoiding interblock communication, and overwriting unnecessary values. It shows
significant improvement in the execution time of finite-difference solvers for
one-dimensional unsteady PDEs, producing speedups of 2--9$\times$ for a range
of problem sizes compared with simple GPU versions and
7--300$\times$ compared with parallel CPU versions. However, for a more
sophisticated one-dimensional system of equations discretized with a
second-order finite-volume scheme, the swept rule performs 1.2--1.9$\times$
worse than a standard implementation for all problem sizes.
| 1 | 1 | 0 | 0 | 0 | 0 |
18,199 | Efficiency Analysis of ASP Encodings for Sequential Pattern Mining Tasks | This article presents the use of Answer Set Programming (ASP) to mine
sequential patterns. ASP is a high-level declarative logic programming paradigm
for encoding combinatorial and optimization problems as well
as knowledge representation and reasoning. Thus, ASP is a good candidate for
implementing pattern mining with background knowledge, which has been a data
mining issue for a long time. We propose encodings of the classical sequential
pattern mining tasks within two representations of embeddings (fill-gaps vs
skip-gaps) and for various kinds of patterns: frequent, constrained and
condensed. We compare the computational performance of these encodings with
each other to get a good insight into the efficiency of ASP encodings. The
results show that the fill-gaps strategy is better on real problems due to
lower memory consumption. Finally, compared to a constraint programming
approach (CPSM), another declarative programming paradigm, our proposal showed
comparable performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,200 | Fukaya categories in Koszul duality theory | In this paper, we define $A_{\infty}$-Koszul duals for directed
$A_{\infty}$-categories in terms of twists in their $A_{\infty}$-derived
categories. Then, we compute a concrete formula of $A_{\infty}$-Koszul duals
for path algebras with directed $A_n$-type Gabriel quivers. To compute an
$A_\infty$-Koszul dual of such an algebra $A$, we construct a directed
subcategory of a Fukaya category which is $A_\infty$-derived equivalent to the
category of $A$-modules and compute Dehn twists as twists. The formula unveils
all the Ext groups of simple modules of the path algebras and their higher
composition structures.
| 0 | 0 | 1 | 0 | 0 | 0 |