ID | TITLE | ABSTRACT | Computer Science | Physics | Mathematics | Statistics | Quantitative Biology | Quantitative Finance
---|---|---|---|---|---|---|---|---
19,401 | Poincaré profiles of groups and spaces | We introduce a spectrum of monotone coarse invariants for metric measure
spaces called Poincaré profiles. The two extremes of this spectrum
determine the growth of the space, and the separation profile as defined by
Benjamini--Schramm--Timár. In this paper we focus on properties of the
Poincaré profiles of groups with polynomial growth, and of hyperbolic
spaces, where we deduce a striking connection between these profiles and
conformal dimension. One application of our results is that there is a
collection of hyperbolic Coxeter groups, indexed by a countable dense subset of
$(1,\infty)$, such that $G_s$ does not coarsely embed into $G_t$ whenever
$s<t$.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,402 | NMR Study of the New Magnetic Superconductor CaK(Fe$_{0.951}$Ni$_{0.049}$)$_4$As$_4$: Microscopic Coexistence of Hedgehog Spin-vortex Crystal and Superconductivity | Coexistence of a new-type antiferromagnetic (AFM) state, the so-called
hedgehog spin-vortex crystal (SVC), and superconductivity (SC) is evidenced by
$^{75}$As nuclear magnetic resonance study on single-crystalline
CaK(Fe$_{0.951}$Ni$_{0.049}$)$_4$As$_4$. The hedgehog SVC order is clearly
demonstrated by the direct observation of the internal magnetic induction along
the $c$ axis at the As1 site (close to K) and a zero net internal magnetic
induction at the As2 site (close to Ca) below an AFM ordering temperature
$T_{\rm N}$ $\sim$ 52 K. The nuclear spin-lattice relaxation rate 1/$T_1$ shows
a distinct decrease below $T_{\rm c}$ $\sim$ 10 K, also providing unambiguous
evidence for the microscopic coexistence. Furthermore, based on the analysis of
the 1/$T_1$ data, the hedgehog SVC-type spin correlations are found to be
enhanced below $T$ $\sim$ 150 K in the paramagnetic state. These results
indicate that the hedgehog SVC-type spin correlations play an important role in the
appearance of SC in the new magnetic superconductor.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,403 | Light spanners for bounded treewidth graphs imply light spanners for $H$-minor-free graphs | Grigni and Hung~\cite{GH12} conjectured that $H$-minor-free graphs have
$(1+\epsilon)$-spanners that are light, that is, of weight $g(|H|,\epsilon)$
times the weight of the minimum spanning tree for some function $g$. This
conjecture implies an {\em efficient} polynomial-time approximation scheme
(PTAS) for the traveling salesperson problem in $H$-minor-free graphs; that is,
a PTAS whose running time is of the form $2^{f(\epsilon)}n^{O(1)}$ for some
function $f$. The state-of-the-art PTAS for TSP in $H$-minor-free graphs has
running time $n^{1/\epsilon^c}$. We take a further step toward proving this
conjecture by showing that if bounded treewidth graphs have light greedy
spanners, then the conjecture is true. We also prove that the greedy spanner of
a bounded pathwidth graph is light and discuss the possibility of extending our
proof to bounded treewidth graphs.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,404 | Modeling Daily Seasonality of Mexico City Ozone using Nonseparable Covariance Models on Circles Cross Time | Mexico City tracks ground-level ozone levels to assess compliance with
national ambient air quality standards and to prevent environmental health
emergencies. Ozone levels show distinct daily patterns that vary within the city
and over the course of the year. To model these data, we use covariance models over
space, circular time, and linear time. We review existing models and develop
new classes of nonseparable covariance models of this type, models appropriate
for quasi-periodic data collected at many locations. With these covariance
models, we use nearest-neighbor Gaussian processes to predict hourly ozone
levels at unobserved locations in April and May, the peak ozone season, to
infer compliance with Mexican air quality standards and to estimate respiratory
health risk associated with ozone. Predicted compliance with air quality
standards and estimated respiratory health risk vary greatly over space and
time. In some regions, we predict exceedance of national standards for more
than a third of the hours in April and May. On many days, we predict that
nearly all of Mexico City exceeds nationally legislated ozone thresholds at
least once. In peak regions, we estimate respiratory risk for ozone to be 55%
higher on average than the annual average risk and as much as 170% higher on
some days.
| 0 | 0 | 0 | 1 | 0 | 0 |
19,405 | Huygens-Fresnel Picture for Electron-Molecule Elastic Scattering | The elastic scattering cross sections for a slow electron by C$_2$ and H$_2$
molecules have been calculated within the framework of the non-overlapping
atomic potential model. For the amplitudes of the multiple electron scattering
by a target the wave function of the molecular continuum is represented as a
combination of a plane wave and two spherical waves generated by the centers of
atomic spheres. This wave function obeys the Huygens-Fresnel principle
according to which the electron wave scattering by a system of two centers is
accompanied by generation of two spherical waves; their interference creates a
diffraction pattern far from the target. Each of the Huygens waves, in turn, is
a superposition of the partial spherical waves with different orbital angular
momenta l and their projections m. The amplitudes of these partial waves are
defined by the corresponding phases of electron elastic scattering by an
isolated atomic potential. In numerical calculations the s- and p-phase shifts
are taken into account, so the number of interfering electron waves is equal to
eight: two are s-type waves and the remaining six are p-type waves with
different m values. The calculation of the scattering amplitudes in
closed form (rather than in the form of S-matrix expansion) is reduced to
solving a system of eight inhomogeneous algebraic equations. The differential
and total cross sections of electron scattering by fixed-in-space molecules and
randomly oriented ones have been calculated as well. We conclude by discussing
the special features of the S-matrix method for the case of arbitrary
non-spherical potentials.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,406 | When Point Process Meets RNNs: Predicting Fine-Grained User Interests with Mutual Behavioral Infectivity | Predicting fine-grained interests of users with temporal behavior is
important to personalization and information filtering applications. However,
existing interest prediction methods are incapable of capturing the subtle
degrees of user interest in particular items, and the internal time-varying
drifting attention of individuals has not yet been studied. Moreover, the
prediction process can also be affected by inter-personal influence, known as
behavioral mutual infectivity. Inspired by point processes for modeling
temporal event data, in this paper we present a deep prediction method based on two
recurrent neural networks (RNNs) to jointly model each user's continuous
browsing history and asynchronous event sequences in the context of inter-user
behavioral mutual infectivity. Our model is able to predict the fine-grained
interest of a user in a particular item and the corresponding timestamps at
which events occur. The proposed approach flexibly captures the dynamic
characteristics of event sequences by using the temporal point process to
model event data and by updating its intensity function in a timely manner via
RNNs. Furthermore, to improve the interpretability of the model, the attention
mechanism is introduced to emphasize both intra-personal and inter-personal
behavior influence over time. Experiments on real datasets demonstrate that our
model outperforms the state-of-the-art methods in fine-grained user interest
prediction.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,407 | Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer | Quantitative extraction of high-dimensional mineable data from medical images
is a process known as radiomics. Radiomics is foreseen as an essential
prognostic tool for cancer risk assessment and the quantification of
intratumoural heterogeneity. In this work, 1615 radiomic features (quantifying
tumour image intensity, shape, texture) extracted from pre-treatment FDG-PET
and CT images of 300 patients from four different cohorts were analyzed for the
risk assessment of locoregional recurrences (LR) and distant metastases (DM) in
head-and-neck cancer. Prediction models combining radiomic and clinical
variables were constructed via random forests and imbalance-adjustment
strategies using two of the four cohorts. Independent validation of the
prediction and prognostic performance of the models was carried out on the
other two cohorts (LR: AUC = 0.69 and CI = 0.67; DM: AUC = 0.86 and CI = 0.88).
Furthermore, the results obtained via Kaplan-Meier analysis demonstrated the
potential of radiomics for assessing the risk of specific tumour outcomes using
multiple stratification groups. This could have important clinical impact,
notably by allowing for a better personalization of chemo-radiation treatments
for head-and-neck cancer patients from different risk groups.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,408 | Normalization of zero-inflated data: An empirical analysis of a new indicator family and its use with altmetrics data | Recently, two new indicators (Equalized Mean-based Normalized Proportion
Cited, EMNPC; Mean-based Normalized Proportion Cited, MNPC) were proposed which
are intended for sparse scientometrics data. The indicators compare the
proportion of mentioned papers (e.g. on Facebook) of a unit (e.g., a researcher
or institution) with the proportion of mentioned papers in the corresponding
fields and publication years (the expected values). In this study, we propose a
third indicator (Mantel-Haenszel quotient, MHq) belonging to the same indicator
family. The MHq is based on the MH analysis - an established method in
statistics for the comparison of proportions. We test (using citations and
assessments by peers, i.e. F1000Prime recommendations) if the three indicators
can distinguish between different quality levels as defined on the basis of the
assessments by peers. Thus, we test their convergent validity. We find that the
indicator MHq is able to distinguish between the quality levels in most cases
while MNPC and EMNPC cannot. Since the MHq is shown in this study to be a
valid indicator, we apply it to six types of zero-inflated altmetrics data and
test whether different altmetrics sources are related to quality. The results
for the various altmetrics demonstrate that the relationship between altmetrics
(Wikipedia, Facebook, blogs, and news data) and assessments by peers is not as
strong as the relationship between citations and assessments by peers.
Actually, the relationship between citations and peer assessments is about two
to three times stronger than the association between altmetrics and assessments
by peers.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,409 | Gravity Formality | We show that Willwacher's cyclic formality theorem can be extended to
preserve natural Gravity operations on cyclic multivector fields and cyclic
multidifferential operators. We express this in terms of a homotopy Gravity
quasi-isomorphism with explicit local formulas. For this, we develop operadic
tools related to mixed complexes and cyclic homology and prove that the operad
$\mathsf M_\circlearrowleft$ of natural operations on cyclic operators is
formal and hence quasi-isomorphic to the Gravity operad.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,410 | On the Feasibility of Distinguishing Between Process Disturbances and Intrusions in Process Control Systems Using Multivariate Statistical Process Control | Process Control Systems (PCSs) are the operating core of Critical
Infrastructures (CIs). As such, anomaly detection has been an active research
field to ensure CI normal operation. Previous approaches have leveraged network
level data for anomaly detection, or have disregarded the existence of process
disturbances, thus opening the possibility of mislabelling disturbances as
attacks and vice versa. In this paper we present an anomaly detection and
diagnostic system based on Multivariate Statistical Process Control (MSPC),
that aims to distinguish between attacks and disturbances. To this end, we
extend traditional MSPC to monitor process level and controller level data. We
evaluate our approach using the Tennessee-Eastman process. Results show that
our approach can be used to distinguish disturbances from intrusions to a
certain extent, and we conclude that the proposed approach can be extended with
other sources of data to improve results.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,411 | Quantum hydrodynamic approximations to the finite temperature trapped Bose gases | For the quantum kinetic system modelling the Bose-Einstein condensate that
accounts for interactions between the condensate and excited atoms, we use the
Chapman-Enskog expansion to derive its hydrodynamic approximations, including
both Euler and Navier-Stokes approximations. The hydrodynamic approximations
describe not only the macroscopic behavior of the BEC but also its coupling
with the non-condensate, which agrees with Landau's two-fluid theory.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,412 | Game-theoretic dynamic investment model with incomplete information: futures contracts | Over the past few years, the futures market has been successfully developing
in the North-West region. Futures markets are one of the most effective and
liquid trading mechanisms. A large number of buyers are forced to
compete with each other and raise their prices. A large number of sellers make
them reduce prices. Thus, the gap between the prices of offers of buyers and
sellers is reduced due to high competition, and this is a good criterion for
the liquidity of the market. This high degree of liquidity contributed to the
fact that futures trading took such an important role in commerce and finance.
A multi-step, non-cooperative $n$-person game is formalized and studied.
| 0 | 0 | 0 | 0 | 0 | 1 |
19,413 | Deep Collaborative Learning for Visual Recognition | Deep neural networks are playing an important role in state-of-the-art visual
recognition. To represent high-level visual concepts, modern networks are
equipped with large convolutional layers, which use a large number of filters
and contribute significantly to model complexity. For example, more than half
of the weights of AlexNet are stored in the first fully-connected layer (4,096
filters).
We formulate the function of a convolutional layer as learning a large visual
vocabulary, and propose an alternative way, namely Deep Collaborative Learning
(DCL), to reduce the computational complexity. We replace a convolutional layer
with a two-stage DCL module, in which we first construct a couple of smaller
convolutional layers individually, and then fuse them at each spatial position
to consider feature co-occurrence. Mathematically, DCL can be explained as an
efficient way of learning compositional visual concepts, in which the
vocabulary size increases exponentially while the model complexity only
increases linearly. We evaluate DCL on a wide range of visual recognition
tasks, including a series of multi-digit number classification datasets, and
some generic image classification datasets such as SVHN, CIFAR and ILSVRC2012.
We apply DCL to several state-of-the-art network structures, improving
recognition accuracy while reducing the number of parameters (16.82% fewer
in AlexNet).
| 1 | 0 | 0 | 0 | 0 | 0 |
19,414 | Co-location Epidemic Tracking on London Public Transports Using Low Power Mobile Magnetometer | Public transport provides an ideal medium for the transmission of contagious
diseases. This paper introduces a novel idea to detect co-location of
people in such environments using just the ubiquitous geomagnetic field sensor
on the smartphone. Essentially, given that all passengers must share the same
journey between at least two consecutive stations, we have a long window to
match the user trajectory. Our idea was assessed through a painstaking survey of
over 150 kilometres of travelling distance, covering different parts of London,
using the overground trains, the underground tubes and the buses.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,415 | Waring-Goldbach Problem: One Square, Four Cubes and Higher Powers | Let $\mathcal{P}_r$ denote an almost-prime with at most $r$ prime factors,
counted according to multiplicity. In this paper, it is proved that, for
$12\leqslant b\leqslant 35$ and for every sufficiently large odd integer $N$,
the equation \begin{equation*}
N=x^2+p_1^3+p_2^3+p_3^3+p_4^3+p_5^4+p_6^b \end{equation*} is solvable with
$x$ being an almost-prime $\mathcal{P}_{r(b)}$ and the other variables primes,
where $r(b)$ is defined in the Theorem. This result constitutes an improvement
upon that of Lü and Mu.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,416 | Realizability of tropical canonical divisors | We use recent results by Bainbridge-Chen-Gendron-Grushevsky-Moeller on
compactifications of strata of abelian differentials to give a comprehensive
solution to the realizability problem for effective tropical canonical divisors
in equicharacteristic zero. Given a pair $(\Gamma, D)$ consisting of a stable
tropical curve $\Gamma$ and a divisor $D$ in the canonical linear system on
$\Gamma$, we give a purely combinatorial condition to decide whether there is a
smooth curve $X$ over a non-Archimedean field whose stable reduction has
$\Gamma$ as its dual tropical curve together with an effective canonical divisor
$K_X$ that specializes to $D$. Along the way, we develop a moduli-theoretic
framework to understand Baker's specialization of divisors from algebraic to
tropical curves as a natural toroidal tropicalization map in the sense of
Abramovich-Caporaso-Payne.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,417 | Causal inference for interfering units with cluster and population level treatment allocation programs | Interference arises when an individual's potential outcome depends on the
individual treatment level, but also on the treatment level of others. A common
assumption in the causal inference literature in the presence of interference
is partial interference, implying that the population can be partitioned into
clusters of individuals whose potential outcomes only depend on the treatment
of units within the same cluster. Previous literature has defined average
potential outcomes under counterfactual scenarios where treatments are randomly
allocated to units within a cluster. However, within clusters there may be
units that are more or less likely to receive treatment based on covariates or
neighbors' treatment. We define new estimands that describe average potential
outcomes for realistic counterfactual treatment allocation programs, extending
existing estimands to take into consideration the units' covariates and
dependence between units' treatment assignment. We further propose entirely new
estimands for population-level interventions over the collection of clusters,
which correspond in the motivating setting to regulations at the federal (vs.
cluster or regional) level. We discuss these estimands, propose unbiased
estimators and derive asymptotic results as the number of clusters grows.
Finally, we estimate effects in a comparative effectiveness study of power
plant emission reduction technologies on ambient ozone pollution.
| 0 | 0 | 1 | 1 | 0 | 0 |
19,418 | A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation | We present a unifying framework to solve several computer vision problems
with event cameras: motion, depth and optical flow estimation. The main idea of
our framework is to find the point trajectories on the image plane that are
best aligned with the event data by maximizing an objective function: the
contrast of an image of warped events. Our method implicitly handles data
association between the events, and therefore, does not rely on additional
appearance information about the scene. In addition to accurately recovering
the motion parameters of the problem, our framework produces motion-corrected
edge-like images with high dynamic range that can be used for further scene
analysis. The proposed method is not only simple, but more importantly, it is,
to the best of our knowledge, the first method that can be successfully applied
to such a diverse set of important vision tasks with event cameras.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,419 | TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing | Machine learning models are notoriously difficult to interpret and debug.
This is particularly true of neural networks. In this work, we introduce
automated software testing techniques for neural networks that are well-suited
to discovering errors which occur only for rare inputs. Specifically, we
develop coverage-guided fuzzing (CGF) methods for neural networks. In CGF,
random mutations of inputs to a neural network are guided by a coverage metric
toward the goal of satisfying user-specified constraints. We describe how fast
approximate nearest neighbor algorithms can provide this coverage metric. We
then discuss the application of CGF to the following goals: finding numerical
errors in trained neural networks, generating disagreements between neural
networks and quantized versions of those networks, and surfacing undesirable
behavior in character level language models. Finally, we release an open source
library called TensorFuzz that implements the described techniques.
| 0 | 0 | 0 | 1 | 0 | 0 |
19,420 | Inferring directed climatic interactions with renormalized partial directed coherence and directed partial correlation | Inferring interactions between processes promises deeper insight into
mechanisms underlying network phenomena. Renormalised partial directed
coherence (rPDC) is a frequency-domain representation of the concept of Granger
causality while directed partial correlation (DPC) is an alternative approach
for quantifying Granger causality in the time domain. Both methodologies have
been successfully applied to neurophysiological signals for detecting directed
relationships. This paper introduces their application to climatological time
series. We first discuss the application to ENSO -- Monsoon interaction, and
then apply the methodologies to the more challenging air-sea interaction in the
South Atlantic Convergence Zone (SACZ). While in the first case the results
obtained are fully consistent with present knowledge in climate modeling, in
the second case the results are, as expected, less clear, and to fully
elucidate the SACZ air-sea interaction, further investigations on the
specificity and sensitivity of these methodologies are needed.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,421 | Conditions for the invertibility of dual energy data | The Alvarez-Macovski method [Alvarez, R. E. and Macovski, A.,
"Energy-selective reconstructions in X-ray computerized tomography", Phys. Med.
Biol. (1976), 733--44] requires the inversion of the transformation from the
line integrals of the basis set coefficients to measurements with multiple
x-ray spectra. Analytical formulas for invertibility of the transformation from
two measurements to two line integrals are derived. It is found that
non-invertible systems have near zero Jacobian determinants on a nearly
straight line in the line integrals plane. Formulas are derived for the points
where the line crosses the axes, thus determining the line. Additional formulas
are derived for the values of the terms of the Jacobian determinant at the
endpoints of the line of non-invertibility. The formulas are applied to a set
of spectra including one suggested by Levine that is not invertible as well as
similar spectra that are invertible and voltage switched x-ray tube spectra
that are also invertible. An iterative inverse transformation algorithm
exhibits large errors with non-invertible spectra.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,422 | Examples of lattice-polarized K3 surfaces with automorphic discriminant, and Lorentzian Kac--Moody algebras | Using our results about Lorentzian Kac--Moody algebras and arithmetic mirror
symmetry, we give six series of examples of lattice-polarized K3 surfaces with
automorphic discriminant.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,423 | Quantum ferrofluid turbulence | We study the elementary characteristics of turbulence in a quantum ferrofluid
through the context of a dipolar Bose gas condensing from a highly
non-equilibrium thermal state. Our simulations reveal that the dipolar
interactions drive the emergence of polarized turbulence and density
corrugations. The superfluid vortex lines and density fluctuations adopt a
columnar or stratified configuration, depending on the sign of the dipolar
interactions, with the vortices tending to form in the low density regions to
minimize kinetic energy. When the interactions are dominantly dipolar, the
decay of vortex line length is enhanced, closely following a $t^{-3/2}$
behaviour. This system poses exciting prospects for realizing stratified
quantum turbulence and new levels of generating and controlling turbulence
using magnetic fields.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,424 | Estimation of the discontinuous leverage effect: Evidence from the NASDAQ order book | An extensive empirical literature documents a generally negative correlation,
named the "leverage effect," between asset returns and changes of volatility.
It is more challenging to establish such a return-volatility relationship for
jumps in high-frequency data. We propose new nonparametric methods to assess
and test for a discontinuous leverage effect --- i.e. a relation between
contemporaneous jumps in prices and volatility. The methods are robust to
market microstructure noise and build on a newly developed price-jump
localization and estimation procedure. Our empirical investigation of six years
of transaction data from 320 NASDAQ firms displays no unconditional negative
correlation between price and volatility cojumps. We show, however, that there
is a strong relation between price-volatility cojumps if one conditions on the
sign of price jumps and whether the price jumps are market-wide or
idiosyncratic. Firms' volatility levels strongly explain the cross-section of
discontinuous leverage while debt-to-equity ratios have no significant
explanatory power.
| 0 | 0 | 1 | 1 | 0 | 0 |
19,425 | Group representations that resist worst-case sampling | Motivated by expansion in Cayley graphs, we show that there exist infinitely
many groups $G$ with a nontrivial irreducible unitary representation whose
average over every set of $o(\log\log|G|)$ elements of $G$ has operator norm $1
- o(1)$. This answers a question of Lovett, Moore, and Russell, and strengthens
their negative answer to a question of Wigderson.
The construction is the affine group of $\mathbb{F}_p$ and uses the fact that
for every $A \subset \mathbb{F}_p\setminus\{0\}$, there is a set of size
$\exp(\exp(O(|A|)))$ that is almost invariant under both additive and
multiplicative translations by elements of $A$.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,426 | On the nearly smooth complex spaces | We introduce a class of normal complex spaces having only mild singularities
(close to quotient singularities) for which we generalize the notion of a
(analytic) fundamental class for an analytic cycle and also the notion of a
relative fundamental class for an analytic family of cycles. We also generalize
to these spaces the geometric intersection theory for analytic cycles with
rational positive coefficients and show that it behaves well with respect to
analytic families of cycles. We prove that this intersection theory has most of
the usual properties of the standard geometric intersection theory on complex
manifolds, but with the exception that the intersection cycle of two cycles
with positive integral coefficients that intersect properly may have rational
coefficients. AMS classification: 32C20, 32C25, 32C36.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,427 | Semi-supervised learning of hierarchical representations of molecules using neural message passing | With the rapid increase of compound databases available in medicinal and
material science, there is a growing need for learning representations of
molecules in a semi-supervised manner. In this paper, we propose an
unsupervised hierarchical feature extraction algorithm for molecules (or more
generally, graph-structured objects with a fixed number of types of nodes and
edges), which is applicable to both unsupervised and semi-supervised tasks. Our
method extends the recently proposed Paragraph Vector algorithm and incorporates
neural message passing to obtain hierarchical representations of subgraphs. We
applied our method to an unsupervised task and demonstrated that it outperforms
existing methods on several benchmark datasets. We also experimentally showed
that semi-supervised learning enhances predictive performance compared with
supervised learning using labeled molecules only.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,428 | Tracy-Widom at each edge of real covariance and MANOVA estimators | We study the sample covariance matrix for real-valued data with general
population covariance, as well as MANOVA-type covariance estimators in variance
components models under null hypotheses of global sphericity. In the limit as
matrix dimensions increase proportionally, the asymptotic spectra of such
estimators may have multiple disjoint intervals of support, possibly
intersecting the negative half line. We show that the distribution of the
extremal eigenvalue at each regular edge of the support has a GOE Tracy-Widom
limit. Our proof extends a comparison argument of Ji Oon Lee and Kevin
Schnelli, replacing a continuous Green function flow by a discrete Lindeberg
swapping scheme.
| 0 | 0 | 1 | 1 | 0 | 0 |
19,429 | Multi-stage Neural Networks with Single-sided Classifiers for False Positive Reduction and its Evaluation using Lung X-ray CT Images | Lung nodule classification is a class imbalanced problem because nodules are
found with much lower frequency than non-nodules. In the class imbalanced
problem, conventional classifiers tend to be overwhelmed by the majority class
and ignore the minority class. We therefore propose cascaded convolutional
neural networks to cope with the class imbalanced problem. In the proposed
approach, multi-stage convolutional neural networks that perform as
single-sided classifiers filter out obvious non-nodules. Successively, a
convolutional neural network trained with a balanced data set calculates nodule
probabilities. The proposed method achieved sensitivities of 92.4% and 94.5%
at 4 and 8 false positives per scan, respectively, in Free Receiver Operating
Characteristic (FROC) curve analysis.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,430 | Monochromatic knots and other unusual electromagnetic disturbances: light localised in 3D | We introduce and examine a collection of unusual electromagnetic
disturbances. Each of these is an exact, monochromatic solution of Maxwell's
equations in free space with looped electric and magnetic field lines of finite
extent and a localised appearance in all three spatial dimensions. Included are
the first explicit examples of monochromatic electromagnetic knots. We also
consider the generation of our unusual electromagnetic disturbances in the
laboratory, at both low and high frequencies, and highlight possible directions
for future research, including the use of unusual electromagnetic disturbances
as the basis of a new form of three-dimensional display.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,431 | Zero-cycles of degree one on Skorobogatov's bielliptic surface | Skorobogatov constructed a bielliptic surface which is a counterexample to
the Hasse principle not explained by the Brauer-Manin obstruction. We show that
this surface has a $0$-cycle of degree 1, as predicted by a conjecture of
Colliot-Thélène.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,432 | Locally Nameless Permutation Types | We define "Locally Nameless Permutation Types", which fuse permutation types
as used in Nominal Isabelle with the locally nameless representation. We show
that this combination is particularly useful when formalizing programming
languages where bound names may become free during execution ("extrusion"),
common in process calculi. It inherits the generic definition of permutations
and support, and associated lemmas, from the Nominal approach, and the ability
to stay close to pencil-and-paper proofs from the locally nameless approach. We
explain how to use cofinite quantification in this setting, show why reasoning
about renaming is more important here than in languages without extrusion, and
provide results about infinite support, necessary when reasoning about
countable choice.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,433 | On dimension-free variational inequalities for averaging operators in $\mathbb R^d$ | We study dimension-free $L^p$ inequalities for $r$-variations of the
Hardy--Littlewood averaging operators defined over symmetric convex bodies in
$\mathbb R^d$.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,434 | One-particle density matrix of trapped one-dimensional impenetrable bosons from conformal invariance | The one-particle density matrix of the one-dimensional Tonks-Girardeau gas
with inhomogeneous density profile is calculated, thanks to a recent
observation that relates this system to a two-dimensional conformal field
theory in curved space. The result is asymptotically exact in the limit of
large particle density and small density variation, and holds for arbitrary
trapping potentials. In the particular case of a harmonic trap, we recover a
formula obtained by Forrester et al. [Phys. Rev. A 67, 043607 (2003)] from a
different method.
| 0 | 1 | 1 | 0 | 0 | 0 |
19,435 | Image Analysis Using a Dual-Tree $M$-Band Wavelet Transform | We propose a 2D generalization to the $M$-band case of the dual-tree
decomposition structure (initially proposed by N. Kingsbury and further
investigated by I. Selesnick) based on a Hilbert pair of wavelets. We
particularly address (\textit{i}) the construction of the dual basis and
(\textit{ii}) the resulting directional analysis. We also revisit the necessary
pre-processing stage in the $M$-band case. While several reconstructions are
possible because of the redundancy of the representation, we propose a new
optimal signal reconstruction technique, which minimizes potential estimation
errors. The effectiveness of the proposed $M$-band decomposition is
demonstrated via denoising comparisons on several image types (natural,
texture, seismics), with various $M$-band wavelets and thresholding strategies.
Significant improvements in terms of both overall noise reduction and direction
preservation are observed.
| 1 | 1 | 1 | 0 | 0 | 0 |
19,436 | Nonlinear Cauchy-Riemann Equations and Liouville Equation For Conformal Metrics | We introduce the Nonlinear Cauchy-Riemann equations as Bäcklund
transformations for several nonlinear and linear partial differential
equations. From these equations we treat in details the Laplace and the
Liouville equations by deriving general solution for the nonlinear Liouville
equation. By Möbius transformation we relate solutions for the Poincaré
model of hyperbolic geometry, the Klein model in the half-plane, and the
pseudo-sphere. Conformal form of the constant curvature metrics in these
geometries, stereographic projections and special solutions are discussed. Then
we introduce the hyperbolic analog of the Riemann sphere, which we call the
Riemann pseudosphere. We identify point at infinity on this pseudosphere and
show that it can be used in complex analysis as an alternative to usual Riemann
sphere to extend the complex plane. Interpretations of symmetric and antipodal
points on both the Riemann sphere and the Riemann pseudosphere are given. By
Möbius transformations and homogeneous coordinates, the most general solution
of the Liouville equation, as discussed by Crowdy, is derived.
| 0 | 1 | 1 | 0 | 0 | 0 |
19,437 | SmartPaste: Learning to Adapt Source Code | Deep Neural Networks have been shown to succeed at a range of natural
language tasks such as machine translation and text summarization. While tasks
on source code (i.e., formal languages) have been considered recently, most work
in this area does not attempt to capitalize on the unique opportunities offered
by its known syntax and structure. In this work, we introduce SmartPaste, a
first task that requires the use of such information. The task is a variant of
the program repair problem that requires adapting a given (pasted) snippet of code
to surrounding, existing source code. As first solutions, we design a set of
deep neural models that learn to represent the context of each variable
location and variable usage in a data flow-sensitive way. Our evaluation
suggests that our models can learn to solve the SmartPaste task in many cases,
achieving 58.6% accuracy, while learning meaningful representations of variable
usages.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,438 | An Outcome Model Approach to Translating a Randomized Controlled Trial Results to a Target Population | Participants enrolled into randomized controlled trials (RCTs) often do not
reflect real-world populations. Previous research in how best to translate RCT
results to target populations has focused on weighting RCT data to look like
the target data. Simulation work, however, has suggested that an outcome model
approach may be preferable. Here we describe such an approach using source data
from the 2x2 factorial NAVIGATOR trial which evaluated the impact of valsartan
and nateglinide on cardiovascular outcomes and new-onset diabetes in a
pre-diabetic population. Our target data consisted of people with pre-diabetes
serviced at our institution. We used Random Survival Forests to develop
separate outcome models for each of the 4 treatments, estimating the 5-year
risk difference for progression to diabetes and estimated the treatment effect
in our local patient populations, as well as sub-populations, and the results
compared to the traditional weighting approach. Our models suggested that the
treatment effect for valsartan in our patient population was the same as in the
trial, whereas for nateglinide the treatment effect was stronger than observed in
the original trial. Our effect estimates were more efficient than the weighting
approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
19,439 | Influence of parameterized small-scale gravity waves on the migrating diurnal tide in Earth's thermosphere | Effects of subgrid-scale gravity waves (GWs) on the diurnal migrating tides
are investigated from the mesosphere to the upper thermosphere for September
equinox conditions, using a general circulation model coupled with the extended
spectral nonlinear GW parameterization of Yiğit et al. (2008). Simulations
with GW effects cut-off above the turbopause and included in the entire
thermosphere have been conducted. GWs appreciably impact the mean circulation
and cool the thermosphere down by up to 12-18%. GWs significantly affect the
winds modulated by the diurnal migrating tide, in particular in the
low-latitude mesosphere and lower thermosphere and in the high-latitude
thermosphere. These effects depend on the mutual correlation of the diurnal
phases of the GW forcing and tides: GWs can either enhance or reduce the tidal
amplitude. In the low-latitude MLT, the correlation between the direction of
the deposited GW momentum and the tidal phase is positive due to propagation of
a broad spectrum of GW harmonics through the alternating winds. In the Northern
Hemisphere high-latitude thermosphere, GWs act against the tide due to an
anti-correlation of tidal wind and GW momentum, while in the Southern
high-latitudes they weakly enhance the tidal amplitude via a combination of a
partial correlation of phases and GW-induced changes of the circulation. The
variable nature of GW effects on the thermal tide can be captured in GCMs
provided that a GW parameterization (1) considers a broad spectrum of
harmonics, (2) properly describes their propagation, and (3) correctly accounts
for the physics of wave breaking/saturation.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,440 | A Statistical Model for Ideal Team Selection for A National Cricket Squad | Cricket is a game played between two teams, each consisting of eleven
players. Nowadays cricket is becoming more and more popular in Bangladesh and
other South Asian countries. Before a match, people are very enthusiastic about
team squads, and "Which players are playing today?" and "How well will Mr. X
perform today?" are the million-dollar questions before a big match. This
article will propose a method using statistical data analysis for recommending
a national team squad. Recent match scorecards for domestic and international
matches played by a specific team in recent years are used to recommend the
ideal squad. Impact point or rating points of all players in different
conditions are calculated and the best ones from different categories are
chosen to form optimal line-ups. To evaluate the efficiency of the impact-point
system, it is tested with real-time match data to assess its accuracy.
| 0 | 0 | 0 | 1 | 0 | 0 |
19,441 | The geometry of some generalized affine Springer fibers | We study basic geometric properties of some group analogue of affine Springer
fibers and compare with the classical Lie algebra affine Springer fibers. The
main purpose is to formulate a conjecture that relates the number of
irreducible components of such varieties for a reductive group $G$ to certain
weight multiplicities defined by the Langlands dual group $\hat{G}$. We prove
our conjecture in the case of unramified conjugacy classes.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,442 | Modelling of a Permanent Magnet Synchronous Machine Using Isogeometric Analysis | Isogeometric analysis (IGA) is used to simulate a permanent magnet
synchronous machine. IGA uses non-uniform rational B-splines to parametrise the
domain and to approximate the solution space, thus allowing for the exact
description of the geometries even on the coarsest level of mesh refinement.
Given the properties of the isogeometric basis functions, this choice
guarantees a higher accuracy than the classical finite element method.
For dealing with the different stator and rotor topologies, the domain is
split into two non-overlapping parts on which Maxwell's equations are solved
independently in the context of a classical Dirichlet-to-Neumann domain
decomposition scheme. The results show good agreement with the ones obtained by
the classical finite element approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,443 | Learning and Trust in Auction Markets | In this paper, we study behavior of bidders in an experimental launch of a
new advertising auction platform by Zillow, as Zillow switched from negotiated
contracts to using auctions in several geographically isolated markets. A
unique feature of this experiment is that the bidders in this market are real
estate agents that bid on their own behalf, not using third-party
intermediaries. To help bidders, Zillow also provided a recommendation tool
that suggested a bid for each bidder.
Our main focus in this paper is on the decisions of bidders whether or not to
adopt the platform-provided bid recommendation. We observe that a significant
proportion of bidders do not use the recommended bid. Using the bid history of
the agents we infer their value, and compare the agents' regret with their
actual bidding history with results they would have obtained following the
recommendation. We find that for half of the agents not following the
recommendation, the increased effort of experimenting with alternate bids
results in increased regret, i.e., they get decreased net value out of the
system. The proportion of agents not following the recommendation slowly
declines as markets mature, but it remains large in most markets that we
observe. We argue that the main reason for this phenomenon is the lack of trust
in the platform-provided tool.
Our work provides an empirical insight into possible design choices for
auction-based online advertising platforms. While search advertising platforms
(such as Google or Bing) allow bidders to submit bids on their own, many
display advertising platforms (such as Facebook) optimize bids on bidders'
behalf and eliminate the need for bids. Our empirical analysis shows that the
latter approach is preferred for markets where bidders are individuals, who
don't have access to third party tools, and who may question the fairness of
platform-provided suggestions.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,444 | Like trainer, like bot? Inheritance of bias in algorithmic content moderation | The internet has become a central medium through which `networked publics'
express their opinions and engage in debate. Offensive comments and personal
attacks can inhibit participation in these spaces. Automated content moderation
aims to overcome this problem using machine learning classifiers trained on
large corpora of texts manually annotated for offence. While such systems could
help encourage more civil debate, they must navigate inherently normatively
contestable boundaries, and are subject to the idiosyncratic norms of the human
raters who provide the training data. An important objective for platforms
implementing such measures might be to ensure that they are not unduly biased
towards or against particular norms of offence. This paper provides some
exploratory methods by which the normative biases of algorithmic content
moderation systems can be measured, by way of a case study using an existing
dataset of comments labelled for offence. We train classifiers on comments
labelled by different demographic subsets (men and women) to understand how
differences in conceptions of offence between these groups might affect the
performance of the resulting models on various test sets. We conclude by
discussing some of the ethical choices facing the implementers of algorithmic
moderation systems, given various desired levels of diversity of viewpoints
amongst discussion participants.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,445 | Holographic Neural Architectures | Representation learning is at the heart of what makes deep learning
effective. In this work, we introduce a new framework for representation
learning that we call "Holographic Neural Architectures" (HNAs). In the same
way that an observer can experience the 3D structure of a holographed object by
looking at its hologram from several angles, HNAs derive Holographic
Representations from the training set. These representations can then be
explored by moving along a continuous bounded single dimension. We show that
HNAs can be used to make generative networks, state-of-the-art regression
models and that they are inherently highly resistant to noise. Finally, we
argue that because of their denoising abilities and their capacity to
generalize well from very few examples, models based upon HNAs are particularly
well suited for biological applications where training examples are rare or
noisy.
| 0 | 0 | 0 | 1 | 1 | 0 |
19,446 | An introduction to Topological Data Analysis: fundamental and practical aspects for data scientists | Topological Data Analysis (tda) is a recent and fast growing field providing a
set of new topological and geometric tools to infer relevant features for
possibly complex data. This paper is a brief introduction, through a few
selected topics, to basic fundamental and practical aspects of tda for
non-experts. 1 Introduction and motivation Topological Data Analysis (tda) is a
recent field that emerged from various works in applied (algebraic) topology and
computational geometry during the first decade of the century. Although one can
trace back geometric approaches for data analysis quite far in the past, tda
really started as a field with the pioneering works of Edelsbrunner et al. (2002)
and Zomorodian and Carlsson (2005) in persistent homology and was popularized
in a landmark paper in 2009 Carlsson (2009). tda is mainly motivated by the
idea that topology and geometry provide a powerful approach to infer robust
qualitative, and sometimes quantitative, information about the structure of
data; see, e.g., Chazal (2017). tda aims at providing well-founded mathematical,
statistical and algorithmic methods to infer, analyze and exploit the complex
topological and geometric structures underlying data that are often represented
as point clouds in Euclidean or more general metric spaces. During the last few
years, a considerable effort has been made to provide robust and efficient data
structures and algorithms for tda that are now implemented and available and
easy to use through standard libraries such as the Gudhi library (C++ and
Python) Maria et al. (2014) and its R software interface Fasy et al. (2014a).
Although it is still rapidly evolving, tda now provides a set of mature and
efficient tools that can be used in combination or complementary to other data
science tools. The tda pipeline. tda has recently known developments in various
directions and application fields. There now exist a large variety of methods
inspired by topological and geometric approaches. Providing a complete overview
of all these existing approaches is beyond the scope of this introductory
survey. However, most of them rely on the following basic and standard pipeline
that will serve as the backbone of this paper: 1. The input is assumed to be a
finite set of points coming with a notion of distance or similarity between them.
This distance can be induced by the metric in the ambient space (e.g. the
Euclidean metric when the data are embedded in $\mathbb{R}^d$) or come as an intrinsic
metric defined by a pairwise distance matrix. The definition of the metric on the
data is usually given as an input or guided by the application. It is however
important to notice that the choice of the metric may be critical to reveal
interesting topological and geometric features of the data.
| 1 | 0 | 1 | 1 | 0 | 0 |
19,447 | Machine Learning by Two-Dimensional Hierarchical Tensor Networks: A Quantum Information Theoretic Perspective on Deep Architectures | The resemblance between the methods used in quantum-many body physics and in
machine learning has drawn considerable attention. In particular, tensor
networks (TNs) and deep learning architectures bear striking similarities to
the extent that TNs can be used for machine learning. Previous results used
one-dimensional TNs in image recognition, showing limited scalability and
flexibilities. In this work, we train two-dimensional hierarchical TNs to solve
image recognition problems, using a training algorithm derived from the
multipartite entanglement renormalization ansatz. This approach introduces
novel mathematical connections among quantum many-body physics, quantum
information theory, and machine learning. While keeping the TN unitary in the
training phase, TN states are defined, which optimally encode classes of images
into quantum many-body states. We study the quantum features of the TN states,
including quantum entanglement and fidelity. We find these quantities could be
novel properties that characterize the image classes, as well as the machine
learning tasks. Our work could contribute to the research on
identifying/modeling quantum artificial intelligences.
| 0 | 1 | 0 | 1 | 0 | 0 |
19,448 | Hierarchical Imitation and Reinforcement Learning | We study how to effectively leverage expert feedback to learn sequential
decision-making policies. We focus on problems with sparse rewards and long
time horizons, which typically pose significant challenges in reinforcement
learning. We propose an algorithmic framework, called hierarchical guidance,
that leverages the hierarchical structure of the underlying problem to
integrate different modes of expert interaction. Our framework can incorporate
different combinations of imitation learning (IL) and reinforcement learning
(RL) at different levels, leading to dramatic reductions in both expert effort
and cost of exploration. Using long-horizon benchmarks, including Montezuma's
Revenge, we demonstrate that our approach can learn significantly faster than
hierarchical RL, and be significantly more label-efficient than standard IL. We
also theoretically analyze labeling cost for certain instantiations of our
framework.
| 0 | 0 | 0 | 1 | 0 | 0 |
19,449 | Social learning in a simple task allocation game | We investigate the effects of social interactions in task allocation using
Evolutionary Game Theory (EGT). We propose a simple task-allocation game and
study how different learning mechanisms can give rise to specialised and
non-specialised colonies under different ecological conditions. By combining
agent-based simulations and adaptive dynamics we show that social learning can
result in colonies of generalists or specialists, depending on ecological
parameters. Agent-based simulations further show that learning dynamics play a
crucial role in task allocation. In particular, introspective individual
learning readily favours the emergence of specialists, while a process
resembling task recruitment favours the emergence of generalists.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,450 | Guided projections for analysing the structure of high-dimensional data | A powerful data transformation method named guided projections is proposed,
creating new possibilities to reveal the group structure of high-dimensional
data in the presence of noise variables. Utilising projections onto a space
spanned by a selection of a small number of observations allows measuring the
similarity of other observations to the selection based on orthogonal and score
distances. Observations are iteratively exchanged from the selection creating a
non-random sequence of projections which we call guided projections. In
contrast to conventional projection pursuit methods, which typically identify a
low-dimensional projection revealing some interesting features contained in the
data, guided projections generate a series of projections that serve as a basis
not just for diagnostic plots but to directly investigate the group structure
in data. Based on simulated data we identify the strengths and limitations of
guided projections in comparison to commonly employed data transformation
methods. We further show the relevance of the transformation by applying it to
real-world data sets.
| 0 | 0 | 0 | 1 | 0 | 0 |
19,451 | Limits of Predictability of Cascading Overload Failures in Spatially-Embedded Networks with Distributed Flows | Cascading failures are a critical vulnerability of complex information or
infrastructure networks. Here we investigate the properties of load-based
cascading failures in real and synthetic spatially-embedded network structures,
and propose mitigation strategies to reduce the severity of damages caused by
such failures. We introduce a stochastic method for optimal heterogeneous
distribution of resources (node capacities) subject to a fixed total cost.
Additionally, we design and compare the performance of networks with N-stable
and (N-1)-stable network-capacity allocations by triggering cascades using
various real-world node-attack and node-failure scenarios. We show that failure
mitigation through increased node protection can be effectively achieved
against single node failures. However, mitigating against multiple node
failures is much more difficult due to the combinatorial increase in possible
failures. We analyze the robustness of the system with increasing protection,
and find that a critical tolerance exists at which the system undergoes a phase
transition, and above which the network almost completely survives an attack.
Moreover, we show that cascade-size distributions measured in this region
exhibit a power-law decay. Finally, we find a strong correlation between
cascade sizes induced by individual nodes and sets of nodes. We also show that
network topology alone is a weak factor in determining the progression of
cascading failures.
| 1 | 1 | 0 | 0 | 0 | 0 |
19,452 | Strong electron-hole symmetric Rashba spin-orbit coupling in graphene/monolayer transition metal dichalcogenide heterostructures | Despite its extremely weak intrinsic spin-orbit coupling (SOC), graphene has
been shown to acquire considerable SOC by proximity coupling with exfoliated
transition metal dichalcogenides (TMDs). Here we demonstrate strong induced
Rashba SOC in graphene that is proximity coupled to a monolayer TMD film, MoS2
or WSe2, grown by chemical vapor deposition with drastically different Fermi
level positions. Graphene/TMD heterostructures are fabricated with a
pickup-transfer technique utilizing hexagonal boron nitride, which serves as a
flat template to promote intimate contact and therefore a strong interfacial
interaction between TMD and graphene as evidenced by quenching of the TMD
photoluminescence. We observe strong induced graphene SOC that manifests itself
in a pronounced weak anti-localization (WAL) effect in the graphene
magnetoconductance. The spin relaxation rate extracted from the WAL analysis
varies linearly with the momentum scattering time and is independent of the
carrier type. This indicates a dominantly Dyakonov-Perel spin relaxation
mechanism caused by the induced Rashba SOC. Our analysis yields a Rashba SOC
energy of ~1.5 meV in graphene/WSe2 and ~0.9 meV in graphene/MoS2,
respectively. The nearly electron-hole symmetric nature of the induced Rashba
SOC provides a clue to possible underlying SOC mechanisms.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,453 | Robust Statistics for Image Deconvolution | We present a blind multiframe image-deconvolution method based on robust
statistics. The usual shortcomings of iterative optimization of the likelihood
function are alleviated by minimizing the M-scale of the residuals, which
achieves more uniform convergence across the image. We focus on the
deconvolution of astronomical images, which are among the most challenging due
to their huge dynamic ranges and the frequent presence of large noise-dominated
regions in the images. We show that high-quality image reconstruction is
possible even in super-resolution and without the use of traditional
regularization terms. Using a robust $\rho$-function is straightforward to
implement in a streaming setting and, hence, our method is applicable to the
large volumes of astronomy images. The power of our method is demonstrated on
observations from the Sloan Digital Sky Survey (Stripe 82) and we briefly
discuss the feasibility of a pipeline based on Graphical Processing Units for
the next generation of telescope surveys.
| 1 | 1 | 0 | 0 | 0 | 0 |
19,454 | Nash and Wardrop equilibria in aggregative games with coupling constraints | We consider the framework of aggregative games, in which the cost function of
each agent depends on his own strategy and on the average population strategy.
As first contribution, we investigate the relations between the concepts of
Nash and Wardrop equilibrium. By exploiting a characterization of the two
equilibria as solutions of variational inequalities, we bound their distance
with a decreasing function of the population size. As second contribution, we
propose two decentralized algorithms that converge to such equilibria and are
capable of coping with constraints coupling the strategies of different agents.
Finally, we study the applications of charging of electric vehicles and of
route choice on a road network.
| 1 | 0 | 1 | 0 | 0 | 0 |
19,455 | Magnetoresistance in the superconducting state at the (111) LaAlO$_3$/SrTiO$_3$ interface | Condensed matter systems that simultaneously exhibit superconductivity and
ferromagnetism are rare due to the antagonistic relationship between conventional
spin-singlet superconductivity and ferromagnetic order. In materials in which
superconductivity and magnetic order are known to coexist (such as some
heavy-fermion materials), the superconductivity is thought to be of an
unconventional nature. Recently, the conducting gas that lives at the interface
between the perovskite band insulators LaAlO$_3$ (LAO) and SrTiO$_3$ (STO) has
also been shown to host both superconductivity and magnetism. Most previous
research has focused on LAO/STO samples in which the interface is in the (001)
crystal plane. Relatively little work has focused on the (111) crystal
orientation, which has hexagonal symmetry at the interface, and has been
predicted to have potentially interesting topological properties, including
unconventional superconducting pairing states. Here we report measurements of
the magnetoresistance of (111) LAO/STO heterostructures at temperatures at
which they are also superconducting. As with the (001) structures, the
magnetoresistance is hysteretic, indicating the coexistence of magnetism and
superconductivity, but in addition, we find that this magnetoresistance is
anisotropic. Such an anisotropic response is completely unexpected in the
superconducting state, and suggests that (111) LAO/STO heterostructures may
support unconventional superconductivity.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,456 | Variational autoencoders for tissue heterogeneity exploration from (almost) no preprocessed mass spectrometry imaging data | The paper presents the application of Variational Autoencoders (VAE) for data
dimensionality reduction and explorative analysis of mass spectrometry imaging
data (MSI). The results confirm that VAEs are capable of detecting the patterns
associated with the different tissue sub-types, with better performance than
standard approaches.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,457 | Learning General Latent-Variable Graphical Models with Predictive Belief Propagation and Hilbert Space Embeddings | In this paper, we propose a new algorithm for learning general
latent-variable probabilistic graphical models using the techniques of
predictive state representation, instrumental variable regression, and
reproducing-kernel Hilbert space embeddings of distributions. Under this new
learning framework, we first convert latent-variable graphical models into
corresponding latent-variable junction trees, and then reduce the hard
parameter learning problem into a pipeline of supervised learning problems,
whose results will then be used to perform predictive belief propagation over
the latent junction tree during the actual inference procedure. We then give
proofs of our algorithm's correctness, and demonstrate its good performance in
experiments on one synthetic dataset and two real-world tasks from
computational biology and computer vision - classifying DNA splice junctions
and recognizing human actions in videos.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,458 | Can interacting dark energy solve the $H_0$ tension? | The answer is Yes! We indeed find that interacting dark energy can alleviate
the current tension on the value of the Hubble constant $H_0$ between the
Cosmic Microwave Background anisotropies constraints obtained from the Planck
satellite and the recent direct measurements reported by Riess et al. 2016. The
combination of these two datasets points towards an evidence for a non-zero
dark matter-dark energy coupling $\xi$ at more than two standard deviations,
with $\xi=-0.26_{-0.12}^{+0.16}$ at $95\%$ CL. However the $H_0$ tension is
better solved when the equation of state of the interacting dark energy
component is allowed to freely vary, with a phantom-like equation of state
$w=-1.184\pm0.064$ (at $68 \%$ CL), ruling out the pure cosmological constant
case, $w=-1$, again at more than two standard deviations. When Planck data are
combined with external datasets, such as BAO, JLA Supernovae Ia luminosity
distances, cosmic shear or lensing data, we find good consistency with the
cosmological constant scenario and no compelling evidence for a dark
matter-dark energy coupling.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,459 | Fast Switching Dual Fabry-Perot Cavity Optical Refractometry - Methodologies for Accurate Assessment of Gas Density | Dual Fabry-Perot cavity based optical refractometry (DFPC-OR) has a high
potential for assessments of gas density. However, drifts of the FP cavity
often limit its performance. We show that, by using two narrow-linewidth
fiber lasers locked to two high-finesse cavities together with Allan-Werle
plots, drift-free DFPC-OR can be obtained for short measurement times (for
which the drifts of the cavity can be disregarded). Based on this, a novel strategy,
termed fast switching DFPC-OR (FS-DFPC-OR), is presented. A set of novel
methodologies for assessment of both gas density and flow rates (in particular
from small leaks) that are not restricted by the conventional limitations
imposed by the drifts of the cavity is presented. The methodologies deal with
assessments in both open and closed (finite-sized) compartments. They
circumvent the problem with volumetric expansion, i.e. that the gas density in
a measurement cavity is not the same as that in the closed external compartment
that should be assessed, by performing a pair of measurements in rapid
succession; the first one serves the purpose of assessing the density of the
gas that has been transferred into the measurement cavity by the gas
equilibration process, while the second is used to automatically calibrate the
system with respect to the relative volumes of the measurement cavity and the
external compartment. The methodologies for assessments of leak rates comprise
triple cavity evacuation assessments, comprising two measurements performed in
rapid succession, supplemented by a third measurement a certain time thereafter.
A clear explanation of why the technique has such a small temperature
dependence is given. It is concluded that FS-DFPC-OR constitutes a novel
strategy that can be used for precise and accurate assessment of gas number
density and gas flows under a variety of conditions, in particular
non-temperature stabilized ones.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,460 | A Large Self-Annotated Corpus for Sarcasm | We introduce the Self-Annotated Reddit Corpus (SARC), a large corpus for
sarcasm research and for training and evaluating systems for sarcasm detection.
The corpus has 1.3 million sarcastic statements -- 10 times more than any
previous dataset -- and many times more instances of non-sarcastic statements,
allowing for learning in both balanced and unbalanced label regimes. Each
statement is furthermore self-annotated -- sarcasm is labeled by the author,
not an independent annotator -- and provided with user, topic, and conversation
context. We evaluate the corpus for accuracy, construct benchmarks for sarcasm
detection, and evaluate baseline methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,461 | Aerosol properties in the atmospheres of extrasolar giant planets | We use a model of aerosol microphysics to investigate the impact of
high-altitude photochemical aerosols on the transmission spectra and
atmospheric properties of close-in exoplanets, such as HD209458b and HD189733b.
The results depend strongly on the temperature profiles in the middle and upper
atmosphere, which are poorly understood. Nevertheless, our model of HD189733b,
based on the most recently inferred temperature profiles, produces an aerosol
distribution that matches the observed transmission spectrum. We argue that the
hotter temperature of HD209458b inhibits the production of high-altitude
aerosols and leads to the appearance of a more clear atmosphere than on
HD189733b. The aerosol distribution also depends on the particle composition,
the photochemical production, and the atmospheric mixing. Due to degeneracies
among these inputs, current data cannot constrain the aerosol properties in
detail. Instead, our work highlights the role of different factors in
controlling the aerosol distribution that will prove useful in understanding
different observations, including those from future missions. For the
atmospheric mixing efficiency suggested by general circulation models (GCMs) we
find that aerosol particles are small ($\sim$nm) and probably spherical. We
further conclude that composition based on complex hydrocarbons (soots) is the
most likely candidate to survive the high temperatures in hot Jupiter
atmospheres. Such particles would have a significant impact on the energy
balance of HD189733b's atmosphere and should be incorporated in future studies
of atmospheric structure. We also evaluate the contribution of external sources
in the photochemical aerosol formation and find that their spectral signature
is not consistent with observations.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,462 | Influence of random opinion change in complex networks | Opinion formation in the population has attracted extensive research
interest. Various models have been introduced and studied, including the ones
with individuals' free will allowing them to change their opinions. Such
models, however, have not taken into account the fact that individuals with
different opinions may have different levels of loyalty, and consequently,
different probabilities of changing their opinions. In this work, we study how
the non-uniform distribution of the opinion-changing probability may affect
the final state of the opinion distribution. By simulating a few different
cases with symmetric and asymmetric non-uniform patterns of opinion-changing
probabilities, we demonstrate the significant effects that the
different loyalty levels of different opinions have on the final state of the
opinion distribution.
| 1 | 1 | 0 | 0 | 0 | 0 |
19,463 | We Are Not Your Real Parents: Telling Causal from Confounded using MDL | Given data over variables $(X_1,...,X_m, Y)$ we consider the problem of
finding out whether $X$ jointly causes $Y$ or whether they are all confounded
by an unobserved latent variable $Z$. To do so, we take an
information-theoretic approach based on Kolmogorov complexity. In a nutshell,
we follow the postulate that first encoding the true cause, and then the
effects given that cause, results in a shorter description than any other
encoding of the observed variables.
The ideal score is not computable, and hence we have to approximate it. We
propose to do so using the Minimum Description Length (MDL) principle. We
compare the MDL scores under the models where $X$ causes $Y$ and where there
exists a latent variable $Z$ confounding both $X$ and $Y$, and show that our scores
are consistent. To find potential confounders we propose using latent factor
modeling, in particular, probabilistic PCA (PPCA).
Empirical evaluation on both synthetic and real-world data shows that our
method, CoCa, performs very well -- even when the true generating process of
the data is far from the assumptions made by the models we use. Moreover, it is
robust as its accuracy goes hand in hand with its confidence.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,464 | Evolution of eccentricity and inclination of hot protoplanets embedded in radiative discs | We study the evolution of the eccentricity and inclination of protoplanetary
embryos and low-mass protoplanets (from a fraction of an Earth mass to a few
Earth masses) embedded in a protoplanetary disc, by means of three dimensional
hydrodynamics calculations with radiative transfer in the diffusion limit. When
the protoplanets radiate into the surrounding disc the energy released by the
accretion of solids, their eccentricity and inclination grow toward values
that depend on the planet's luminosity-to-mass ratio, are comparable to the
disc's aspect ratio, and are reached over timescales of a few thousand years.
This growth is triggered by the appearance of a hot,
under-dense region in the vicinity of the planet. The growth rate of the
eccentricity is typically three times larger than that of the inclination. In
long term calculations, we find that the excitation of eccentricity and the
excitation of inclination are not independent. In the particular case in which
a planet has initially a very small eccentricity and inclination, the
eccentricity largely overruns the inclination. When the eccentricity reaches
its asymptotic value, the growth of inclination is quenched, yielding an
eccentric orbit with a very low inclination. As a side result, we find that the
eccentricity and inclination of non-luminous planets are damped more vigorously
in radiative discs than in isothermal discs.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,465 | Definitions in mathematics | We discuss various forms of definitions in mathematics and describe rules
governing them.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,466 | Excellence in prime characteristic | Fix any field $K$ of characteristic $p$ such that $[K:K^p]$ is finite. We
discuss excellence for Noetherian domains whose fraction field is $K$, showing,
for example, that $R$ is excellent if and only if the Frobenius map is finite
on $R$. Furthermore, we show $R$ is excellent if and only if it admits some
non-zero $p^{-e}$-linear map for $R$ or equivalently, that $R$ is a solid
$R$-algebra under Frobenius. In particular, this means that Frobenius split
Noetherian domains that are generically $F$-finite are always excellent. We
also show that non-excellent rings are abundant and easy to construct in prime
characteristic, even within the world of regular local rings of dimension one
in function fields. This paper is mostly expository in nature.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,467 | Asymptotically optimal private estimation under mean square loss | We consider the minimax estimation problem of a discrete distribution with
support size $k$ under locally differential privacy constraints. A
privatization scheme is applied to each raw sample independently, and we need
to estimate the distribution of the raw samples from the privatized samples. A
positive number $\epsilon$ measures the privacy level of a privatization
scheme.
In our previous work (arXiv:1702.00610), we proposed a family of new
privatization schemes and the corresponding estimator. We also proved that our
scheme and estimator are order optimal in the regime $e^{\epsilon} \ll k$ under
both $\ell_2^2$ and $\ell_1$ loss. In other words, for a large number of
samples the worst-case estimation loss of our scheme was shown to differ from
the optimal value by at most a constant factor. In this paper, we eliminate
this gap by showing asymptotic optimality of the proposed scheme and estimator
under the $\ell_2^2$ (mean square) loss. More precisely, we show that for any
$k$ and $\epsilon,$ the ratio between the worst-case estimation loss of our
scheme and the optimal value approaches $1$ as the number of samples tends to
infinity.
| 1 | 0 | 1 | 1 | 0 | 0 |
19,468 | Coulomb repulsion of holes and competition between d_{x^2-y^2}-wave and s-wave parings in cuprate superconductors | The effect of the Coulomb repulsion of holes on the Cooper instability in an
ensemble of spin-polaron quasiparticles has been analyzed, taking into account
the peculiarities of the crystallographic structure of the CuO$_2$ plane, which
are associated with the presence of two oxygen ions and one copper ion in the
unit cell, as well as the strong spin-fermion coupling. The investigation of
the possibility of implementing superconducting phases with d-wave and s-wave
symmetry of the order parameter has shown that, in the entire doping region,
only the d-wave pairing satisfies the self-consistency equations, while there
is no solution for the s-wave pairing. This result fully agrees with the
experimental data on cuprate HTSCs. It has been demonstrated analytically
that the intersite Coulomb interaction does not affect the superconducting
d-wave pairing, because its Fourier transform $V_q$ does not appear in the
kernel of the corresponding integral equation.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,469 | Modeling Game Avatar Synergy and Opposition through Embedding in Multiplayer Online Battle Arena Games | Multiplayer Online Battle Arena (MOBA) games have received increasing
worldwide popularity recently. In such games, players compete in teams against
each other by controlling selected game avatars, each of which is designed with
different strengths and weaknesses. Intuitively, putting together game avatars
that complement each other (synergy) and suppress those of opponents
(opposition) would result in a stronger team. In-depth understanding of synergy
and opposition relationships among game avatars benefits players in making
decisions during game avatar drafting and in better predicting match events.
However, due to the intricate design and complex interactions between game
avatars, thorough understanding of their relationships is not a trivial task.
In this paper, we propose a latent variable model, namely Game Avatar
Embedding (GAE), to learn avatars' numerical representations which encode
synergy and opposition relationships between pairs of avatars. The merits of
our model are twofold: (1) the captured synergy and opposition relationships
are sensible to experienced human players' perception; (2) the learned
numerical representations of game avatars allow many important downstream
tasks, such as similar avatar search, match outcome prediction, and avatar pick
recommender. To the best of our knowledge, no previous model is able to simultaneously
support both features. Our quantitative and qualitative evaluations on real
match data from three commercial MOBA games illustrate the benefits of our
model.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,470 | Design and Development of Effective Transmission Mechanisms on a Tendon Driven Hand Orthosis for Stroke Patients | Tendon-driven hand orthoses have advantages over exoskeletons with respect to
wearability and safety because of their low-profile design and ability to fit a
range of patients without requiring custom joint alignment. However, no
existing study on a wearable tendon-driven hand orthosis for stroke patients
presents evidence that such devices can overcome spasticity given repeated use
and fatigue, or discusses transmission efficiency. In this study, we propose
two designs that provide effective force transmission by increasing moment arms
around finger joints. We evaluate the designs with geometric models and
experiment using a 3D-printed artificial finger to find force and joint angle
characteristics of the suggested structures. We also perform clinical tests
with stroke patients to demonstrate the feasibility of the designs. The testing
supports the hypothesis that the proposed designs efficiently elicit extension
of the digits in patients with spasticity as compared to existing baselines.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,471 | eXpose: A Character-Level Convolutional Neural Network with Embeddings For Detecting Malicious URLs, File Paths and Registry Keys | For years security machine learning research has promised to obviate the need
for signature-based detection by automatically learning to detect indicators of
attack. Unfortunately, this vision hasn't come to fruition: in fact, developing
and maintaining today's security machine learning systems can require
engineering resources that are comparable to that of signature-based detection
systems, due in part to the need to develop and continuously tune the
"features" these machine learning systems look at as attacks evolve. Deep
learning, a subfield of machine learning, promises to change this by operating
on raw input signals and automating the process of feature design and
extraction. In this paper we propose the eXpose neural network, which uses a
deep learning approach we have developed to take generic, raw short character
strings as input (a common case for security inputs, which include artifacts
like potentially malicious URLs, file paths, named pipes, named mutexes, and
registry keys), and learns to simultaneously extract features and classify
using character-level embeddings and convolutional neural network. In addition
to completely automating the feature design and extraction process, eXpose
outperforms manual feature extraction based baselines on all of the intrusion
detection problems we tested it on, yielding a 5%-10% detection rate gain at
0.1% false positive rate compared to these baselines.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,472 | A Software Reuse Approach and Its Effect On Software Quality, An Empirical Study for The Software Industry | Software reusability has attracted much interest because it increases quality
and reduces cost. A good software reuse process enhances reliability,
productivity, and quality, and reduces time and cost. Current reuse techniques
focus on reusing software artifacts grounded in anticipated functionality,
whereas the non-functional (quality) aspects are also important. Software
reusability is therefore used here to improve the quality and productivity of
software with minimal effort and time. The main objective of this study was to
present a reuse approach showing how software reuse improves quality in the
software industry. The V&V (verification and validation) technique, which is
part of the software quality management process and checks quality and
correctness throughout the software life cycle, was used for this purpose. A
survey study was conducted via questionnaire to find the impact of the reuse
approach on two quality attributes: the requirement specification and the
design specification. Other quality enhancement techniques, such as ad hoc
reuse, CBSE, MBSE, product lines, and COTS reuse, were examined in the
existing software industry. Results were analyzed with the MATLAB tool, as it
provides effective data management, a wide range of options, and better output
organization, to check whether a quality enhancement technique is affected by
reusability and how quality improves.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,473 | Portfolio Optimization with Nondominated Priors and Unbounded Parameters | We consider the classical Merton problem of terminal wealth maximization over
a finite horizon. We assume that the drift of the stock follows an
Ornstein-Uhlenbeck process and that its volatility follows a GARCH(1) process.
In particular, both the mean and the volatility are unbounded. We assume that
there is Knightian uncertainty on the parameters of both the mean and the
volatility. We take the investor to have a logarithmic utility function, and
solve the corresponding utility maximization problem explicitly. To the best
of our knowledge, this is the first work on utility maximization with
unbounded mean and volatility under Knightian uncertainty with nondominated priors.
| 0 | 0 | 0 | 0 | 0 | 1 |
19,474 | Measuring heterogeneity in urban expansion via spatial entropy | The lack of efficiency in urban diffusion is a debated issue, important for
biologists, urban specialists, planners and statisticians, both in developed
and new developing countries. Many approaches have been considered to measure
urban sprawl, i.e. chaotic urban expansion; this idea of chaos is here linked
to the concept of entropy. Entropy, firstly introduced in information theory,
rapidly became a standard tool in ecology, biology and geography to measure the
degree of heterogeneity among observations; in these contexts, entropy measures
should include spatial information. The aim of this paper is to employ a
rigorous spatial entropy based approach to measure urban sprawl associated to
the diffusion of metropolitan cities. In order to assess the performance of the
considered measures, a comparative study is run over alternative urban
scenarios; afterwards, measures are used to quantify the degree of disorder in
the urban expansion of three cities in Europe. Results are easily interpretable
and can be used both as an absolute measure of urban sprawl and for comparison
over space and time.
| 0 | 0 | 0 | 1 | 0 | 0 |
19,475 | Bayesian Cluster Enumeration Criterion for Unsupervised Learning | We derive a new Bayesian Information Criterion (BIC) by formulating the
problem of estimating the number of clusters in an observed data set as
maximization of the posterior probability of the candidate models. Given that
some mild assumptions are satisfied, we provide a general BIC expression for a
broad class of data distributions. This serves as a starting point when
deriving the BIC for specific distributions. Along this line, we provide a
closed-form BIC expression for multivariate Gaussian distributed variables. We
show that incorporating the data structure of the clustering problem into the
derivation of the BIC results in an expression whose penalty term is different
from that of the original BIC. We propose a two-step cluster enumeration
algorithm. First, a model-based unsupervised learning algorithm partitions the
data according to a given set of candidate models. Subsequently, the number of
clusters is determined as the one associated with the model for which the
proposed BIC is maximal. The performance of the proposed two-step algorithm is
tested using synthetic and real data sets.
| 1 | 0 | 1 | 1 | 0 | 0 |
19,476 | Fuzzy Adaptive Tuning of a Particle Swarm Optimization Algorithm for Variable-Strength Combinatorial Test Suite Generation | Combinatorial interaction testing is an important software testing technique
that has seen lots of recent interest. It can reduce the number of test cases
needed by considering interactions between combinations of input parameters.
Empirical evidence shows that it effectively detects faults, in particular, for
highly configurable software systems. In real-world software testing, the input
variables may vary in how strongly they interact; variable-strength
combinatorial interaction testing (VS-CIT) can exploit this for higher
effectiveness. The generation of variable strength test suites is a
non-deterministic polynomial-time (NP) hard computational problem
\cite{BestounKamalFuzzy2017}. Research has shown that stochastic
population-based algorithms such as particle swarm optimization (PSO) can be
efficient compared to alternatives for VS-CIT problems. Nevertheless, they
require detailed control for the exploitation and exploration trade-off to
avoid premature convergence (i.e. being trapped in local optima) as well as to
enhance the solution diversity. Here, we present a new variant of PSO based on
a Mamdani fuzzy inference system
\cite{Camastra2015,TSAKIRIDIS2017257,KHOSRAVANIAN2016280}, to permit adaptive
selection of its global and local search operations. We detail the design of
this combined algorithm and evaluate it through experiments on multiple
synthetic and benchmark problems. We conclude that fuzzy adaptive selection of
global and local search operations is, at least, feasible as it performs only
second-best to a discrete variant of PSO, called DPSO. Concerning obtaining the
best mean test suite size, the fuzzy adaptation even outperforms DPSO
occasionally. We discuss the reasons behind this performance and outline
relevant areas of future work.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,477 | Is Life Most Likely Around Sun-like Stars? | We consider the habitability of Earth-analogs around stars of different
masses, which is regulated by the stellar lifetime, stellar wind-induced
atmospheric erosion, and biologically active ultraviolet (UV) irradiance. By
estimating the timescales associated with each of these processes, we show that
they collectively impose limits on the habitability of Earth-analogs. We
conclude that planets orbiting most M-dwarfs are not likely to host life, and
that the highest probability of complex biospheres is for planets around K- and
G-type stars. Our analysis suggests that the current existence of life near the
Sun is slightly unusual, but not significantly anomalous.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,478 | Maximum Entropy Generators for Energy-Based Models | Unsupervised learning is about capturing dependencies between variables and
is driven by the contrast between the probable vs. improbable configurations of
these variables, often either via a generative model that only samples probable
ones or with an energy function (unnormalized log-density) that is low for
probable ones and high for improbable ones. Here, we consider learning both an
energy function and an efficient approximate sampling mechanism. Whereas the
discriminator in generative adversarial networks (GANs) learns to separate data
and generator samples, introducing an entropy maximization regularizer on the
generator can turn the interpretation of the critic into an energy function,
which separates the training distribution from everything else, and thus can be
used for tasks like anomaly or novelty detection. Then, we show how Markov
Chain Monte Carlo can be done in the generator latent space whose samples can
be mapped to data space, producing better samples. These samples are used for
the negative phase gradient required to estimate the log-likelihood gradient of
the data space energy function. To maximize entropy at the output of the
generator, we take advantage of recently introduced neural estimators of mutual
information. We find that in addition to producing a useful scoring function
for anomaly detection, the resulting approach produces sharp samples while
covering the modes well, leading to high Inception and Frechet scores.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,479 | A Lifelong Learning Approach to Brain MR Segmentation Across Scanners and Protocols | Convolutional neural networks (CNNs) have shown promising results on several
segmentation tasks in magnetic resonance (MR) images. However, the accuracy of
CNNs may degrade severely when segmenting images acquired with different
scanners and/or protocols as compared to the training data, thus limiting their
practical utility. We address this shortcoming in a lifelong multi-domain
learning setting by treating images acquired with different scanners or
protocols as samples from different, but related domains. Our solution is a
single CNN with shared convolutional filters and domain-specific batch
normalization layers, which can be tuned to new domains with only a few
($\approx$ 4) labelled images. Importantly, this is achieved while retaining
performance on the older domains whose training data may no longer be
available. We evaluate the method for brain structure segmentation in MR
images. Results demonstrate that the proposed method largely closes the gap to
the benchmark, which is training a dedicated CNN for each scanner.
| 0 | 0 | 0 | 1 | 0 | 0 |
19,480 | Hybrid Kinematic Control for Rigid Body Pose Stabilization using Dual Quaternions | In this paper, we address the rigid body pose stabilization problem using
dual quaternion formalism. We propose a hybrid control strategy to design a
switching control law with hysteresis in such a way that the global asymptotic
stability of the closed-loop system is guaranteed and such that the global
attractivity of the stabilization pose does not exhibit chattering, a problem
that is present in all discontinuous-based feedback controllers. Using
numerical simulations, we illustrate the problems that arise from existing
results in the literature -- as unwinding and chattering -- and verify the
effectiveness of the proposed controller to solve the robust global pose
stability problem.
| 1 | 0 | 1 | 0 | 0 | 0 |
19,481 | On 2d-4d motivic wall-crossing formulas | In this paper we propose definitions and examples of categorical enhancements
of the data involved in the $2d$-$4d$ wall-crossing formulas which generalize
both Cecotti-Vafa and Kontsevich-Soibelman motivic wall-crossing formulas.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,482 | On Quitting: Performance and Practice in Online Game Play | We study the relationship between performance and practice by analyzing the
activity of many players of a casual online game. We find significant
heterogeneity in the improvement of player performance, given by score, and
address this by dividing players into similar skill levels and segmenting each
player's activity into sessions, i.e., sequence of game rounds without an
extended break. After disaggregating data, we find that performance improves
with practice across all skill levels. More interestingly, players are more
likely to end their session after an especially large improvement, leading to a
peak score in their very last game of a session. In addition, success is
strongly correlated with a lower quitting rate when the score drops, and only
weakly correlated with skill, in line with psychological findings about the
value of persistence and "grit": successful players are those who persist in
their practice despite lower scores. Finally, we train an epsilon-machine, a
type of hidden Markov model, and find a plausible mechanism of game play that
can predict player performance and quitting the game. Our work raises the
possibility of real-time assessment and behavior prediction that can be used to
optimize human performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,483 | Video Labeling for Automatic Video Surveillance in Security Domains | Beyond traditional security methods, unmanned aerial vehicles (UAVs) have
become an important surveillance tool used in security domains to collect the
required annotated data. However, collecting annotated data from videos taken
by UAVs efficiently, and using these data to build datasets that can be used
for learning payoffs or adversary behaviors in game-theoretic approaches and
security applications, is an under-explored research question. This paper
presents VIOLA, a novel labeling application that includes (i) a workload
distribution framework to efficiently gather human labels from videos in a
secured manner; (ii) a software interface with features designed for labeling
videos taken by UAVs in the domain of wildlife security. We also present the
evolution of VIOLA and analyze how the changes made in the development process
relate to the efficiency of labeling, including when seemingly obvious
improvements did not lead to increased efficiency. VIOLA enables collecting
massive amounts of data with detailed information from challenging security
videos such as those collected aboard UAVs for wildlife security. VIOLA will
lead to the development of new approaches that integrate deep learning for
real-time detection and response.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,484 | On density of subgraphs of Cartesian products | In this paper, we extend two classical results about the density of subgraphs
of hypercubes to subgraphs $G$ of Cartesian products $G_1\times\cdots\times
G_m$ of arbitrary connected graphs. Namely, we show that
$\frac{|E(G)|}{|V(G)|}\le \lceil 2\max\{
\text{dens}(G_1),\ldots,\text{dens}(G_m)\} \rceil\log|V(G)|$, where
$\text{dens}(H)$ is the maximum ratio $\frac{|E(H')|}{|V(H')|}$ over all
subgraphs $H'$ of $H$. We introduce the notions of VC-dimension
$\text{VC-dim}(G)$ and VC-density $\text{VC-dens}(G)$ of a subgraph $G$ of a
Cartesian product $G_1\times\cdots\times G_m$, generalizing the classical
Vapnik-Chervonenkis dimension of set-families (viewed as subgraphs of
hypercubes). We prove that if $G_1,\ldots,G_m$ belong to the class ${\mathcal
G}(H)$ of all finite connected graphs not containing a given graph $H$ as a
minor, then for any subgraph $G$ of $G_1\times\cdots\times G_m$ a sharper
inequality $\frac{|E(G)|}{|V(G)|}\le \text{VC-dim}(G)\alpha(H)$ holds, where
$\alpha(H)$ is the density of the graphs from ${\mathcal G}(H)$. We refine and
sharpen those two results to several specific graph classes. We also derive
upper bounds (some of them polylogarithmic) for the size of adjacency labeling
schemes of subgraphs of Cartesian products.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,485 | Replica Analysis for Maximization of Net Present Value | In this paper, we use replica analysis to determine the investment strategy
that can maximize the net present value for portfolios containing multiple
development projects. Replica analysis was developed in statistical mechanical
informatics and econophysics to evaluate disordered systems, and here we use it
to formulate the maximization of the net present value as an optimization
problem under budget and investment concentration constraints. Furthermore, we
confirm, by comparing our results with the maximal expected net present value
as derived in operations research, that this common operations-research
approach underestimates the true maximal net present value. Moreover, it is
shown that the conventional method for
estimating the net present value does not consider variance in the cash flow.
| 0 | 0 | 0 | 0 | 0 | 1 |
19,486 | On some properties of weak solutions to elliptic equations with divergence-free drifts | We discuss the local properties of weak solutions to the equation $-\Delta u
+ b\cdot\nabla u=0$. The corresponding theory is well-known in the case $b\in
L_n$, where $n$ is the dimension of the space. Our main interest is focused on
the case $b\in L_2$. In this case the structure assumption $\operatorname{div}
b=0$ turns out to be crucial.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,487 | The heptagon-wheel cocycle in the Kontsevich graph complex | The real vector space of non-oriented graphs is known to carry a differential
graded Lie algebra structure. Cocycles in the Kontsevich graph complex,
expressed using formal sums of graphs on $n$ vertices and $2n-2$ edges, induce
-- under the orientation mapping -- infinitesimal symmetries of classical
Poisson structures on arbitrary finite-dimensional affine real manifolds.
Willwacher has stated the existence of a nontrivial cocycle that contains the
$(2\ell+1)$-wheel graph with a nonzero coefficient for every
$\ell\in\mathbb{N}$. We present detailed calculations of the differential of
graphs; for the tetrahedron and pentagon-wheel cocycles, consisting at $\ell =
1$ and $\ell = 2$ of one and two graphs respectively, the cocycle condition
$d(\gamma) = 0$ is verified by hand. For the next, heptagon-wheel cocycle
(known to exist at $\ell = 3$), we provide an explicit representative: it
consists of 46 graphs on 8 vertices and 14 edges.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,488 | Maneuver Regulation for Accelerating Bodies in Atmospheric Environments | In order to address the need for an affordable reduced gravity test platform,
this work focuses on the analysis and implementation of atmospheric
acceleration tracking with an autonomous aerial vehicle. As proof of concept,
the vehicle is designed with the objective of flying accurate reduced-gravity
parabolas. Suggestions from both academia and industry were taken into account,
as well as requirements imposed by a regulatory agency. The novelty of this
work is the Proportional Integral Ramp Quadratic (PIRQ) controller, which is
employed to counteract the aerodynamic forces impeding the vehicle's constant
acceleration during the maneuver. The stability of the free-fall maneuver under
this controller is studied in detail via the formation of the transverse
dynamics and the application of the circle criterion. The implementation of
such a controller is then outlined, and the PIRQ controller is validated
through a flight test, where the vehicle successfully tracks Martian gravity
(0.378 G's) with a standard deviation of 0.0426.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,489 | A Hybrid Deep Learning Approach for Texture Analysis | Texture classification is a problem that has various applications such as
remote sensing and forest species recognition. Solutions tend to be custom fit
to the dataset used but fail to generalize. A Convolutional Neural Network
(CNN) in combination with a Support Vector Machine (SVM) forms a robust pairing
of a powerful invariant feature extractor and an accurate classifier. The
fusion of experts provides stability in classification rates among different
datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,490 | Accounting Noise and the Pricing of CoCos | Contingent Convertible bonds (CoCos) are debt instruments that convert into
equity or are written down in times of distress. Existing pricing models assume
conversion triggers based on market prices and assume that markets can always
observe all relevant firm information. However, all CoCos issued so far have
triggers based on accounting ratios and/or regulatory intervention. We
incorporate that markets receive information through noisy accounting reports
issued at discrete time instants, which allows us to distinguish between market
and accounting values, and between automatic triggers and regulator-mandated
conversions. Our second contribution is to incorporate that coupon payments are
contingent too: their payment is conditional on the Maximum Distributable
Amount not being exceeded. We examine the impact of CoCo design parameters,
asset volatility and accounting noise on the price of a CoCo; and investigate
the interaction between CoCo design features, the capital structure of the
issuing bank and their implications for risk taking and investment incentives.
Finally, we use our model to explain the crash in CoCo prices after Deutsche
Bank's profit warning in February 2016.
| 0 | 0 | 0 | 0 | 0 | 1 |
19,491 | An Improved Algorithm for E-Generalization | E-generalization computes common generalizations of given ground terms w.r.t.
a given equational background theory E. In 2005 [arXiv:1403.8118], we had
presented a computation approach based on standard regular tree grammar
algorithms, and a Prolog prototype implementation. In this report, we present
algorithmic improvements, prove them correct and complete, and give some
details of an efficiency-oriented implementation in C that allows us to handle
problems larger by several orders of magnitude.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,492 | On quartic double fivefolds and the matrix factorizations of exceptional quaternionic representations | We study quartic double fivefolds from the perspective of Fano manifolds of
Calabi-Yau type and that of exceptional quaternionic representations. We first
prove that the generic quartic double fivefold can be represented, in a finite
number of ways, as a double cover of P^5 ramified along a linear section of the
Sp_12-invariant quartic in P^31. Then, using the geometry of Vinberg's type
II decomposition of some exceptional quaternionic representations, and backed
by some cohomological computations performed by Macaulay2, we prove the
existence of a spherical rank 6 vector bundle on such a generic quartic double
fivefold. We finally use the existence of this vector bundle to prove that the
homological unit of the CY-3 category associated by Kuznetsov to the derived
category of a generic quartic double fivefold is C $\oplus$ C[3].
| 0 | 0 | 1 | 0 | 0 | 0 |
19,493 | Atmospheric thermal tides and planetary spin I. The complex interplay between stratification and rotation | Thermal atmospheric tides can torque telluric planets away from spin-orbit
synchronous rotation, as observed in the case of Venus. They thus help to
determine the possible climates and general circulations of the atmospheres
of these planets. In this work, we write the equations governing the dynamics
of thermal tides in a local vertically-stratified section of a rotating
planetary atmosphere by taking into account the effects of the complete
Coriolis acceleration on tidal waves. This allows us to derive analytically the
tidal torque and the tidally dissipated energy, which we use to discuss the
possible regimes of tidal dissipation and examine the key role played by
stratification.
In agreement with early studies, we find that the frequency dependence of the
thermal atmospheric tidal torque in the vicinity of synchronization can be
approximated by a Maxwell model. This behaviour corresponds to weakly stably
stratified or convective fluid layers, as observed in ADLM2016a. A strong
stable stratification allows gravity waves to propagate, which makes the tidal
torque become negligible. The transition is continuous between these two
regimes. The traditional approximation appears to be valid in thin atmospheres
and in regimes where the rotation frequency is dominated by the forcing or the
buoyancy frequencies.
Depending on the stability of their atmospheres with respect to convection,
observed exoplanets can be tidally driven toward synchronous or asynchronous
final rotation rates. The domain of applicability of the traditional
approximation is rigorously constrained by calculations.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,494 | Inferring Generative Model Structure with Static Analysis | Obtaining enough labeled data to robustly train complex discriminative models
is a major bottleneck in the machine learning pipeline. A popular solution is
combining multiple sources of weak supervision using generative models. The
structure of these models affects training label quality, but is difficult to
learn without any ground truth labels. We instead rely on these weak
supervision sources having some structure by virtue of being encoded
programmatically. We present Coral, a paradigm that infers generative model
structure by statically analyzing the code for these heuristics, thus reducing
the data required to learn structure significantly. We prove that Coral's
sample complexity scales quasilinearly with the number of heuristics and number
of relations found, improving over the standard sample complexity, which is
exponential in $n$ for identifying $n^{\textrm{th}}$ degree relations.
Experimentally, Coral matches or outperforms traditional structure learning
approaches by up to 3.81 F1 points. Using Coral to model dependencies instead
of assuming independence results in better performance than a fully supervised
model by 3.07 accuracy points when heuristics are used to label radiology data
without ground truth labels.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,495 | Maslov, Chern-Weil and Mean Curvature | We provide an integral formula for the Maslov index of a pair $(E,F)$ over a
surface $\Sigma$, where $E\rightarrow\Sigma$ is a complex vector bundle and
$F\subset E_{|\partial\Sigma}$ is a totally real subbundle. As in Chern-Weil
theory, this formula is written in terms of the curvature of $E$ plus a
boundary contribution.
When $(E,F)$ is obtained via an immersion of $(\Sigma,\partial\Sigma)$ into a
pair $(M,L)$ where $M$ is Kähler and $L$ is totally real, the formula allows
us to control the Maslov index in terms of the geometry of $(M,L)$. We exhibit
natural conditions on $(M,L)$ which lead to bounds and monotonicity results.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,496 | Tuning Goodness-of-Fit Tests | As modern precision cosmological measurements continue to show agreement with
the broad features of the standard $\Lambda$-Cold Dark Matter ($\Lambda$CDM)
cosmological model, we are increasingly motivated to look for small departures
from the standard model's predictions which might not be detected with standard
approaches. While searches for extensions and modifications of $\Lambda$CDM
have to date turned up no convincing evidence of beyond-the-standard-model
cosmology, the list of models compared against $\Lambda$CDM is by no means
complete and is often governed by readily-coded modifications to standard
Boltzmann codes. Also, standard goodness-of-fit methods such as a naive
$\chi^2$ test fail to put strong pressure on the null $\Lambda$CDM hypothesis,
since modern datasets have orders of magnitude more degrees of freedom than
$\Lambda$CDM. Here we present a method of tuning goodness-of-fit tests to
detect potential sub-dominant extra-$\Lambda$CDM signals present in the data
through compressing observations in a way that maximizes extra-$\Lambda$CDM
signal variation over noise and $\Lambda$CDM variation. This method, based on a
Karhunen-Loève transformation of the data, is tuned to be maximally
sensitive to particular types of variations characteristic of the tuning model;
but, unlike direct model comparison, the test is also sensitive to features
that only partially mimic the tuning model. As an example of its use, we apply
this method in the context of a nonstandard primordial power spectrum compared
against the $2015$ $Planck$ CMB temperature and polarization power spectrum. We
find weak evidence of extra-$\Lambda$CDM physics, conceivably due to known
systematics in the 2015 Planck polarization release.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,497 | The Lubin-Tate stack and Gross-Hopkins duality | Morava $E$-theory $E$ is an $E_\infty$-ring with an action of the Morava
stabilizer group $\Gamma$. We study the derived stack $\operatorname{Spf}
E/\Gamma$. Descent-theoretic techniques allow us to deduce a theorem of
Hopkins-Mahowald-Sadofsky on the $K(n)$-local Picard group, as well as a recent
result of Barthel-Beaudry-Stojanoska on the Anderson duals of higher real
$K$-theories.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,498 | Generalized Probabilistic Bisection for Stochastic Root-Finding | We consider numerical schemes for root finding of noisy responses through
generalizing the Probabilistic Bisection Algorithm (PBA) to the more practical
context where the sampling distribution is unknown and location-dependent. As
in standard PBA, we rely on a knowledge state for the approximate posterior of
the root location. To implement the corresponding Bayesian updating, we also
carry out inference of oracle accuracy, namely learning the probability of
correct response. To this end we utilize batched querying in combination with a
variety of frequentist and Bayesian estimators based on majority vote, as well
as the underlying functional responses, if available. For guiding sampling
selection we investigate both Information Directed sampling and Quantile
sampling. Our numerical experiments show that these strategies perform
quite differently; in particular we demonstrate the efficiency of randomized
quantile sampling which is reminiscent of Thompson sampling. Our work is
motivated by the root-finding sub-routine in pricing of Bermudan financial
derivatives, illustrated in the last section of the paper.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,499 | Towards Proof Synthesis Guided by Neural Machine Translation for Intuitionistic Propositional Logic | Inspired by the recent evolution of deep neural networks (DNNs) in machine
learning, we explore their application to PL-related topics. This paper is the
first step towards this goal; we propose a proof-synthesis method for the
negation-free propositional logic in which we use a DNN to obtain a guide of
proof search. The idea is to view the proof-synthesis problem as a translation
from a proposition to its proof. We train seq2seq, which is a popular network
in neural machine translation, so that it generates a proof encoded as a
$\lambda$-term of a given proposition. We implement the whole framework and
empirically observe that a generated proof term is close to a correct proof in
terms of the tree edit distance of the AST. This observation justifies using the
output from a trained seq2seq model as a guide for proof search.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,500 | Fibonacci words in hyperbolic Pascal triangles | The hyperbolic Pascal triangle ${\cal HPT}_{4,q}$ $(q\ge5)$ is a new
mathematical construction, which is a geometrical generalization of Pascal's
arithmetical triangle. In the present study we show that a natural pattern of
rows of ${\cal HPT}_{4,5}$ is almost the same as the sequence consisting of
every second term of the well-known Fibonacci words. Further, we give a
generalization of the Fibonacci words using the hyperbolic Pascal triangles.
The geometrical properties of ${\cal HPT}_{4,q}$ induce a graph structure
on the finite Fibonacci words.
| 1 | 0 | 0 | 0 | 0 | 0 |