Columns: title (string, 7–239 chars), abstract (string, 7–2.76k chars), and six binary int64 subject labels.

| title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
|---|---|---|---|---|---|---|---|
Parametric Inference for Discretely Observed Subordinate Diffusions | Subordinate diffusions are constructed by time-changing diffusion processes
with an independent Lévy subordinator. This is a rich family of Markovian
jump processes which exhibit a variety of jump behavior and have found many
applications. This paper studies parametric inference of discretely observed
ergodic subordinate diffusions. We solve the identifiability problem for these
processes using spectral theory and propose a two-step estimation procedure
based on estimating functions. In the first step, we use an estimating function
that only involves diffusion parameters. In the second step, a martingale
estimating function based on eigenvalues and eigenfunctions of the subordinate
diffusion is used to estimate the parameters of the Lévy subordinator and
the problem of how to choose the weighting matrix is solved. When the
eigenpairs do not have analytical expressions, we apply the constant
perturbation method with high order corrections to calculate them numerically
and the martingale estimating function can be computed efficiently. Consistency
and asymptotic normality of our estimator are established, accounting for the
effect of numerical approximation. Through numerical examples, we show that our
method is both computationally and statistically efficient. A subordinate
diffusion model for the VIX (CBOE volatility index) is developed that provides
a good fit to the data.
| 0 | 0 | 1 | 1 | 0 | 0 |
Adversarially Regularized Graph Autoencoder for Graph Embedding | Graph embedding is an effective method to represent graph data in a low
dimensional space for graph analytics. Most existing embedding algorithms
typically focus on preserving the topological structure or minimizing the
reconstruction errors of graph data, but they have mostly ignored the data
distribution of the latent codes from the graphs, which often results in
inferior embedding in real-world graph data. In this paper, we propose a novel
adversarial graph embedding framework for graph data. The framework encodes the
topological structure and node content in a graph to a compact representation,
on which a decoder is trained to reconstruct the graph structure. Furthermore,
the latent representation is enforced to match a prior distribution via an
adversarial training scheme. To learn a robust embedding, two variants of
adversarial approaches, adversarially regularized graph autoencoder (ARGA) and
adversarially regularized variational graph autoencoder (ARVGA), are developed.
Experimental studies on real-world graphs validate our design and demonstrate
that our algorithms outperform baselines by a wide margin in link prediction,
graph clustering, and graph visualization tasks.
| 0 | 0 | 0 | 1 | 0 | 0 |
Cheap Orthogonal Constraints in Neural Networks: A Simple Parametrization of the Orthogonal and Unitary Group | We introduce a novel approach to perform first-order optimization with
orthogonal and unitary constraints. This approach is based on a parametrization
stemming from Lie group theory through the exponential map. The parametrization
transforms the constrained optimization problem into an unconstrained one over
a Euclidean space, for which common first-order optimization methods can be
used. The theoretical results presented are general enough to cover the special
orthogonal group, the unitary group and, in general, any connected compact Lie
group. We discuss how this and other parametrizations can be computed
efficiently through an implementation trick, making numerically complex
parametrizations usable at a negligible runtime cost in neural networks. In
particular, we apply our results to RNNs with orthogonal recurrent weights,
yielding a new architecture called expRNN. We demonstrate how our method
constitutes a more robust approach to optimization with orthogonal constraints,
showing faster, accurate, and more stable convergence in several tasks designed
to test RNNs.
| 1 | 0 | 0 | 1 | 0 | 0 |
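The exponential-map parametrization summarized in this abstract can be illustrated in a few lines. The sketch below is a minimal illustration, not the authors' expRNN implementation: an unconstrained Euclidean parameter matrix is projected onto the skew-symmetric matrices (the Lie algebra of the orthogonal group) and mapped through the matrix exponential, landing in the special orthogonal group.

```python
import numpy as np
from scipy.linalg import expm


def orthogonal_from_unconstrained(a):
    """Map a free square matrix to an orthogonal one: so(n) -> SO(n)."""
    skew = (a - a.T) / 2.0   # projection onto skew-symmetric matrices
    return expm(skew)        # matrix exponential of a skew matrix is orthogonal


rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4))          # unconstrained parameters, optimizable freely
w = orthogonal_from_unconstrained(a)  # constrained orthogonal weight matrix
```

Any first-order update to `a` keeps `w` exactly orthogonal, which is the point of the parametrization.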
Training Big Random Forests with Little Resources | Without access to large compute clusters, building random forests on large
datasets is still a challenging problem. This is, in particular, the case if
fully-grown trees are desired. We propose a simple yet effective framework that
allows one to efficiently construct ensembles of huge trees for hundreds of
millions or even billions of training instances using a cheap desktop computer
with commodity hardware. The basic idea is to consider a multi-level
construction scheme, which builds top trees for small random subsets of the
available data and which subsequently distributes all training instances to the
top trees' leaves for further processing. While being conceptually simple, the
overall efficiency crucially depends on the particular implementation of the
different phases. The practical merits of our approach are demonstrated using
dense datasets with hundreds of millions of training instances.
| 0 | 0 | 0 | 1 | 0 | 0 |
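The two-phase construction scheme described in this abstract can be sketched with scikit-learn. This is a toy illustration under assumed sizes and a synthetic dataset, not the paper's implementation: a shallow top tree is fit on a small random subset, then every instance is routed to a top-tree leaf and a fully-grown tree is built per leaf bucket.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Phase 1: fit a shallow "top tree" on a small random subset of the data.
subset = rng.choice(len(X), size=1_000, replace=False)
top = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[subset], y[subset])

# Phase 2: distribute *all* instances to the top tree's leaves; each leaf
# bucket is small enough to grow a fully-grown tree independently.
leaf_ids = top.apply(X)
buckets = {leaf: np.where(leaf_ids == leaf)[0] for leaf in np.unique(leaf_ids)}
leaf_trees = {leaf: DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
              for leaf, idx in buckets.items()}


def predict_one(x):
    """Route a point through the top tree, then its leaf's fully-grown tree."""
    leaf = top.apply(x.reshape(1, -1))[0]
    return leaf_trees[leaf].predict(x.reshape(1, -1))[0]
```

Because each leaf bucket is processed independently, phase 2 parallelizes or streams from disk naturally, which is what makes the scheme workable on commodity hardware.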
Epitaxy of Advanced Nanowire Quantum Devices | Semiconductor nanowires provide an ideal platform for various low-dimensional
quantum devices. In particular, topological phases of matter hosting
non-Abelian quasi-particles can emerge when a semiconductor nanowire with
strong spin-orbit coupling is brought in contact with a superconductor. To
fully exploit the potential of non-Abelian anyons for topological quantum
computing, they need to be exchanged in a well-controlled braiding operation.
Essential hardware for braiding is a network of single-crystalline nanowires
coupled to superconducting islands. Here, we demonstrate a technique for
generic bottom-up synthesis of complex quantum devices with a special focus on
nanowire networks having a predefined number of superconducting islands.
Structural analysis confirms the high crystalline quality of the nanowire
junctions, as well as an epitaxial superconductor-semiconductor interface.
Quantum transport measurements of nanowire "hashtags" reveal Aharonov-Bohm and
weak-antilocalization effects, indicating a phase coherent system with strong
spin-orbit coupling. In addition, a proximity-induced hard superconducting gap
is demonstrated in these hybrid superconductor-semiconductor nanowires,
highlighting the successful materials development necessary for a first
braiding experiment. Our approach opens new avenues for the realization of
epitaxial 3-dimensional quantum device architectures.
| 0 | 1 | 0 | 0 | 0 | 0 |
Toward universality in degree 2 of the Kricker lift of the Kontsevich integral and the Lescop equivariant invariant | In the setting of finite type invariants for null-homologous knots in
rational homology 3-spheres with respect to null Lagrangian-preserving
surgeries, there are two candidates to be universal invariants, defined
respectively by Kricker and Lescop. In a previous paper, the second author
defined maps between spaces of Jacobi diagrams. Injectivity for these maps
would imply that Kricker and Lescop invariants are indeed universal invariants;
this would prove in particular that these two invariants are equivalent. In the
present paper, we investigate the injectivity status of these maps for degree 2
invariants, in the case of knots whose Blanchfield modules are direct sums of
isomorphic Blanchfield modules of $\mathbb{Q}$-dimension two. We prove that they are
always injective except in one case, for which we determine explicitly the
kernel.
| 0 | 0 | 1 | 0 | 0 | 0 |
Automatic segmentation of MR brain images with a convolutional neural network | Automatic segmentation in MR brain images is important for quantitative
analysis in large-scale studies with images acquired at all ages.
This paper presents a method for the automatic segmentation of MR brain
images into a number of tissue classes using a convolutional neural network. To
ensure that the method obtains accurate segmentation details as well as spatial
consistency, the network uses multiple patch sizes and multiple convolution
kernel sizes to acquire multi-scale information about each voxel. The method is
not dependent on explicit features, but learns to recognise the information
that is important for the classification based on training data. The method
requires a single anatomical MR image only.
The segmentation method is applied to five different data sets: coronal
T2-weighted images of preterm infants acquired at 30 weeks postmenstrual age
(PMA) and 40 weeks PMA, axial T2-weighted images of preterm infants acquired
at 40 weeks PMA, axial T1-weighted images of ageing adults acquired at an
average age of 70 years, and T1-weighted images of young adults acquired at an
average age of 23 years. The method obtained the following average Dice
coefficients over all segmented tissue classes for each data set, respectively:
0.87, 0.82, 0.84, 0.86 and 0.91.
The results demonstrate that the method obtains accurate segmentations in all
five sets, and hence demonstrates its robustness to differences in age and
acquisition protocol.
| 1 | 0 | 0 | 0 | 0 | 0 |
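The Dice coefficients reported above compare a predicted label map against a reference segmentation per tissue class. A minimal sketch of the metric follows; the arrays are made up for illustration.

```python
import numpy as np


def dice(seg, gt, label):
    """Dice coefficient for one tissue class: 2|A ∩ B| / (|A| + |B|)."""
    a, b = (seg == label), (gt == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0


seg = np.array([[1, 1, 0], [2, 2, 0]])  # predicted tissue labels
gt = np.array([[1, 0, 0], [2, 2, 2]])   # reference segmentation
print(round(dice(seg, gt, 1), 3))  # 0.667
```

Averaging `dice` over all tissue classes of a data set yields the per-set figures (0.87, 0.82, ...) quoted in the abstract.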
Integrable Structure of Multispecies Zero Range Process | We present a brief review of the integrability of the multispecies zero range
process in one dimension, introduced recently. The topics range over stochastic $R$
matrices of quantum affine algebra $U_q (A^{(1)}_n)$, matrix product
construction of stationary states for periodic systems, $q$-boson
representation of Zamolodchikov-Faddeev algebra, etc. We also introduce new
commuting Markov transfer matrices having a mixed boundary condition and prove
the factorization of a family of $R$ matrices associated with the tetrahedron
equation and generalized quantum groups at a special point of the spectral
parameter.
| 0 | 1 | 1 | 0 | 0 | 0 |
A shared latent space matrix factorisation method for recommending new trial evidence for systematic review updates | Clinical trial registries can be used to monitor the production of trial
evidence and signal when systematic reviews become out of date. However, this
use has been limited to date due to the extensive manual review required to
search for and screen relevant trial registrations. Our aim was to evaluate a
new method that could partially automate the identification of trial
registrations that may be relevant for systematic review updates. We identified
179 systematic reviews of drug interventions for type 2 diabetes, which
included 537 clinical trials that had registrations in ClinicalTrials.gov. We
tested a matrix factorisation approach that uses a shared latent space to learn
how to rank relevant trial registrations for each systematic review, comparing
the performance to document similarity to rank relevant trial registrations.
The two approaches were tested on a holdout set of the newest trials from the
set of type 2 diabetes systematic reviews and an unseen set of 141 clinical
trial registrations from 17 updated systematic reviews published in the
Cochrane Database of Systematic Reviews. The matrix factorisation approach
outperformed the document similarity approach with a median rank of 59 and
recall@100 of 60.9%, compared to a median rank of 138 and recall@100 of 42.8%
in the document similarity baseline. In the second set of systematic reviews
and their updates, the highest performing approach used document similarity and
gave a median rank of 67 (recall@100 of 62.9%). The proposed method was useful
for ranking trial registrations to reduce the manual workload associated with
finding relevant trials for systematic review updates. The results suggest that
the approach could be used as part of a semi-automated pipeline for monitoring
potentially new evidence for inclusion in a review update.
| 1 | 0 | 0 | 0 | 0 | 0 |
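The median rank and recall@100 figures quoted above can be computed from ranked candidate lists with a few lines; the sketch below is an illustrative implementation of these evaluation metrics, with toy identifiers in place of real trial registrations.

```python
import numpy as np


def median_rank_and_recall(ranked_lists, relevant_sets, k=100):
    """ranked_lists[i]: candidate ids ordered by predicted relevance for
    review i; relevant_sets[i]: ids of the truly relevant registrations."""
    ranks, hits, total = [], 0, 0
    for ranked, relevant in zip(ranked_lists, relevant_sets):
        pos = {doc: r for r, doc in enumerate(ranked, start=1)}
        for doc in relevant:
            ranks.append(pos[doc])
            hits += pos[doc] <= k
            total += 1
    return np.median(ranks), hits / total


ranked = [["t3", "t1", "t2", "t4"]]   # one review's ranked candidates
relevant = [{"t1", "t4"}]             # its truly relevant registrations
med, rec = median_rank_and_recall(ranked, relevant, k=2)
print(med, rec)  # 3.0 0.5
```

A lower median rank and a higher recall@k both mean less manual screening before the relevant registrations are found.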
Spectra of Magnetic Operators on the Diamond Lattice Fractal | We adapt the well-known spectral decimation technique for computing spectra
of Laplacians on certain symmetric self-similar sets to the case of magnetic
Schrödinger operators and work through this method completely for the diamond
lattice fractal. This connects results of physicists from the 1980's, who used
similar techniques to compute spectra of sequences of magnetic operators on
graph approximations to fractals but did not verify existence of a limiting
fractal operator, to recent work describing magnetic operators on fractals via
functional analytic techniques.
| 0 | 0 | 1 | 0 | 0 | 0 |
Arrow calculus for welded and classical links | We develop a calculus for diagrams of knotted objects. We define Arrow
presentations, which encode the crossing information of a diagram into arrows
in a way somewhat similar to Gauss diagrams, and more generally w-tree
presentations, which can be seen as `higher order Gauss diagrams'. This Arrow
calculus is used to develop an analogue of Habiro's clasper theory for welded
knotted objects, which contain classical link diagrams as a subset. This
provides a 'realization' of Polyak's algebra of arrow diagrams at the welded
level, and leads to a characterization of finite type invariants of welded
knots and long knots. As a corollary, we recover several topological results
due to K. Habiro and A. Shima and to T. Watanabe on knotted surfaces in
4-space. We also classify welded string links up to homotopy, thus recovering a
result of the first author with B. Audoux, P. Bellingeri and E. Wagner.
| 0 | 0 | 1 | 0 | 0 | 0 |
Determinacy of Schmidt's Game and Other Intersection Games | Schmidt's game and other similar intersection games have played an important
role in recent years in applications to number theory, dynamics, and
Diophantine approximation theory. These games are real games, that is, games in
which the players make moves from a complete separable metric space. The
determinacy of these games trivially follows from the axiom of determinacy for
real games, $\mathsf{AD}_\mathbb{R}$, which is a much stronger axiom than that
asserting all integer games are determined, $\mathsf{AD}$. One of our main
results is a general theorem which under the hypothesis $\mathsf{AD}$ implies
the determinacy of intersection games which have a property allowing strategies
to be simplified. In particular, we show that Schmidt's $(\alpha,\beta,\rho)$
game on $\mathbb{R}$ is determined from $\mathsf{AD}$ alone, but on
$\mathbb{R}^n$ for $n \geq 3$ we show that $\mathsf{AD}$ does not imply the
determinacy of this game. We also prove several other results specifically
related to the determinacy of Schmidt's game. These results highlight the
obstacles in obtaining the determinacy of Schmidt's game from $\mathsf{AD}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Scalable Kernel K-Means Clustering with Nystrom Approximation: Relative-Error Bounds | Kernel $k$-means clustering can correctly identify and extract a far more
varied collection of cluster structures than the linear $k$-means clustering
algorithm. However, kernel $k$-means clustering is computationally expensive
when the non-linear feature map is high-dimensional and there are many input
points. Kernel approximation, e.g., the Nyström method, has been applied in
previous works to approximately solve kernel learning problems when both of the
above conditions are present. This work analyzes the application of this
paradigm to kernel $k$-means clustering, and shows that applying the linear
$k$-means clustering algorithm to $\frac{k}{\epsilon} (1 + o(1))$ features
constructed using a so-called rank-restricted Nyström approximation results
in cluster assignments that satisfy a $1 + \epsilon$ approximation ratio in
terms of the kernel $k$-means cost function, relative to the guarantee provided
by the same algorithm without the use of the Nyström method. As part of the
analysis, this work establishes a novel $1 + \epsilon$ relative-error trace
norm guarantee for low-rank approximation using the rank-restricted Nyström
approximation. Empirical evaluations on the $8.1$ million instance MNIST8M
dataset demonstrate the scalability and usefulness of kernel $k$-means
clustering with Nyström approximation. This work argues that spectral
clustering using Nyström approximation---a popular and computationally
efficient, but theoretically unsound approach to non-linear clustering---should
be replaced with the efficient and theoretically sound combination of kernel
$k$-means clustering with Nyström approximation. The superior performance of
the latter approach is empirically verified.
| 1 | 0 | 0 | 1 | 0 | 0 |
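The pipeline analyzed in this abstract (Nyström features followed by linear k-means) can be sketched with scikit-learn. Note this uses the standard Nyström transformer rather than the paper's rank-restricted variant, and the kernel, `gamma`, and landmark count are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.kernel_approximation import Nystroem

rng = np.random.default_rng(0)
# Two concentric rings: a cluster structure linear k-means cannot separate
# in the input space.
theta = rng.uniform(0, 2 * np.pi, size=400)
radius = np.repeat([1.0, 4.0], 200)
X = np.c_[radius * np.cos(theta), radius * np.sin(theta)]
X += rng.normal(scale=0.05, size=X.shape)

# Approximate the RBF feature map with 50 landmark points, then run
# ordinary linear k-means on the low-dimensional features.
nys = Nystroem(kernel="rbf", gamma=0.5, n_components=50, random_state=0)
features = nys.fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
```

The cost of the k-means step depends only on the number of Nyström components, not on the dimension of the exact feature map, which is what makes the approach scale to data sets like MNIST8M.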
Discovering Playing Patterns: Time Series Clustering of Free-To-Play Game Data | The classification of time series data is a challenge common to all
data-driven fields. However, there is no agreement about which techniques are
most efficient for grouping unlabeled time-ordered data. This is because a
successful classification of time series patterns depends on the goal and the
domain of interest, i.e. it is application-dependent.
In this article, we study free-to-play game data. In this domain, clustering
similar time series information is increasingly important due to the large
amount of data collected by current mobile and web applications. We evaluate
which methods accurately cluster time series of mobile games, focusing on
player behavior data. We identify and validate several aspects of the
clustering: the similarity measures and the representation techniques to reduce
the high dimensionality of time series. As a robustness test, we compare
various temporal datasets of player activity from two free-to-play video-games.
With these techniques we extract temporal patterns of player behavior
relevant for the evaluation of game events and game-business diagnosis. Our
experiments provide intuitive visualizations to validate the results of the
clustering and to determine the optimal number of clusters. Additionally, we
assess the common characteristics of the players belonging to the same group.
This study allows us to improve the understanding of player dynamics and churn
behavior.
| 1 | 0 | 0 | 1 | 0 | 0 |
On the bound states of magnetic Laplacians on wedges | This paper is mainly inspired by the conjecture about the existence of bound
states for magnetic Neumann Laplacians on planar wedges of any aperture
$\phi\in (0,\pi)$. So far, a proof was only obtained for apertures
$\phi\lesssim 0.511\pi$. The conviction in the validity of this conjecture for
apertures $\phi\gtrsim 0.511\pi$ mainly relied on numerical computations. In
this paper we succeed to prove the existence of bound states for any aperture
$\phi \lesssim 0.583\pi$ using a variational argument with suitably chosen test
functions. Employing some more involved test functions and combining a
variational argument with computer-assistance, we extend this interval up to
any aperture $\phi \lesssim 0.595\pi$. Moreover, we analyse the same question
for closely related problems concerning magnetic Robin Laplacians on wedges and
for magnetic Schrödinger operators in the plane with $\delta$-interactions
supported on broken lines.
| 0 | 0 | 1 | 0 | 0 | 0 |
Finite-Time Distributed Linear Equation Solver for Minimum $l_1$ Norm Solutions | This paper proposes distributed algorithms for multi-agent networks to
achieve, in finite time, a solution to a linear equation $Ax=b$, where $A$ has
full row rank, with the minimum $l_1$-norm in the underdetermined case
(where $A$ has more columns than rows). The underlying network is assumed to be
undirected and fixed, and an analytical proof is provided for the proposed
algorithm to drive all agents' individual states to converge to a common value,
viz a solution of $Ax=b$, which is the minimum $l_1$-norm solution in the
underdetermined case. Numerical simulations are also provided as validation of
the proposed algorithms.
| 1 | 0 | 0 | 0 | 0 | 0 |
High Throughput Probabilistic Shaping with Product Distribution Matching | Product distribution matching (PDM) is proposed to generate target
distributions over large alphabets by combining the output of several parallel
distribution matchers (DMs) with smaller output alphabets. The parallel
architecture of PDM enables low-complexity and high-throughput implementation.
PDM is used as a shaping device for probabilistic amplitude shaping (PAS). For
64-ASK and a spectral efficiency of 4.5 bits per channel use (bpcu), PDM is as
power efficient as a single full-fledged DM. It is shown how PDM enables PAS
for parallel channels present in multi-carrier systems like digital subscriber
line (DSL) and orthogonal frequency-division multiplexing (OFDM). The key
feature is that PDM shares the DMs for lower bit-levels among different
sub-carriers, which improves the power efficiency significantly. A
representative parallel channel example shows that PAS with PDM is 0.93 dB more
power efficient than conventional uniform signaling and PDM is 0.35 dB more
power efficient than individual per channel DMs.
| 1 | 0 | 0 | 0 | 0 | 0 |
Minimax Estimation of Large Precision Matrices with Bandable Cholesky Factor | The last decade has witnessed significant methodological and theoretical advances in
estimating large precision matrices. In particular, there are scientific
applications such as longitudinal data, meteorology and spectroscopy in which
the ordering of the variables can be interpreted through a bandable structure
on the Cholesky factor of the precision matrix. However, the minimax theory has
remained largely unknown, in contrast to the well-established minimax results
over the corresponding bandable covariance matrices. In this paper, we focus on
two commonly used types of parameter spaces, and develop the optimal rates of
convergence under both the operator norm and the Frobenius norm. A striking
phenomenon is found: two types of parameter spaces are fundamentally different
under the operator norm but enjoy the same rate optimality under the Frobenius
norm, which is in sharp contrast to the equivalence of corresponding two types
of bandable covariance matrices under both norms. This fundamental difference
is established by carefully constructing the corresponding minimax lower
bounds. Two new estimation procedures are developed: for the operator norm, our
optimal procedure is based on a novel local cropping estimator targeting on all
principal submatrices of the precision matrix, while for the Frobenius norm, our
optimal procedure relies on a delicate regression-based block-thresholding
rule. We further establish rate optimality in the nonparanormal model.
Numerical studies are carried out to confirm our theoretical findings.
| 0 | 0 | 1 | 1 | 0 | 0 |
Machine learning in protein engineering | Machine learning-guided protein engineering is a new paradigm that enables
the optimization of complex protein functions. Machine-learning methods use
data to predict protein function without requiring a detailed model of the
underlying physics or biological pathways. They accelerate protein engineering
by learning from information contained in all measured variants and using it to
select variants that are likely to be improved. In this review, we introduce
the steps required to collect protein data, train machine-learning models, and
use trained models to guide engineering. We make recommendations at each stage
and look to future opportunities for machine learning to enable the discovery
of new protein functions and uncover the relationship between protein sequence
and function.
| 0 | 0 | 0 | 0 | 1 | 0 |
Advanced Steel Microstructural Classification by Deep Learning Methods | The inner structure of a material is called microstructure. It stores the
genesis of a material and determines all its physical and chemical properties.
While microstructural characterization is widespread and well known, the
microstructural classification is mostly done manually by human experts, which
gives rise to uncertainties due to subjectivity. Since the microstructure could
be a combination of different phases or constituents with complex substructures,
its automatic classification is very challenging and only a few prior studies
exist. Prior works focused on designed and engineered features by experts and
classified microstructures separately from the feature extraction step.
Recently, Deep Learning methods have shown strong performance in vision
applications by learning the features from data together with the
classification step. In this work, we propose a Deep Learning method for
microstructural classification in the examples of certain microstructural
constituents of low carbon steel. This novel method employs pixel-wise
segmentation via Fully Convolutional Neural Networks (FCNN) accompanied by a
max-voting scheme. Our system achieves 93.94% classification accuracy,
drastically outperforming the state-of-the-art method of 48.89% accuracy.
Beyond the strong performance of our method, this line of research offers a
more robust and first of all objective way for the difficult task of steel
quality appreciation.
| 1 | 1 | 0 | 0 | 0 | 0 |
Categorizing Hirsch Index Variants | Utilizing the Hirsch index h and some of its variants for an exploratory
factor analysis, we discuss whether one of the most important Hirsch-type
indices, namely the g-index, comprises information about not only the size of
the productive core but also the impact of the papers in the core. We also
study the effect of logarithmic and square-root transformation of the data
utilized in the factor analysis. To demonstrate our approach we use a real data
example analysing the citation records of 26 physicists compiled from the Web
of Science.
| 1 | 0 | 0 | 1 | 0 | 0 |
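For readers unfamiliar with the indices being factor-analysed, a self-contained sketch of the h-index and g-index definitions follows; the citation counts are made up for illustration.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)


def g_index(citations):
    """Largest g such that the top g papers together have >= g^2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g


cites = [10, 8, 5, 4, 3]
print(h_index(cites))  # 4
print(g_index(cites))  # 5
```

Since g ≥ h always holds and the g-index weights the most-cited papers in the core, it is a natural candidate for carrying impact information beyond the size of the productive core, which is exactly what the factor analysis probes.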
Why a Population Genetics Framework is Inappropriate for Cultural Evolution | Although Darwinian models are rampant in the social sciences, social
scientists do not face the problem that motivated Darwin's theory of natural
selection: the problem of explaining how lineages evolve despite the fact that any
traits they acquire are regularly discarded at the end of the lifetime of the
individuals that acquired them. While the rationale for framing culture as an
evolutionary process is correct, it does not follow that culture is a Darwinian
or selectionist process, or that population genetics and phylogenetics provide
viable starting points for modeling cultural change. This paper lays out
step-by-step arguments as to why this approach is ill-conceived, focusing on
the lack of randomness and lack of a self-assembly code in cultural evolution,
and summarizes an alternative approach.
| 0 | 0 | 0 | 0 | 1 | 0 |
Is together better? Examining scientific collaborations across multiple authors, institutions, and departments | Collaborations are an integral part of scientific research and publishing. In
the past, access to large-scale corpora has limited the ways in which questions
about collaborations could be investigated. However, with improvements in
data/metadata quality and access, it is possible to explore the idea of
research collaboration in ways beyond the traditional definition of multiple
authorship. In this paper, we examine scientific works through three different
lenses of collaboration: across multiple authors, multiple institutions, and
multiple departments. We believe this to be a first look at multiple
departmental collaborations as we employ extensive data curation to
disambiguate authors' departmental affiliations for nearly 70,000 scientific
papers. We then compare citation metrics across the different definitions of
collaboration and find that papers defined as being collaborative were more
frequently cited than their non-collaborative counterparts, regardless of the
definition of collaboration used. We also share preliminary results from
examining the relationship between co-citation and co-authorship by analyzing
the extent to which similar fields (as determined by co-citation) are
collaborating on works (as determined by co-authorship). These preliminary
results reveal trends of compartmentalization with respect to
intra-institutional collaboration and show promise in being expanded.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Lichnerowicz estimate for the spectral gap of the sub-Laplacian | For a second order operator on a compact manifold satisfying the strong
Hörmander condition, we give a bound for the spectral gap analogous to the
Lichnerowicz estimate for the Laplacian of a Riemannian manifold. We consider a
wide class of such operators which includes horizontal lifts of the Laplacian
on Riemannian submersions with minimal leaves.
| 0 | 0 | 1 | 0 | 0 | 0 |
Quasiclassical theory of spin dynamics in superfluid $^3$He: kinetic equations in the bulk and spin response of surface Majorana states | We develop a theory based on the formalism of quasiclassical Green's
functions to study the spin dynamics in superfluid $^3$He. First, we derive
kinetic equations for the spin-dependent distribution function in the bulk
superfluid reproducing the results obtained earlier without quasiclassical
approximation. Then we consider spin dynamics near the surface of the fully
gapped $^3$He-B phase, taking into account spin relaxation due to the
transitions in the spectrum of localized fermionic states. The lifetime of
longitudinal and transverse spin waves is calculated, taking into account the
Fermi-liquid corrections, which lead to a crucial modification of the fermionic
spectrum and spin responses.
| 0 | 1 | 0 | 0 | 0 | 0 |
The RBO Dataset of Articulated Objects and Interactions | We present a dataset with models of 14 articulated objects commonly found in
human environments and with RGB-D video sequences and wrenches recorded during
human interactions with them. The 358 interaction sequences total 67 minutes of
human manipulation under varying experimental conditions (type of interaction,
lighting, perspective, and background). Each interaction with an object is
annotated with the ground truth poses of its rigid parts and the kinematic
state obtained by a motion capture system. For a subset of 78 sequences (25
minutes), we also measured the interaction wrenches. The object models contain
textured three-dimensional triangle meshes of each link and their motion
constraints. We provide Python scripts to download and visualize the data. The
data is available at this https URL and hosted
at this https URL.
| 1 | 0 | 0 | 0 | 0 | 0 |
Training Generative Adversarial Networks via Primal-Dual Subgradient Methods: A Lagrangian Perspective on GAN | We relate the minimax game of generative adversarial networks (GANs) to
finding the saddle points of the Lagrangian function for a convex optimization
problem, where the discriminator outputs and the distribution of generator
outputs play the roles of primal variables and dual variables, respectively.
This formulation shows the connection between the standard GAN training process
and the primal-dual subgradient methods for convex optimization. The inherent
connection does not only provide a theoretical convergence proof for training
GANs in the function space, but also inspires a novel objective function for
training. The modified objective function forces the distribution of generator
outputs to be updated along the direction according to the primal-dual
subgradient methods. A toy example shows that the proposed method is able to
resolve mode collapse, which in this case cannot be avoided by the standard GAN
or Wasserstein GAN. Experiments on both Gaussian mixture synthetic data and
real-world image datasets demonstrate the performance of the proposed method on
generating diverse samples.
| 0 | 0 | 0 | 1 | 0 | 0 |
A Detailed Observational Analysis of V1324 Sco, the Most Gamma-Ray Luminous Classical Nova to Date | It has recently been discovered that some, if not all, classical novae emit
GeV gamma rays during outburst, but the mechanisms involved in the production
of the gamma rays are still not well understood. We present here a
comprehensive multi-wavelength dataset---from radio to X-rays---for the most
gamma-ray luminous classical nova to-date, V1324 Sco. Using this dataset, we
show that V1324 Sco is a canonical dusty Fe-II type nova, with a maximum ejecta
velocity of 2600 km s$^{-1}$ and an ejecta mass of a few $\times 10^{-5}$
M$_{\odot}$. There is also evidence for complex shock interactions, including a
double-peaked radio light curve which shows high brightness temperatures at
early times. To explore why V1324~Sco was so gamma-ray luminous, we present a
model of the nova ejecta featuring strong internal shocks, and find that higher
gamma-ray luminosities result from higher ejecta velocities and/or mass-loss
rates. Comparison of V1324~Sco with other gamma-ray detected novae does not
show clear signatures of either, and we conclude that a larger sample of
similarly well-observed novae is needed to understand the origin and variation
of gamma rays in novae.
| 0 | 1 | 0 | 0 | 0 | 0 |
The California-Kepler Survey. III. A Gap in the Radius Distribution of Small Planets | The size of a planet is an observable property directly connected to the
physics of its formation and evolution. We used precise radius measurements
from the California-Kepler Survey (CKS) to study the size distribution of 2025
$\textit{Kepler}$ planets in fine detail. We detect a factor of $\geq$2 deficit
in the occurrence rate distribution at 1.5-2.0 R$_{\oplus}$. This gap splits
the population of close-in ($P$ < 100 d) small planets into two size regimes:
R$_P$ < 1.5 R$_{\oplus}$ and R$_P$ = 2.0-3.0 R$_{\oplus}$, with few planets in
between. Planets in these two regimes have nearly the same intrinsic frequency
based on occurrence measurements that account for planet detection
efficiencies. The paucity of planets between 1.5 and 2.0 R$_{\oplus}$ supports
the emerging picture that close-in planets smaller than Neptune are composed of
rocky cores measuring 1.5 R$_{\oplus}$ or smaller with varying amounts of
low-density gas that determine their total sizes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Large sets avoiding linear patterns | We prove that for any dimension function $h$ with $h \prec x^d$ and for any
countable set of linear patterns, there exists a compact set $E$ with
$\mathcal{H}^h(E)>0$ avoiding all the given patterns. We also give several
applications and recover results of Keleti, Maga, and Máthé.
| 0 | 0 | 1 | 0 | 0 | 0 |
Faster Algorithms for Weighted Recursive State Machines | Pushdown systems (PDSs) and recursive state machines (RSMs), which are
linearly equivalent, are standard models for interprocedural analysis. Yet RSMs
are more convenient as they (a) explicitly model function calls and returns,
and (b) specify many natural parameters for algorithmic analysis, e.g., the
number of entries and exits. We consider a general framework where RSM
transitions are labeled from a semiring and path properties are algebraic with
semiring operations, which can model, e.g., interprocedural reachability and
dataflow analysis problems.
Our main contributions are new algorithms for several fundamental problems.
As compared to a direct translation of RSMs to PDSs and the best-known existing
bounds of PDSs, our analysis algorithm improves the complexity for
finite-height semirings (that subsumes reachability and standard dataflow
properties). We further consider the problem of extracting distance values from
the representation structures computed by our algorithm, and give efficient
algorithms that distinguish the complexity of a one-time preprocessing from the
complexity of each individual query. Another advantage of our algorithm is that
our improvements carry over to the concurrent setting, where we improve the
best-known complexity for the context-bounded analysis of concurrent RSMs.
Finally, we provide a prototype implementation that gives a significant
speed-up on several benchmarks from the SLAM/SDV project.
| 1 | 0 | 0 | 0 | 0 | 0 |
Local Density Approximation for Almost-Bosonic Anyons | We discuss the average-field approximation for a trapped gas of
non-interacting anyons in the quasi-bosonic regime. In the homogeneous case,
i.e., for a confinement to a bounded region, we prove that the energy in the
regime of large statistics parameter, i.e., for "less-bosonic" anyons, is
independent of boundary conditions and of the shape of the domain. When a
non-trivial trapping potential is present, we derive a local density
approximation in terms of a Thomas-Fermi-like model.
| 0 | 1 | 1 | 0 | 0 | 0 |
Chemical dynamics between wells across a time-dependent barrier: Self-similarity in the Lagrangian descriptor and reactive basins | In chemical or physical reaction dynamics, it is essential to distinguish
precisely between reactants and products for all time. This task is especially
demanding in time-dependent or driven systems because therein the dividing
surface (DS) between these states often exhibits a nontrivial time-dependence.
The so-called transition state (TS) trajectory has been seen to define a DS
which is free of recrossings in a large number of one-dimensional reactions
across time-dependent barriers, and, thus, allows one to determine exact
reaction rates. A fundamental challenge to applying this method is the
construction of the TS trajectory itself. The minimization of Lagrangian
descriptors (LDs) provides a general and powerful scheme to obtain that
trajectory even when perturbation theory fails. Both approaches encounter
possible breakdowns when the overall potential is bounded, admitting the
possibility of returns to the barrier long after trajectories have reached the
product or reactant wells. Such global dynamics cannot be captured by
perturbation theory. Meanwhile, in the LD-DS approach, it leads to the
emergence of additional local minima which make it difficult to extract the
optimal branch associated with the desired TS trajectory. In this work, we
illustrate this behavior for a time-dependent double-well potential revealing a
self-similar structure of the LD, and we demonstrate how the reflections and
side-minima can be addressed by an appropriate modification of the LD
associated with the direct rate across the barrier.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dust-trapping vortices and a potentially planet-triggered spiral wake in the pre-transitional disk of V1247 Orionis | The radial drift problem constitutes one of the most fundamental problems in
planet formation theory, as it predicts particles to drift into the star before
they are able to grow to planetesimal size. Dust-trapping vortices have been
proposed as a possible solution to this problem, as they might be able to trap
particles over millions of years, allowing them to grow beyond the radial drift
barrier. Here, we present ALMA 0.04"-resolution imaging of the pre-transitional
disk of V1247 Orionis that reveals an asymmetric ring as well as a
sharply-confined crescent structure, resembling morphologies seen in
theoretical models of vortex formation. The asymmetric ring (at 0.17"=54 au
separation from the star) and the crescent (at 0.38"=120 au) seem smoothly
connected through a one-armed spiral arm structure that has been found
previously in scattered light. We propose a physical scenario with a planet
orbiting at $\sim0.3$"$\approx$100 au, where the one-armed spiral arm detected
in polarised light traces the accretion stream feeding the protoplanet. The
dynamical influence of the planet clears the gap between the ring and the
crescent and triggers two vortices that trap mm-sized particles, namely the
crescent and the bright asymmetry seen in the ring. We conducted dedicated
hydrodynamics simulations of a disk with an embedded planet, which results in
similar spiral-arm morphologies as seen in our scattered light images. At the
position of the spiral wake and the crescent we also observe $^{12}$CO (3-2)
and H$^{12}$CO$^{+}$ (4-3) excess line emission, likely tracing the increased
scale-height in these disk regions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Weak and smooth solutions for a fractional Yamabe flow: the case of general compact and locally conformally flat manifolds | As a counterpart of the classical Yamabe problem, a fractional Yamabe flow
has been introduced by Jin and Xiong (2014) on the sphere. Here we pursue its
study in the context of general compact smooth manifolds with positive
fractional curvature. First, we prove that the flow is locally well posed in
the weak sense on any compact manifold. If the manifold is locally conformally
flat with positive Yamabe invariant, we also prove that the flow is smooth and
converges to a constant scalar curvature metric. We provide different proofs
using extension properties introduced by Chang and González (2011) for the
conformally covariant fractional order operators.
| 0 | 0 | 1 | 0 | 0 | 0 |
Lipschitz Properties for Deep Convolutional Networks | In this paper we discuss the stability properties of convolutional neural
networks. Convolutional neural networks are widely used in machine learning. In
classification they are mainly used as feature extractors. Ideally, we expect
similar features when the inputs are from the same class. That is, we hope to
see a small change in the feature vector with respect to a deformation on the
input signal. This can be established mathematically, and the key step is to
derive the Lipschitz properties. Further, we establish that the stability
results can be extended for more general networks. We give a formula for
computing the Lipschitz bound, and compare it with other methods to show it is
closer to the optimal value.
| 1 | 0 | 1 | 0 | 0 | 0 |
Quadrature Compound: An approximating family of distributions | Compound distributions allow construction of a rich set of distributions.
Typically they involve an intractable integral. Here we use a quadrature
approximation to that integral to define the quadrature compound family.
Special care is taken that this approximation is suitable for computation of
gradients with respect to distribution parameters. This technique is applied to
discrete (Poisson LogNormal) and continuous distributions. In the continuous
case, the quadrature compound family naturally makes use of parameterized
transformations of unparameterized distributions (a.k.a "reparameterization"),
allowing for gradients of expectations to be estimated as the gradient of a
sample mean. This is demonstrated in a novel distribution, the diffeomixture,
which is a reparameterizable approximation to a mixture distribution.
| 0 | 0 | 0 | 1 | 0 | 0 |
Centrality in Modular Networks | Identifying influential nodes in a network is a fundamental issue due to its
wide applications, such as accelerating information diffusion or halting virus
spreading. Many measures based on the network topology have emerged over the
years to identify influential nodes such as Betweenness, Closeness, and
Eigenvalue centrality. However, although most real-world networks are modular,
few measures exploit this property. Recent works have shown that it has a
significant effect on the dynamics on networks. In a modular network, a node
has two types of influence: a local influence (on the nodes of its community)
through its intra-community links and a global influence (on the nodes in other
communities) through its inter-community links. Depending on the strength of
the community structure, these two components are more or less influential.
Based on this idea, we propose to extend all the standard centrality measures
defined for networks with no community structure to modular networks. The
so-called "Modular centrality" is a two dimensional vector. Its first component
quantifies the local influence of a node in its community while the second
component quantifies its global influence on the other communities of the
network. In order to illustrate the effectiveness of the Modular centrality
extensions, comparisons with their scalar counterparts are performed in an
epidemic process setting. Simulation results using the
Susceptible-Infected-Recovered (SIR) model on synthetic networks with
controlled community structure give a clear idea of the relation
between the strength of the community structure and the major type of influence
(global/local). Furthermore, experiments on real-world networks demonstrate the
merit of this approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
Yield Trajectory Tracking for Hyperbolic Age-Structured Population Systems | For population systems modeled by age-structured hyperbolic partial
differential equations (PDEs) that are bilinear in the input and evolve with a
positive-valued infinite-dimensional state, global stabilization of constant
yield set points was achieved in prior work. Seasonal demands in
biotechnological production processes give rise to time-varying yield
references. For the proposed control objective aiming at a global attractivity
of desired yield trajectories, multiple non-standard features have to be
considered: a non-local boundary condition, a PDE state restricted to the
positive orthant of the function space, and arbitrarily restrictive but physically
meaningful input constraints. Moreover, we provide Control Lyapunov Functionals
ensuring an exponentially fast attraction of adequate reference trajectories.
To achieve this goal, we make use of the relation between first-order
hyperbolic PDEs and integral delay equations leading to a decoupling of the
input-dependent dynamics and the infinite-dimensional internal one.
Furthermore, the dynamic control structure does not necessitate exact knowledge
of the model parameters or online measurements of the age-profile. With a
Galerkin-based numerical simulation scheme using the key ideas of the
Karhunen-Loève-decomposition, we demonstrate the controller's performance.
| 1 | 0 | 1 | 0 | 0 | 0 |
A sixteen-relator presentation of an infinite hyperbolic Kazhdan group | We provide an explicit presentation of an infinite hyperbolic Kazhdan group
with $4$ generators and $16$ relators of length at most $73$. That group acts
properly and cocompactly on a hyperbolic triangle building of type $(3,4,4)$.
We also point out a variation of the construction that yields examples of
lattices in $\tilde A_2$-buildings admitting non-Desarguesian residues of
arbitrary prime power order.
| 0 | 0 | 1 | 0 | 0 | 0 |
Fast and accurate Bayesian model criticism and conflict diagnostics using R-INLA | Bayesian hierarchical models are increasingly popular for realistic modelling
and analysis of complex data. This trend is accompanied by the need for
flexible, general, and computationally efficient methods for model criticism
and conflict detection. Usually, a Bayesian hierarchical model incorporates a
grouping of the individual data points, for example individuals in repeated
measurement data. In such cases, the following question arises: Are any of the
groups "outliers", or in conflict with the remaining groups? Existing general
approaches aiming to answer such questions tend to be extremely computationally
demanding when model fitting is based on MCMC. We show how group-level model
criticism and conflict detection can be done quickly and accurately through
integrated nested Laplace approximations (INLA). The new method is implemented
as a part of the open source R-INLA package for Bayesian computing
(this http URL).
| 0 | 0 | 1 | 1 | 0 | 0 |
Optimizing deep video representation to match brain activity | The comparison of observed brain activity with the statistics generated by
artificial intelligence systems is useful to probe brain functional
organization under ecological conditions. Here we study fMRI activity in ten
subjects watching color natural movies and compute deep representations of
these movies with an architecture that relies on optical flow and image
content. The association of activity in visual areas with the different layers
of the deep architecture displays complexity-related contrasts across visual
areas and reveals a striking foveal/peripheral dichotomy.
| 0 | 0 | 0 | 0 | 1 | 0 |
Non-Abelian Fermionization and Fractional Quantum Hall Transitions | There has been a recent surge of interest in dualities relating theories of
Chern-Simons gauge fields coupled to either bosons or fermions within the
condensed matter community, particularly in the context of topological
insulators and the half-filled Landau level. Here, we study the application of
one such duality to the long-standing problem of quantum Hall inter-plateaux
transitions. The key motivating experimental observations are the anomalously
large value of the correlation length exponent $\nu \approx 2.3$ and that $\nu$
is observed to be super-universal, i.e., the same in the vicinity of distinct
critical points. Duality motivates effective descriptions for a fractional
quantum Hall plateau transition involving a Chern-Simons field with $U(N_c)$
gauge group coupled to $N_f = 1$ fermion. We study one class of theories in a
controlled limit where $N_f \gg N_c$ and calculate $\nu$ to leading non-trivial
order in the absence of disorder. Although these theories do not yield an
anomalously large exponent $\nu$ within the large $N_f \gg N_c$ expansion, they
do offer a new parameter space of theories that is apparently different from
prior works involving abelian Chern-Simons gauge fields.
| 0 | 1 | 0 | 0 | 0 | 0 |
Periodic solutions and regularization of a Kepler problem with time-dependent perturbation | We consider a Kepler problem in dimension two or three, with a time-dependent
$T$-periodic perturbation. We prove that for any prescribed positive integer
$N$, there exist at least $N$ periodic solutions (with period $T$) as long as
the perturbation is small enough. Here the solutions are understood in a
general sense as they can have collisions. The concept of generalized solutions
is defined intrinsically and it coincides with the notion obtained in Celestial
Mechanics via the theory of regularization of collisions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Ultra-wide-band slow light in photonic crystal coupled-cavity waveguides | Slow light propagation in structured materials is a highly promising approach
for realizing on-chip integrated photonic devices based on enhanced optical
nonlinearities. One of the most successful research avenues consists in
engineering the band dispersion of light-guiding photonic crystal (PC)
structures. The primary goal of such devices is to achieve slow-light operation
over the largest possible bandwidth, with large group index, minimal index
dispersion, and a constant transmission spectrum. Here, we report the
experimental demonstration of a record-high group-index-bandwidth product (GBP)
to date in silicon-based coupled-cavity waveguides (CCWs) operating at telecom
wavelengths. Our results
rely on novel CCW designs, optimized using a genetic algorithm, and refined
nanofabrication processes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sketch Layer Separation in Multi-Spectral Historical Document Images | High-resolution imaging has delivered new prospects for detecting the
material composition and structure of cultural treasures. Despite the various
techniques for analysis, a significant diagnostic gap remained in the range of
available research capabilities for works on paper. Old master drawings were
mostly composed in a multi-step manner with various materials. This resulted in
the overlapping of different layers which made the subjacent strata difficult
to differentiate. The separation of stratified layers using imaging methods
could provide insights into the artistic work processes and help answer
questions about the object, its attribution, or in identifying forgeries. The
pattern recognition procedure was tested with mock replicas to achieve the
separation and the capability of displaying concealed red chalk under ink. In
contrast to RGB-sensor based imaging, the multi- or hyperspectral technology
allows accurate layer separation by recording the characteristic signatures of
the material's reflectance. The risk of damage to the artworks as a result of
the examination can be reduced by using combinations of defined spectra for
lighting and image capture. By guaranteeing the maximum level of
readability, our results suggest that the technique can be applied to a broader
range of objects and assist in diagnostic research into cultural treasures in
the future.
| 1 | 0 | 0 | 0 | 0 | 0 |
Comment on "Spatial optical solitons in highly nonlocal media" and related papers | In a recent paper [A. Alberucci, C. Jisha, N. Smyth, and G. Assanto, Phys.
Rev. A 91, 013841 (2015)], Alberucci et al. have studied the propagation of
bright spatial solitary waves in highly nonlocal media. We find that the main
results in that and related papers, concerning soliton shape and dynamics,
based on the accessible soliton (AS) approximation, are incorrect; the correct
results have already been published by others. These and other inconsistencies
in the paper follow from the problems in applying the AS approximation in
earlier papers by the group that propagated to the later papers. The accessible
soliton theory cannot describe accurately the features and dynamics of solitons
in highly nonlocal media.
| 0 | 1 | 0 | 0 | 0 | 0 |
Local Descriptor for Robust Place Recognition using LiDAR Intensity | Place recognition is a challenging problem in mobile robotics, especially in
unstructured environments or under viewpoint and illumination changes. Most
LiDAR-based methods rely on geometrical features to overcome such challenges,
as scene geometry is generally invariant to these changes, which nevertheless
tend to affect camera-based solutions significantly. Compared to cameras,
however, LiDARs lack
the strong and descriptive appearance information that imaging can provide.
To combine the benefits of geometry and appearance, we propose coupling the
conventional geometric information from the LiDAR with its calibrated intensity
return. This strategy extracts extremely useful information in the form of a
new descriptor design, coined ISHOT, outperforming popular state-of-art
geometric-only descriptors by significant margin in our local descriptor
evaluation. To complete the framework, we furthermore develop a probabilistic
keypoint voting place recognition algorithm, leveraging the new descriptor and
yielding sublinear place recognition performance. The efficacy of our approach
is validated in challenging global localization experiments in large-scale
built-up and unstructured environments.
| 1 | 0 | 0 | 0 | 0 | 0 |
Assessing the performance of self-consistent hybrid functional for band gap calculation in oxide semiconductors | In this paper we assess the predictive power of the self-consistent hybrid
functional scPBE0 in calculating the band gap of oxide semiconductors. The
computational procedure is based on the self-consistent evaluation of the
mixing parameter $\alpha$ by means of an iterative calculation of the static
dielectric constant using the perturbation expansion after discretization
(PEAD) method and making use of the relation $\alpha = 1/\epsilon_{\infty}$.
Our materials dataset is formed by 30 compounds covering a wide range of band
gaps and dielectric properties, and includes materials with a wide spectrum of
application as thermoelectrics, photocatalysis, photovoltaics, transparent
conducting oxides, and refractory materials. Our results show that the scPBE0
functional provides better band gaps than the non self-consistent hybrids PBE0
and HSE06, but scPBE0 does not show significant improvement in the description
of the static dielectric constants. Overall, the scPBE0 data exhibit a mean
absolute percentage error of 14 \% (band gaps) and 10 \% ($\epsilon_\infty$).
For materials with weak dielectric screening and large excitonic binding
energies, scPBE0, unlike PBE0 and HSE06, overestimates the band gaps, but the
value of the gap becomes very close to the experimental value when excitonic
effects are included (e.g. for SiO$_2$). However, special caution must be given
to the compounds with small band gaps due to the tendency of scPBE0 to
overestimate the dielectric constant in proximity of the metallic limit.
| 0 | 1 | 0 | 0 | 0 | 0 |
Local Synchronization of Sampled-Data Systems on Lie Groups | We present a smooth distributed nonlinear control law for local
synchronization of identical driftless kinematic agents on a Cartesian product
of matrix Lie groups with a connected communication graph. If the agents are
initialized sufficiently close to one another, then synchronization is achieved
exponentially fast. We first analyze the special case of commutative Lie groups
and show that in exponential coordinates, the closed-loop dynamics are linear.
We characterize all equilibria of the network and, in the case of an
unweighted, complete graph, characterize the settling time and conditions for
deadbeat performance. Using the Baker-Campbell-Hausdorff theorem, we show that,
in a neighbourhood of the identity element, all results generalize to arbitrary
matrix Lie groups.
| 1 | 0 | 1 | 0 | 0 | 0 |
On the orbits that generate the X-shape in the Milky Way bulge | The Milky Way bulge shows a box/peanut or X-shaped bulge (hereafter BP/X)
when viewed in infrared or microwave bands. We examine orbits in an N-body
model of a barred disk galaxy that is scaled to match the kinematics of the
Milky Way (MW) bulge. We generate maps of projected stellar surface density,
unsharp masked images, 3D excess-mass distributions (showing mass outside
ellipsoids), line-of-sight number count distributions, and 2D line-of-sight
kinematics for the simulation as well as co-added orbit families, in order to
identify the orbits primarily responsible for the BP/X shape. We estimate that
between 19\% and 23\% of the mass of the bar is associated with the BP/X shape and
that most bar orbits contribute to this shape which is clearly seen in
projected surface density maps and 3D excess mass for non-resonant box orbits,
"banana" orbits, "fish/pretzel" orbits and "brezel" orbits. {We find that
nearly all bar orbit families contribute some mass to the 3D BP/X-shape. All
co-added orbit families show a bifurcation in stellar number count distribution
with heliocentric distance that resembles the bifurcation observed in red clump
stars in the MW. However, only the box orbit family shows an increasing
separation of peaks with increasing galactic latitude $|b|$, similar to that
observed. Our analysis shows that no single orbit family fully explains all
the observed features associated with the MW's BP/X shaped bulge, but
collectively the non-resonant boxes and various resonant boxlet orbits
contribute at different distances from the center to produce this feature. We
propose that since box orbits have three incommensurable orbital fundamental
frequencies, their 3-dimensional shapes are highly flexible and, like Lissajous
figures, this family of orbits is most easily able to adapt to evolution in the
shape of the underlying potential.
| 0 | 1 | 0 | 0 | 0 | 0 |
How Peer Effects Influence Energy Consumption | This paper analyzes the impact of peer effects on electricity consumption of
a network of rational, utility-maximizing users. Users derive utility from
consuming electricity as well as consuming less energy than their neighbors.
However, a disutility is incurred for consuming more than their neighbors. To
maximize the profit of the load-serving entity that provides electricity to
such users, we develop a two-stage game-theoretic model, where the entity sets
the prices in the first stage. In the second stage, consumers decide on their
demand in response to the observed price set in the first stage so as to
maximize their utility. To this end, we derive theoretical statements under
which such peer effects reduce aggregate user consumption. Further, we obtain
expressions for the resulting electricity consumption and profit of the load
serving entity for the case of perfect price discrimination and a single price
under complete information, and approximations under incomplete information.
Simulations suggest that exposing only a selected subset of all users to peer
effects maximizes the entity's profit.
| 1 | 0 | 0 | 0 | 0 | 0 |
Physical Origins of Gas Motions in Galaxy Cluster Cores: Interpreting Hitomi Observations of the Perseus Cluster | The Hitomi X-ray satellite has provided the first direct measurements of the
plasma velocity dispersion in a galaxy cluster. It finds a relatively
"quiescent" gas with a line-of-sight velocity dispersion ~ 160 km/s, at 30 kpc
to 60 kpc from the cluster center. This is surprising given the presence of
jets and X-ray cavities that indicates on-going activity and feedback from the
active galactic nucleus (AGN) at the cluster center. Using a set of mock Hitomi
observations generated from a suite of state-of-the-art cosmological cluster
simulations, and an isolated but higher resolution simulation of gas physics in
the cluster core, including the effects of cooling and AGN feedback, we examine
the likelihood of Hitomi detecting a cluster with the observed velocities. As
long as the Perseus cluster has not experienced a major merger in the last few
gigayears, and AGN feedback is operating in a "gentle" mode, we reproduce the
level of gas motions observed by Hitomi. The frequent mechanical AGN feedback
generates net line-of-sight velocity dispersions ~100-200 km/s, bracketing the
values measured in the Perseus core. The large-scale velocity shear observed
across the core, on the other hand, is generated mainly by cosmic accretion
such as mergers. We discuss the implications of these results for AGN feedback
physics and cluster cosmology and progress that needs to be made in both
simulations and observations, including a Hitomi re-flight and
calorimeter-based instruments with higher spatial resolution.
| 0 | 1 | 0 | 0 | 0 | 0 |
Variational Approaches for Auto-Encoding Generative Adversarial Networks | Auto-encoding generative adversarial networks (GANs) combine the standard GAN
algorithm, which discriminates between real and model-generated data, with a
reconstruction loss given by an auto-encoder. Such models aim to prevent mode
collapse in the learned generative model by ensuring that it is grounded in all
the available training data. In this paper, we develop a principle upon which
auto-encoders can be combined with generative adversarial networks by
exploiting the hierarchical structure of the generative model. The underlying
principle shows that variational inference can be used as a basic tool for
learning, but with the intractable likelihood replaced by a synthetic
likelihood, and the unknown posterior distribution replaced by an implicit
distribution; both synthetic likelihoods and implicit posterior distributions
can be learned using discriminators. This allows us to develop a natural fusion
of variational auto-encoders and generative adversarial networks, combining the
best of both these methods. We describe a unified objective for optimization,
discuss the constraints needed to guide learning, connect to the wide range of
existing work, and use a battery of tests to systematically and quantitatively
assess the performance of our method.
| 1 | 0 | 0 | 1 | 0 | 0 |
Verification of the anecdote about Edwin Hubble and the Nobel Prize | Edwin Powell Hubble is regarded as one of the most important astronomers of
the 20th century. Despite his great contributions to the field of astronomy,
he never received the Nobel Prize, because astronomy was not considered a
field of the Nobel Prize in Physics in that era. There is an anecdote about the
relation between Hubble and the Nobel Prize. According to this anecdote, the
Nobel Committee decided to award the Nobel Prize in Physics in 1953 to Hubble,
who would have been the first astronomer to become a Nobel laureate
(Christianson 1995). However, Hubble died just before the announcement, and the
Nobel Prize is not awarded posthumously. Documents of the Nobel selection
committee are opened after 50 years, so this anecdote can be verified. I
confirmed that the Nobel selection committee endorsed Frederik Zernike as the
Nobel laureate in Physics in 1953 on September 15th, 1953, 13 days before
Hubble's death on September 28th, 1953. I also confirmed that Hubble and Henry
Norris Russell were nominated but not endorsed, because the Committee concluded
that their astronomical works were not appropriate for the Nobel Prize in
Physics.
| 0 | 1 | 0 | 0 | 0 | 0 |
LSTM Fully Convolutional Networks for Time Series Classification | Fully convolutional neural networks (FCN) have been shown to achieve
state-of-the-art performance on the task of classifying time series sequences.
We propose the augmentation of fully convolutional networks with long short
term memory recurrent neural network (LSTM RNN) sub-modules for time series
classification. Our proposed models significantly enhance the performance of
fully convolutional networks with a nominal increase in model size and require
minimal preprocessing of the dataset. The proposed Long Short Term Memory Fully
Convolutional Network (LSTM-FCN) achieves state-of-the-art performance compared
to other methods. We also explore the use of the attention mechanism to improve time
series classification with the Attention Long Short Term Memory Fully
Convolutional Network (ALSTM-FCN). Utilization of the attention mechanism
allows one to visualize the decision process of the LSTM cell. Furthermore, we
propose fine-tuning as a method to enhance the performance of trained models.
An overall analysis of the performance of our model is provided and compared to
other techniques.
| 1 | 0 | 0 | 1 | 0 | 0 |
Flat $F$-manifolds, Miura invariants and integrable systems of conservation laws | We extend some of the results proved for scalar equations in [3,4], to the
case of systems of integrable conservation laws. In particular, for such
systems we prove that the eigenvalues of a matrix obtained from the quasilinear
part of the system are invariants under Miura transformations and we show how
these invariants are related to dispersion relations. Furthermore, focusing on
one-parameter families of dispersionless systems of integrable conservation
laws associated to the Coxeter groups of rank $2$ found in [1], we study the
corresponding integrable deformations up to order $2$ in the deformation
parameter $\epsilon$. Each family contains both bi-Hamiltonian and
non-Hamiltonian systems of conservation laws and therefore we use it to probe
to which extent the properties of the dispersionless limit impact the nature
and the existence of integrable deformations. It turns out that, apart from two
values of the parameter, all deformations of order one in $\epsilon$ are
Miura-trivial, while all those of order two in $\epsilon$ are essentially
parameterized by two arbitrary functions of single variables (the Riemann
invariants) both in the bi-Hamiltonian and in the non-Hamiltonian case. In the
two remaining cases, due to the existence of non-trivial first order
deformations, there is an additional functional parameter.
| 0 | 1 | 0 | 0 | 0 | 0 |
Analysis and Design of Cost-Effective, High-Throughput LDPC Decoders | This paper introduces a new approach to cost-effective, high-throughput
hardware designs for Low Density Parity Check (LDPC) decoders. The proposed
approach, called Non-Surjective Finite Alphabet Iterative Decoders (NS-FAIDs),
exploits the robustness of message-passing LDPC decoders to inaccuracies in the
calculation of exchanged messages, and it is shown to provide a unified
framework for several designs previously proposed in the literature. NS-FAIDs
are optimized by density evolution for regular and irregular LDPC codes, and
are shown to provide different trade-offs between hardware complexity and
decoding performance. Two hardware architectures targeting high-throughput
applications are also proposed, integrating both Min-Sum (MS) and NS-FAID
decoding kernels. ASIC post synthesis implementation results on 65nm CMOS
technology show that NS-FAIDs yield significant improvements in the throughput
to area ratio, by up to 58.75% with respect to the MS decoder, with even better
or only slightly degraded error correction performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
TensorFlow Distributions | The TensorFlow Distributions library implements a vision of probability
theory adapted to the modern deep-learning paradigm of end-to-end
differentiable computation. Building on two basic abstractions, it offers
flexible building blocks for probabilistic computation. Distributions provide
fast, numerically stable methods for generating samples and computing
statistics, e.g., log density. Bijectors provide composable volume-tracking
transformations with automatic caching. Together these enable modular
construction of high dimensional distributions and transformations not possible
with previous libraries (e.g., pixelCNNs, autoregressive flows, and reversible
residual networks). They are the workhorse behind deep probabilistic
programming systems like Edward and empower fast black-box inference in
probabilistic models built on deep-network components. TensorFlow Distributions
has proven an important part of the TensorFlow toolkit within Google and in the
broader deep learning community.
| 1 | 0 | 0 | 1 | 0 | 0 |
A generalised Davydov-Scott model for polarons in linear peptide chains | We present a one-parameter family of mathematical models describing the
dynamics of polarons in linear periodic structures such as polypeptides. By
tuning the parameter, we are able to recover the Davydov and the Scott models.
We describe the physical significance of this parameter. In the continuum
limit, we derive analytical solutions which represent stationary polarons. On a
discrete lattice, we compute stationary polaron solutions numerically. We
investigate polaron propagation induced by several external forcing mechanisms.
We show that an electric field consisting of a constant and a periodic
component can induce polaron motion with minimal energy loss. We also show that
thermal fluctuations can facilitate the onset of polaron motion. Finally, we
discuss the bio-physical implications of our results.
| 0 | 1 | 0 | 0 | 0 | 0 |
Wide Binaries in Tycho-{\it Gaia}: Search Method and the Distribution of Orbital Separations | We mine the Tycho-{\it Gaia} astrometric solution (TGAS) catalog for wide
stellar binaries by matching positions, proper motions, and astrometric
parallaxes. We separate genuine binaries from unassociated stellar pairs
through a Bayesian formulation that includes correlated uncertainties in the
proper motions and parallaxes. Rather than relying on assumptions about the
structure of the Galaxy, we calculate Bayesian priors and likelihoods based on
the nature of Keplerian orbits and the TGAS catalog itself. We calibrate our
method using radial velocity measurements and obtain 6196 high-confidence
candidate wide binaries with projected separations $s\lesssim1$ pc. The
normalization of this distribution suggests that at least 0.6\% of TGAS stars
have an associated, distant TGAS companion in a wide binary. We demonstrate
that {\it Gaia}'s astrometry is precise enough that it can detect projected
orbital velocities in wide binaries with orbital periods as large as 10$^6$ yr.
For pairs with $s\ \lesssim\ 4\times10^4$~AU, characterization of random
alignments indicates our contamination to be $\approx$5\%. For $s \lesssim
5\times10^3$~AU, our distribution is consistent with Öpik's Law. At larger
separations, the distribution is steeper and consistent with a power-law
$P(s)\propto s^{-1.6}$; there is no evidence in our data of any bimodality in
this distribution for $s \lesssim$ 1 pc. Using radial velocities, we
demonstrate that at large separations, i.e., of order $s \sim$ 1 pc and beyond,
any potential sample of genuine wide binaries in TGAS cannot be easily
distinguished from ionized former wide binaries, moving groups, or
contamination from randomly aligned stars.
| 0 | 1 | 0 | 0 | 0 | 0 |
Classification on Large Networks: A Quantitative Bound via Motifs and Graphons | When each data point is a large graph, graph statistics such as densities of
certain subgraphs (motifs) can be used as feature vectors for machine learning.
While intuitive, motif counts are expensive to compute and difficult to work
with theoretically. Via graphon theory, we give an explicit quantitative bound
for the ability of motif homomorphisms to distinguish large networks under both
generative and sampling noise. Furthermore, we give similar bounds for the
graph spectrum and connect it to homomorphism densities of cycles. This results
in an easily computable classifier on graph data with theoretical performance
guarantee. Our method yields competitive results on classification tasks for
the autoimmune disease Lupus Erythematosus.
| 1 | 0 | 0 | 1 | 0 | 0 |
Predicting Future Machine Failure from Machine State Using Logistic Regression | Accurately predicting machine failures in advance can decrease maintenance
cost and help allocate maintenance resources more efficiently. Logistic
regression was applied to predict machine state 24 hours in the future given
the current machine state.
| 0 | 0 | 0 | 1 | 0 | 0 |
Design and Optimisation of the FlyFast Front-end for Attribute-based Coordination | Collective Adaptive Systems (CAS) consist of a large number of interacting
objects. The design of such systems requires scalable analysis tools and
methods, which have necessarily to rely on some form of approximation of the
system's actual behaviour. Promising techniques are those based on mean-field
approximation. The FlyFast model-checker uses an on-the-fly algorithm for
bounded PCTL model-checking of selected individual(s) in the context of very
large populations whose global behaviour is approximated using deterministic
limit mean-field techniques. Recently, a front-end for FlyFast has been
proposed which provides a modelling language, PiFF in the sequel, for the
Predicate-based Interaction for FlyFast. In this paper we present details of
PiFF design and an approach to state-space reduction based on probabilistic
bisimulation for inhomogeneous DTMCs.
| 1 | 0 | 0 | 0 | 0 | 0 |
Distributed rank-1 dictionary learning: Towards fast and scalable solutions for fMRI big data analytics | The use of functional brain imaging for research and diagnosis has benefitted
greatly from the recent advancements in neuroimaging technologies, as well as
the explosive growth in size and availability of fMRI data. While it has been
shown in literature that using multiple and large scale fMRI datasets can
improve reproducibility and lead to new discoveries, the computational and
informatics systems supporting the analysis and visualization of such fMRI big
data are extremely limited and largely under-discussed. We propose to address
these shortcomings in this work, based on previous success in using dictionary
learning methods for functional network decomposition studies on fMRI data. We
present a distributed dictionary learning framework based on rank-1 matrix
decomposition with sparseness constraint (D-r1DL framework). The framework was
implemented using the Spark distributed computing engine and deployed on three
different processing units: an in-house server, in-house high performance
clusters, and the Amazon Elastic Compute Cloud (EC2) service. The whole
analysis pipeline was integrated with our neuroinformatics system for data
management, user input/output, and real-time visualization. Performance and
accuracy of D-r1DL on both individual and group-wise fMRI Human Connectome
Project (HCP) datasets show that the proposed framework is highly scalable. The
resulting group-wise functional network decompositions are highly accurate, and
the processing times are fast. In addition, D-r1DL can provide
real-time user feedback and results visualization which are vital for
large-scale data analysis.
| 1 | 0 | 0 | 0 | 0 | 0 |
HARPO: 1.7 - 74 MeV gamma-ray beam validation of a high angular resolution, high linear polarisation dilution, gas time projection chamber telescope and polarimeter | A presentation at the SciNeGHE conference of the past achievements, of the
present activities and of the perspectives for the future of the HARPO project,
the development of a time projection chamber as a high-performance gamma-ray
telescope and linear polarimeter in the e+e- pair creation regime.
| 0 | 1 | 0 | 0 | 0 | 0 |
One-to-one composant mappings of $[0,\infty)$ and $(-\infty,\infty)$ | Knaster continua and solenoids are well-known examples of indecomposable
continua whose composants (maximal arcwise-connected subsets) are one-to-one
images of lines. We show that essentially all non-trivial one-to-one composant
images of (half-)lines are indecomposable. And if $f$ is a one-to-one mapping
of $[0,\infty)$ or $(-\infty,\infty)$, then there is an indecomposable
continuum of which $X:=$ran$(f)$ is a composant if and only if $f$ maps all
final or initial segments densely and every non-closed sequence of arcs in $X$
has a convergent subsequence in the hyperspace $K(X)\cup \{X\}$. We also prove
the existence of composant-preserving embeddings in Euclidean $3$-space.
Accompanying the proofs are illustrations and examples.
| 0 | 0 | 1 | 0 | 0 | 0 |
Valued fields, Metastable groups | We introduce a class of theories called metastable, including the theory of
algebraically closed valued fields (ACVF) as a motivating example. The key
local notion is that of definable types dominated by their stable part. A
theory is metastable (over a sort $\Gamma$) if every type over a sufficiently
rich base structure can be viewed as part of a $\Gamma$-parametrized family of
stably dominated types. We initiate a study of definable groups in metastable
theories of finite rank. Groups with a stably dominated generic type are shown
to have a canonical stable quotient. Abelian groups are shown to be
decomposable into a part coming from $\Gamma$, and a definable direct limit
system of groups with stably dominated generic. In the case of ACVF, among
definable subgroups of affine algebraic groups, we characterize the groups with
stably dominated generics in terms of group schemes over the valuation ring.
Finally, we classify all fields definable in ACVF.
| 0 | 0 | 1 | 0 | 0 | 0 |
Radio Galaxy Zoo: Cosmological Alignment of Radio Sources | We study the mutual alignment of radio sources within two surveys, FIRST and
TGSS. This is done by producing two position angle catalogues containing the
preferential directions of respectively $30\,059$ and $11\,674$ extended
sources distributed over more than $7\,000$ and $17\,000$ square degrees. The
identification of the sources in the FIRST sample was performed in advance by
volunteers of the Radio Galaxy Zoo project, while for the TGSS sample it is the
result of an automated process presented here. After taking into account
systematic effects, marginal evidence of a local alignment on scales smaller
than $2.5°$ is found in the FIRST sample. The probability of this happening
by chance is found to be less than $2$ per cent. Further study suggests that on
scales up to $1.5°$ the alignment is maximal. For one third of the sources,
the Radio Galaxy Zoo volunteers identified an optical counterpart. Assuming a
flat $\Lambda$CDM cosmology with $\Omega_m = 0.31, \Omega_\Lambda = 0.69$, we
convert the maximum angular scale on which alignment is seen into a physical
scale in the range $[19, 38]$ Mpc $h_{70}^{-1}$. This result supports recent
evidence reported by Taylor and Jagannathan of radio jet alignment in the $1.4$
deg$^2$ ELAIS N1 field observed with the Giant Metrewave Radio Telescope. The
TGSS sample is found to be too sparsely populated to manifest a similar signal.
| 0 | 1 | 0 | 0 | 0 | 0 |
Non-iterative Label Propagation in Optimal Leading Forest | Graph-based semi-supervised learning (GSSL) has an intuitive representation and
can be improved by exploiting the matrix calculation. However, it has to
perform iterative optimization to achieve a preset objective, which usually
leads to low efficiency. Another inconvenience lying in GSSL is that when new
data come, the graph construction and the optimization have to be conducted all
over again. We propose a sound assumption, arguing that the neighboring data
points are not in a peer-to-peer relation but in a partially ordered relation
induced by the local density and distance between the data, and that the label of a
center can be regarded as the contribution of its followers. Starting from the
assumption, we develop a highly efficient non-iterative label propagation
algorithm based on a novel data structure named as optimal leading forest
(LaPOLeaF). The major weaknesses of the traditional GSSL are addressed by this
study. We further scale LaPOLeaF to accommodate big data by utilizing block
distance matrix technique, parallel computing, and Locality-Sensitive Hashing
(LSH). Experiments on large datasets have shown the promising results of the
proposed methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
PeerHunter: Detecting Peer-to-Peer Botnets through Community Behavior Analysis | Peer-to-peer (P2P) botnets have become one of the major threats in network
security, serving as the infrastructure responsible for various
cyber-crimes. Though a few existing works claimed to detect traditional botnets
effectively, the problem of detecting P2P botnets involves more challenges. In
this paper, we present PeerHunter, a community behavior analysis based method,
which is capable of detecting botnets that communicate via a P2P structure.
PeerHunter starts from a P2P hosts detection component. Then, it uses mutual
contacts as the main feature to cluster bots into communities. Finally, it uses
community behavior analysis to detect potential botnet communities and further
identify bot candidates. Through extensive experiments with real and simulated
network traces, PeerHunter achieves a very high detection rate with few false
positives.
| 1 | 0 | 0 | 0 | 0 | 0 |
An X-ray/SDSS sample (II): outflowing gas plasma properties | Galaxy-scale outflows are nowadays observed in many active galactic nuclei
(AGNs); however, their characterisation in terms of (multi-)phase nature,
amount of flowing material, and effects on the host galaxy is still unsettled. In
particular, ionized gas mass outflow rate and related energetics are still
affected by many sources of uncertainties. In this respect, outflowing gas
plasma conditions, being largely unknown, play a crucial role.
Taking advantage of the spectroscopic analysis results we obtained studying
the X-ray/SDSS sample of 563 AGNs at z $<0.8$ presented in our companion paper,
we analyse stacked spectra and sub-samples of sources with high signal-to-noise
temperature- and density-sensitive emission lines to derive the plasma
properties of the outflowing ionized gas component. For these sources, we also
study in detail various diagnostic diagrams to infer information about
outflowing gas ionization mechanisms. We derive, for the first time, median
values for electron temperature and density of outflowing gas from medium-size
samples ($\sim 30$ targets) and stacked spectra of AGNs. Evidence of shock
excitation is found for the outflowing gas.
We measure electron temperatures of the order of $\sim 1.7\times10^4$ K and
densities of $\sim 1200$ cm$^{-3}$ for faint and moderately luminous AGNs
(intrinsic X-ray luminosity $40.5<\log(L_X)<44$ in the 2-10 keV band). We
caution that the usually assumed electron density ($N_e=100$ cm$^{-3}$) in
ejected material might result in relevant overestimates of flow mass rates and
energetics and, as a consequence, of the effects of AGN-driven outflows on the
host galaxy.
| 0 | 1 | 0 | 0 | 0 | 0 |
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models | Deep neural networks (DNNs) are one of the most prominent technologies of our
time, as they achieve state-of-the-art performance in many machine learning
tasks, including but not limited to image classification, text mining, and
speech processing. However, recent research on DNNs has indicated
ever-increasing concern on the robustness to adversarial examples, especially
for security-critical tasks such as traffic sign identification for autonomous
driving. Studies have unveiled the vulnerability of a well-trained DNN by
demonstrating the ability of generating barely noticeable (to both human and
machines) adversarial images that lead to misclassification. Furthermore,
researchers have shown that these adversarial images are highly transferable by
simply training and attacking a substitute model built upon the target model,
known as a black-box attack to DNNs.
Similar to the setting of training substitute models, in this paper we
propose an effective black-box attack that also only has access to the input
(images) and the output (confidence scores) of a targeted DNN. However,
different from leveraging attack transferability from substitute models, we
propose zeroth order optimization (ZOO) based attacks to directly estimate the
gradients of the targeted DNN for generating adversarial examples. We use
zeroth order stochastic coordinate descent along with dimension reduction,
hierarchical attack and importance sampling techniques to efficiently attack
black-box models. By exploiting zeroth order optimization, improved attacks to
the targeted DNN can be accomplished, sparing the need for training substitute
models and avoiding the loss in attack transferability. Experimental results on
MNIST, CIFAR10 and ImageNet show that the proposed ZOO attack is as effective
as the state-of-the-art white-box attack and significantly outperforms existing
black-box attacks via substitute models.
| 1 | 0 | 0 | 1 | 0 | 0 |
Existence of Noise Induced Order, a Computer Aided Proof | We prove, by a computer aided proof, the existence of noise induced order in
the model of chaotic chemical reactions where it was first discovered
numerically by Matsumoto and Tsuda in 1983. We prove that in this random
dynamical system the increase in amplitude of the noise causes the Lyapunov
exponent to decrease from positive to negative, stabilizing the system. The
method used is based on a certified approximation of the stationary measure in
the $L^{1}$ norm. This is done by an efficient algorithm which is general
enough to be adapted to any piecewise differentiable dynamical system on the
interval with additive noise. We also prove that the stationary measure of the
system and its Lyapunov exponent have a Lipschitz stability under several kinds
of perturbation of the noise and of the system itself. The Lipschitz constants
of this stability result are also estimated explicitly.
| 0 | 0 | 1 | 0 | 0 | 0 |
Residual Unfairness in Fair Machine Learning from Prejudiced Data | Recent work in fairness in machine learning has proposed adjusting for
fairness by equalizing accuracy metrics across groups and has also studied how
datasets affected by historical prejudices may lead to unfair decision
policies. We connect these lines of work and study the residual unfairness that
arises when a fairness-adjusted predictor is not actually fair on the target
population due to systematic censoring of training data by existing biased
policies. This scenario is particularly common in the same applications where
fairness is a concern. We characterize theoretically the impact of such
censoring on standard fairness metrics for binary classifiers and provide
criteria for when residual unfairness may or may not appear. We prove that,
under certain conditions, fairness-adjusted classifiers will in fact induce
residual unfairness that perpetuates the same injustices, against the same
groups, that biased the data to begin with, thus showing that even
state-of-the-art fair machine learning can have a "bias in, bias out" property.
When certain benchmark data is available, we show how sample reweighting can
estimate and adjust fairness metrics while accounting for censoring. We use
this to study the case of Stop, Question, and Frisk (SQF) and demonstrate that
attempting to adjust for fairness perpetuates the same injustices that the
policy is infamous for.
| 0 | 0 | 0 | 1 | 0 | 0 |
Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems | Predicting the future location of vehicles is essential for safety-critical
applications such as advanced driver assistance systems (ADAS) and autonomous
driving. This paper introduces a novel approach to simultaneously predict both
the location and scale of target vehicles in the first-person (egocentric) view
of an ego-vehicle. We present a multi-stream recurrent neural network (RNN)
encoder-decoder model that separately captures both object location and scale
and pixel-level observations for future vehicle localization. We show that
incorporating dense optical flow improves prediction results significantly
since it captures information about motion as well as appearance change. We
also find that explicitly modeling future motion of the ego-vehicle improves
the prediction accuracy, which could be especially beneficial in intelligent
and automated vehicles that have motion planning capability. To evaluate the
performance of our approach, we present a new dataset of first-person videos
collected from a variety of scenarios at road intersections, which are
particularly challenging moments for prediction because vehicle trajectories
are diverse and dynamic.
| 1 | 0 | 0 | 0 | 0 | 0 |
Metriplectic formalism: friction and much more | The metriplectic formalism couples Poisson brackets of the Hamiltonian
description with metric brackets for describing systems with both Hamiltonian
and dissipative components. The construction builds in asymptotic convergence
to a preselected equilibrium state. Phenomena such as friction, electric
resistivity, thermal conductivity and collisions in kinetic theories are well
represented in this framework. In this paper we present an application of the
metriplectic formalism of interest for the theory of control: a suitable torque
is applied to a free rigid body, which is expressed through a metriplectic
extension of its "natural" Poisson algebra. On practical grounds, the effect is
to drive the body to align its angular velocity to rotation about a stable
principal axis of inertia, while conserving its kinetic energy in the process.
On theoretical grounds, this example shows how the non-Hamiltonian part of a
metriplectic system may include convergence to a limit cycle, the first example
of a non-zero dimensional attractor in this formalism. The method suggests a
way to extend metriplectic dynamics to systems with general attractors, e.g.
chaotic ones, with the hope of representing bio-physical, geophysical and
ecological models.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hierarchical Video Understanding | We introduce a hierarchical architecture for video understanding that
exploits the structure of real world actions by capturing targets at different
levels of granularity. We design the model such that it first learns simpler
coarse-grained tasks, and then moves on to learn more fine-grained targets. The
model is trained with a joint loss on different granularity levels. We
demonstrate empirical results on the recent release of Something-Something
dataset, which provides a hierarchy of targets, namely coarse-grained action
groups, fine-grained action categories, and captions. Experiments suggest that
models that exploit targets at different levels of granularity achieve better
performance on all levels.
| 0 | 0 | 0 | 1 | 0 | 0 |
When Hashes Met Wedges: A Distributed Algorithm for Finding High Similarity Vectors | Finding similar user pairs is a fundamental task in social networks, with
numerous applications in ranking and personalization tasks such as link
prediction and tie strength detection. A common manifestation of user
similarity is based upon network structure: each user is represented by a
vector that represents the user's network connections, where pairwise cosine
similarity among these vectors defines user similarity. The predominant task
for user similarity applications is to discover all similar pairs that have a
pairwise cosine similarity value larger than a given threshold $\tau$. In
contrast to previous work where $\tau$ is assumed to be quite close to 1, we
focus on recommendation applications where $\tau$ is small, but still
meaningful. The all pairs cosine similarity problem is computationally
challenging on networks with billions of edges, and especially so for settings
with small $\tau$. To the best of our knowledge, there is no practical solution
for computing all user pairs with, say $\tau = 0.2$ on large social networks,
even using the power of distributed algorithms.
Our work directly addresses this challenge by introducing a new algorithm ---
WHIMP --- that solves this problem efficiently in the MapReduce model. The key
insight in WHIMP is to combine the "wedge-sampling" approach of Cohen-Lewis for
approximate matrix multiplication with the SimHash random projection techniques
of Charikar. We provide a theoretical analysis of WHIMP, proving that it has
near optimal communication costs while maintaining computation cost comparable
with the state of the art. We also empirically demonstrate WHIMP's scalability
by computing all highly similar pairs on four massive data sets, and show that
it accurately finds high similarity pairs. In particular, we note that WHIMP
successfully processes the entire Twitter network, which has tens of billions
of edges.
| 1 | 0 | 0 | 0 | 0 | 0 |
Hysteretic vortex matching effects in high-$T_c$ superconductors with nanoscale periodic pinning landscapes fabricated by He ion beam projection technique | Square arrays of sub-micrometer columnar defects in thin
YBa$_{2}$Cu$_{3}$O$_{7-\delta}$ (YBCO) films with spacings down to 300 nm have
been fabricated by a He ion beam projection technique. Pronounced peaks in the
critical current and corresponding minima in the resistance demonstrate the
commensurate arrangement of flux quanta with the artificial pinning landscape,
despite the strong intrinsic pinning in epitaxial YBCO films. Whereas these
vortex matching signatures are exactly at predicted values in field-cooled
experiments, they are displaced in zero-field cooled, magnetic-field ramped
experiments, conserving the equidistance of the matching peaks and minima.
These observations reveal an unconventional critical state in a cuprate
superconductor with an artificial, periodic pinning array. The long-term
stability of such out-of-equilibrium vortex arrangements paves the way for
electronic applications employing fluxons.
| 0 | 1 | 0 | 0 | 0 | 0 |
Spatial Factor Models for High-Dimensional and Large Spatial Data: An Application in Forest Variable Mapping | Gathering information about forest variables is an expensive and arduous
activity. As such, directly collecting the data required to produce
high-resolution maps over large spatial domains is infeasible. Next generation
collection initiatives of remotely sensed Light Detection and Ranging (LiDAR)
data are specifically aimed at producing complete-coverage maps over large
spatial domains. Given that LiDAR data and forest characteristics are often
strongly correlated, it is possible to make use of the former to model,
predict, and map forest variables over regions of interest. This entails
dealing with the high-dimensional ($\sim$$10^2$) spatially dependent LiDAR
outcomes over a large number of locations ($\sim$$10^5$-$10^6$). With this in mind, we
develop the Spatial Factor Nearest Neighbor Gaussian Process (SF-NNGP) model,
and embed it in a two-stage approach that connects the spatial structure found
in LiDAR signals with forest variables. We provide a simulation experiment that
demonstrates inferential and predictive performance of the SF-NNGP, and use the
two-stage modeling strategy to generate complete-coverage maps of forest
variables with associated uncertainty over a large region of boreal forests in
interior Alaska.
| 0 | 0 | 0 | 1 | 0 | 0 |
Detecting Adversarial Image Examples in Deep Networks with Adaptive Noise Reduction | Recently, many studies have demonstrated deep neural network (DNN)
classifiers can be fooled by the adversarial example, which is crafted via
introducing some perturbations into an original sample. Accordingly, some
powerful defense techniques were proposed. However, existing defense techniques
often require modifying the target model or depend on the prior knowledge of
attacks. In this paper, we propose a straightforward method for detecting
adversarial image examples, which can be directly deployed into unmodified
off-the-shelf DNN models. We consider the perturbation to images as a kind of
noise and introduce two classic image processing techniques, scalar
quantization and smoothing spatial filter, to reduce its effect. The image
entropy is employed as a metric to implement an adaptive noise reduction for
different kinds of images. Consequently, the adversarial example can be
effectively detected by comparing the classification results of a given sample
and its denoised version, without referring to any prior knowledge of attacks.
More than 20,000 adversarial examples, crafted with different attack techniques
against several state-of-the-art DNN models, are used to evaluate the proposed
method. The experiments show that our detection method can achieve a
high overall F1 score of 96.39% and certainly raises the bar for defense-aware
attacks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Crowdsourcing for Beyond Polarity Sentiment Analysis A Pure Emotion Lexicon | Sentiment analysis aims to uncover emotions conveyed through information. In
its simplest form, it is performed on a polarity basis, where the goal is to
classify information with positive or negative emotion. Recent research has
explored more nuanced ways to capture emotions that go beyond polarity. For
these methods to work, they require a critical resource: a lexicon that is
appropriate for the task at hand, in terms of the range and diversity of
emotions it captures. In the past, sentiment analysis lexicons have been created by
experts, such as linguists and behavioural scientists, with strict rules.
Lexicon evaluation was also performed by experts or gold standards. In our
paper, we propose a crowdsourcing method for lexicon acquisition, which is
scalable, cost-effective, and does not require experts or gold standards. We
also compare crowd and expert evaluations of the lexicon, to assess the overall
lexicon quality, and the evaluation capabilities of the crowd.
| 1 | 0 | 0 | 0 | 0 | 0 |
Sketched Ridge Regression: Optimization Perspective, Statistical Perspective, and Model Averaging | We address the statistical and optimization impacts of the classical sketch
and Hessian sketch used to approximately solve the Matrix Ridge Regression
(MRR) problem. Prior research has quantified the effects of classical sketch on
the strictly simpler least squares regression (LSR) problem. We establish that
classical sketch has a similar effect upon the optimization properties of MRR
as it does on those of LSR: namely, it recovers nearly optimal solutions. By
contrast, Hessian sketch does not have this guarantee; instead, the
approximation error is governed by a subtle interplay between the "mass" in the
responses and the optimal objective value.
For both types of approximation, the regularization in the sketched MRR
problem results in significantly different statistical properties from those of
the sketched LSR problem. In particular, there is a bias-variance trade-off in
sketched MRR that is not present in sketched LSR. We provide upper and lower
bounds on the bias and variance of sketched MRR; these bounds show that
classical sketch significantly increases the variance, while Hessian sketch
significantly increases the bias. Empirically, sketched MRR solutions can have
risks that are higher by an order-of-magnitude than those of the optimal MRR
solutions.
We establish theoretically and empirically that model averaging greatly
decreases the gap between the risks of the true and sketched solutions to the
MRR problem. Thus, in parallel or distributed settings, sketching combined with
model averaging is a powerful technique that quickly obtains near-optimal
solutions to the MRR problem while greatly mitigating the increased statistical
risk incurred by sketching.
| 1 | 0 | 0 | 1 | 0 | 0 |
Ballistic magnon heat conduction and possible Poiseuille flow in the helimagnetic insulator Cu$_2$OSeO$_3$ | We report on the observation of magnon thermal conductivity $\kappa_m\sim$ 70
W/mK near 5 K in the helimagnetic insulator Cu$_2$OSeO$_3$, exceeding that
measured in any other ferromagnet by almost two orders of magnitude. Ballistic,
boundary-limited transport for both magnons and phonons is established below 1
K, and Poiseuille flow of magnons is proposed to explain a magnon mean-free
path substantially exceeding the specimen width for the least defective
specimens in the range 2 K $<T<$ 10 K. These observations establish
Cu$_2$OSeO$_3$ as a model system for studying long-wavelength magnon dynamics.
| 0 | 1 | 0 | 0 | 0 | 0 |
On Alzer's inequality | Extensions and generalizations of Alzer's inequality, which is of Wirtinger
type, are proved. As applications, a sharp trapezoid-type inequality and a sharp
bound for the geometric mean are deduced.
| 0 | 0 | 1 | 0 | 0 | 0 |
Oncilla robot: a versatile open-source quadruped research robot with compliant pantograph legs | We present Oncilla robot, a novel mobile, quadruped legged locomotion
machine. This large-cat-sized, 5.1 kg robot is one of a recent class of
bioinspired legged robots designed with the capability of model-free
locomotion control. Animal legged locomotion in rough terrain is clearly shaped
by sensory feedback systems. Results with Oncilla robot show that agile and
versatile locomotion is possible without sensory signals to some extent, and
that tracking becomes robust when feedback control is added (Ajaoolleian 2015). By
incorporating mechanical and control blueprints inspired from animals, and by
observing the resulting robot locomotion characteristics, we aim to understand
the contribution of individual components. Legged robots have a wide mechanical
and control design parameter space, and a unique potential as research tools to
investigate principles of biomechanics and legged locomotion control. But the
hardware and controller design can be a steep initial hurdle for academic
research. To facilitate the easy start and development of legged robots,
Oncilla-robot's blueprints are available through open-source. [...]
| 1 | 0 | 0 | 0 | 0 | 0 |
A generative model for sparse, evolving digraphs | Generating graphs that are similar to real ones is an open problem, while the
similarity notion is quite elusive and hard to formalize. In this paper, we
focus on sparse digraphs and propose SDG, an algorithm that aims at generating
graphs similar to real ones. Since real graphs are evolving and this evolution
is important to study in order to understand the underlying dynamical system,
we tackle the problem of generating series of graphs. We propose SEDGE, an
algorithm meant to generate series of graphs similar to a real series. SEDGE is
an extension of SDG. We consider graphs that are representations of software
programs and show experimentally that our approach outperforms other existing
approaches. Experiments show the performance of both algorithms.
| 1 | 0 | 0 | 0 | 0 | 0 |
Penalized Interaction Estimation for Ultrahigh Dimensional Quadratic Regression | Quadratic regression goes beyond the linear model by simultaneously including
main effects and interactions between the covariates. The problem of
interaction estimation in high dimensional quadratic regression has received
extensive attention in the past decade. In this article we introduce a novel
method which allows us to estimate the main effects and interactions
separately. Unlike existing methods for ultrahigh dimensional quadratic
regressions, our proposal does not require the widely used heredity assumption.
In addition, our proposed estimates have explicit formulas and obey the
invariance principle at the population level. We estimate the interactions in
matrix form under a penalized convex loss function. The resulting estimates are
shown to be consistent even when the covariate dimension is an exponential
order of the sample size. We develop an efficient ADMM algorithm to implement
the penalized estimation. This ADMM algorithm fully explores the cheap
computational cost of matrix multiplication and is much more efficient than
existing penalized methods such as the all-pairs LASSO. We demonstrate the
promising performance of our proposal through extensive numerical studies.
| 0 | 0 | 0 | 1 | 0 | 0 |
4D limit of melting crystal model and its integrable structure | This paper addresses the problems of quantum spectral curves and 4D limit for
the melting crystal model of 5D SUSY $U(1)$ Yang-Mills theory on
$\mathbb{R}^4\times S^1$. The partition function $Z(\mathbf{t})$ deformed by an
infinite number of external potentials is a tau function of the KP hierarchy
with respect to the coupling constants $\mathbf{t} = (t_1,t_2,\ldots)$. A
single-variate specialization $Z(x)$ of $Z(\mathbf{t})$ satisfies a
$q$-difference equation representing the quantum spectral curve of the melting
crystal model. In the limit as the radius $R$ of $S^1$ in $\mathbb{R}^4\times
S^1$ tends to $0$, it turns into a difference equation for a 4D counterpart
$Z_{\mathrm{4D}}(X)$ of $Z(x)$. This difference equation reproduces the quantum
spectral curve of Gromov-Witten theory of $\mathbb{CP}^1$. $Z_{\mathrm{4D}}(X)$
is obtained from $Z(x)$ by letting $R \to 0$ under an $R$-dependent
transformation $x = x(X,R)$ of $x$ to $X$. A similar prescription of 4D limit
can be formulated for $Z(\mathbf{t})$ with an $R$-dependent transformation
$\mathbf{t} = \mathbf{t}(\mathbf{T},R)$ of $\mathbf{t}$ to $\mathbf{T} =
(T_1,T_2,\ldots)$. This yields a 4D counterpart $Z_{\mathrm{4D}}(\mathbf{T})$
of $Z(\mathbf{t})$. $Z_{\mathrm{4D}}(\mathbf{T})$ agrees with a generating
function of all-genus Gromov-Witten invariants of $\mathbb{CP}^1$. Fay-type
bilinear equations for $Z_{\mathrm{4D}}(\mathbf{T})$ can be derived from
similar equations satisfied by $Z(\mathbf{t})$. The bilinear equations imply
that $Z_{\mathrm{4D}}(\mathbf{T})$, too, is a tau function of the KP hierarchy.
These results are further extended to deformations $Z(\mathbf{t},s)$ and
$Z_{\mathrm{4D}}(\mathbf{T},s)$ by a discrete variable $s \in \mathbb{Z}$,
which are shown to be tau functions of the 1D Toda hierarchy.
| 0 | 1 | 1 | 0 | 0 | 0 |
On constraints and dividing in ternary homogeneous structures | Let M be ternary, homogeneous and simple. We prove that if M is finitely
constrained, then it is supersimple with finite SU-rank and dependence is
$k$-trivial for some $k < \omega$ and for finite sets of real elements. Now
suppose that, in addition, M is supersimple with SU-rank 1. If M is finitely
constrained then algebraic closure in M is trivial. We also find connections
between the nature of the constraints of M, the nature of the amalgamations
allowed by the age of M, and the nature of definable equivalence relations. A
key method of proof is to "extract" constraints (of M) from instances of
dividing and from definable equivalence relations. Finally, we give new
examples, including an uncountable family, of ternary homogeneous supersimple
structures of SU-rank 1.
| 0 | 0 | 1 | 0 | 0 | 0 |
Changes in lipid membranes may trigger amyloid toxicity in Alzheimer's disease | Amyloid beta peptides (A$\beta$), implicated in Alzheimer's disease (AD),
interact with the cellular membrane and induce amyloid toxicity. The
composition of cellular membranes changes in aging and AD. We designed
multi-component lipid models to mimic healthy and diseased states of the neuronal
membrane. Using atomic force microscopy (AFM), Kelvin probe force microscopy
(KPFM) and black lipid membrane (BLM) techniques, we demonstrated that these
model membranes differ in their nanoscale structure and physical properties,
and interact differently with A$\beta$. Based on our data, we propose a new
hypothesis that changes in the lipid membrane due to aging and AD may trigger
amyloid toxicity through electrostatic mechanisms, similar to the accepted
mechanism of antimicrobial peptide action. Understanding the role of the
membrane changes as a key factor activating amyloid toxicity may aid in the
development of a new avenue for the prevention and treatment of AD.
| 0 | 1 | 0 | 0 | 0 | 0 |
Transcendency Degree One Function Fields Over a Finite Field with Many Automorphisms | Let $\mathbb{K}$ be the algebraic closure of a finite field $\mathbb{F}_q$ of
odd characteristic $p$. For a positive integer $m$ prime to $p$, let
$F=\mathbb{K}(x,y)$ be the transcendency degree $1$ function field defined by
$y^q+y=x^m+x^{-m}$. Let $t=x^{m(q-1)}$ and $H=\mathbb{K}(t)$. The extension
$F|H$ is a non-Galois extension. Let $K$ be the Galois closure of $F$ with
respect to $H$. By a result of Stichtenoth, $K$ has genus $g(K)=(qm-1)(q-1)$,
$p$-rank (Hasse-Witt invariant) $\gamma(K)=(q-1)^2$ and a
$\mathbb{K}$-automorphism group of order at least $2q^2m(q-1)$. In this paper
we prove that this subgroup is the full $\mathbb{K}$-automorphism group of $K$;
more precisely $Aut_{\mathbb {K}}(K)=Q\rtimes D$ where $Q$ is an elementary
abelian $p$-group of order $q^2$ and $D$ has an index $2$ cyclic subgroup of
order $m(q-1)$. In particular, $\sqrt{m}|Aut_{\mathbb{K}}(K)|> g(K)^{3/2}$, and
if $K$ is ordinary (i.e. $g(K)=\gamma(K)$) then
$|Aut_{\mathbb{K}}(K)|>g^{3/2}$. On the other hand, if $G$ is a solvable
subgroup of the $\mathbb{K}$-automorphism group of an ordinary, transcendency
degree $1$ function field $L$ of genus $g(L)\geq 2$ defined over $\mathbb{K}$,
then by a result due to Korchmáros and Montanucci, $|G|\le
34 (g(L)+1)^{3/2}<68\sqrt{2}g(L)^{3/2}$. This shows that $K$ attains this bound up
to the constant $68\sqrt{2}$.
Since $Aut_{\mathbb{K}}(K)$ has several subgroups, the fixed subfield $F^N$
of such a subgroup $N$ may happen to have many automorphisms provided that the
normalizer of $N$ in $Aut_{\mathbb{K}}(K)$ is large enough. This possibility is
worked out for subgroups of $Q$.
| 0 | 0 | 1 | 0 | 0 | 0 |
The phase transitions between $Z_n\times Z_n$ bosonic topological phases in 1+1 D, and a constraint on the central charge for the critical points between bosonic symmetry protected topological phases | The study of continuous phase transitions triggered by spontaneous symmetry
breaking has brought revolutionary ideas to physics. Recently, through the
discovery of symmetry protected topological phases, it is realized that
continuous quantum phase transition can also occur between states with the same
symmetry but different topology. Here we study a specific class of such phase
transitions in 1+1 dimensions -- the phase transition between bosonic
topological phases protected by $Z_n\times Z_n$. We find in all cases the
critical point possesses two gap opening relevant operators: one leads to a
Landau-forbidden symmetry breaking phase transition and the other to the
topological phase transition. We also obtained a constraint on the central
charge for general phase transitions between symmetry protected bosonic
topological phases in 1+1D.
| 0 | 1 | 0 | 0 | 0 | 0 |
Universal Reinforcement Learning Algorithms: Survey and Experiments | Many state-of-the-art reinforcement learning (RL) algorithms assume
that the environment is an ergodic Markov Decision Process (MDP). In contrast,
the field of universal reinforcement learning (URL) is concerned with
algorithms that make as few assumptions as possible about the environment. The
universal Bayesian agent AIXI and a family of related URL algorithms have been
developed in this setting. While numerous theoretical optimality results have
been proven for these agents, there has been no empirical investigation of
their behavior to date. We present a short and accessible survey of these URL
algorithms under a unified notation and framework, along with results of some
experiments that qualitatively illustrate some properties of the resulting
policies, and their relative performance on partially-observable gridworld
environments. We also present an open-source reference implementation of the
algorithms which we hope will facilitate further understanding of, and
experimentation with, these ideas.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cell Identity Codes: Understanding Cell Identity from Gene Expression Profiles using Deep Neural Networks | Understanding cell identity is an important task in many biomedical areas.
Expression patterns of specific marker genes have been used to characterize
some limited cell types, but exclusive markers are not available for many cell
types. A second approach is to use machine learning to discriminate cell types
based on the whole gene expression profiles (GEPs). The accuracies of simple
classification algorithms such as linear discriminators or support vector
machines are limited due to the complexity of biological systems. We used deep
neural networks to analyze 1040 GEPs from 16 different human tissues and cell
types. After comparing different architectures, we identified a specific
structure of deep autoencoders that can encode a GEP into a vector of 30
numeric values, which we call the cell identity code (CIC). The original GEP
can be reproduced from the CIC with an accuracy comparable to technical
replicates of the same experiment. Although we use an unsupervised approach to
train the autoencoder, we show that different values of the CIC are connected
different biological aspects of the cell, such as different pathways or
biological processes. The network can use the CIC to reproduce the GEP of
cell types it has never seen during training. It can also tolerate some noise in
the measurement of the GEP. Furthermore, we introduce the classifier autoencoder,
an architecture that can accurately identify cell type based on the GEP or the
CIC.
| 0 | 0 | 0 | 1 | 1 | 0 |
Full replica symmetry breaking in p-spin-glass-like systems | It is shown that continuously changing the effective number of interacting
particles in a p-spin-glass-like model makes it possible to describe the
transition from the full replica symmetry breaking glass solution to a stable
first replica symmetry breaking glass solution in the case of diagonal
operators without reflection symmetry used instead of Ising spins. As an example, axial quadrupole moments
in place of Ising spins are considered and the boundary value $p_{c_{1}}\cong
2.5$ is found.
| 0 | 1 | 0 | 0 | 0 | 0 |
Formation and condensation of excitonic bound states in the generalized Falicov-Kimball model | The density-matrix-renormalization-group (DMRG) method and the Hartree-Fock
(HF) approximation with the charge-density-wave (CDW) instability are used to
study a formation and condensation of excitonic bound states in the generalized
Falicov-Kimball model. In particular, we examine effects of various factors,
like the $f$-electron hopping, the local and nonlocal hybridization, as well as
the increasing dimension of the system on the excitonic momentum distribution
$N(q)$ and especially on the number of zero momentum excitons $N_0=N(q=0)$ in
the condensate. It is found that the negative values of the $f$-electron
hopping integrals $t_f$ support the formation of zero-momentum condensate,
while the positive values of $t_f$ have the fully opposite effect. The opposite
effects on the formation of condensate exhibit also the local and nonlocal
hybridization. The first one strongly supports the formation of condensate,
while the second one destroys it completely. Moreover, it was shown that the
zero-momentum condensate remains robust with increasing dimension of the
system.
| 0 | 1 | 0 | 0 | 0 | 0 |
Tunable Emergent Heterostructures in a Prototypical Correlated Metal | At the interface between two distinct materials, desirable properties, such as
superconductivity, can be greatly enhanced, or entirely new functionalities may
emerge. As in artificially engineered heterostructures, clean
functional interfaces can alternatively exist in electronically textured bulk
materials. Electronic textures emerge spontaneously due to competing
atomic-scale interactions, the control of which would enable a top-down
approach for designing tunable intrinsic heterostructures. This is particularly
attractive for correlated electron materials, where spontaneous
heterostructures strongly affect the interplay between charge and spin degrees
of freedom. Here we report high-resolution neutron spectroscopy on the
prototypical strongly-correlated metal CeRhIn5, revealing competition between
magnetic frustration and easy-axis anisotropy -- a well-established mechanism
for generating spontaneous superstructures. Because the observed easy-axis
anisotropy is field-induced and anomalously large, it can be controlled
efficiently with small magnetic fields. The resulting field-controlled magnetic
superstructure is closely tied to the formation of superconducting and
electronic nematic textures in CeRhIn5, suggesting that in-situ tunable
heterostructures can be realized in correlated electron materials.
| 0 | 1 | 0 | 0 | 0 | 0 |
Weak Label Supervision for Monaural Source Separation Using Non-negative Denoising Variational Autoencoders | Deep learning models are very effective in source separation when there are
large amounts of labeled data available. However, it is not always possible to
have carefully labeled datasets. In this paper, we propose a weak supervision
method that only uses class information rather than source signals for learning
to separate short utterance mixtures. We associate a variational autoencoder
(VAE) with each class within a non-negative model. We demonstrate that deep
convolutional VAEs provide a prior model to identify complex signals in a sound
mixture without having access to any source signal. We show that the separation
results are on par with source signal supervision.
| 1 | 0 | 0 | 0 | 0 | 0 |