ID (int64, 1-21k) | TITLE (string, 7-239 chars) | ABSTRACT (string, 7-2.76k chars) | Computer Science (0/1) | Physics (0/1) | Mathematics (0/1) | Statistics (0/1) | Quantitative Biology (0/1) | Quantitative Finance (0/1) |
---|---|---|---|---|---|---|---|---|
17,001 | Miraculous cancellations for quantum $SL_2$ | In earlier work, Helen Wong and the author discovered certain "miraculous
cancellations" for the quantum trace map connecting the Kauffman bracket skein
algebra of a surface to its quantum Teichmueller space, occurring when the
quantum parameter $q$ is a root of unity. The current paper is devoted to
giving a more representation-theoretic interpretation of this phenomenon, in
terms of the quantum group $U_q(sl_2)$ and its dual Hopf algebra $SL_2^q$.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,002 | Energy and time measurements with high-granular silicon devices | This note is a short summary of the workshop on "Energy and time measurements
with high-granular silicon devices" that took place on 13-14 June 2016 at
DESY, Hamburg, within the framework of the first AIDA-2020 Annual Meeting.
This note puts forward the trends that could be identified and emphasises in
particular the open issues that were addressed by the speakers.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,003 | Action Tubelet Detector for Spatio-Temporal Action Localization | Current state-of-the-art approaches for spatio-temporal action localization
rely on detections at the frame level that are then linked or tracked across
time. In this paper, we leverage the temporal continuity of videos instead of
operating at the frame level. We propose the ACtion Tubelet detector
(ACT-detector) that takes as input a sequence of frames and outputs tubelets,
i.e., sequences of bounding boxes with associated scores. Just as
state-of-the-art object detectors rely on anchor boxes, our ACT-detector is
based on anchor cuboids. We build upon the SSD framework. Convolutional
features are extracted for each frame, while scores and regressions are based
on the temporal stacking of these features, thus exploiting information from a
sequence. Our experimental results show that leveraging sequences of frames
significantly improves detection performance over using individual frames. The
gain of our tubelet detector can be explained by both more accurate scores and
more precise localization. Our ACT-detector outperforms the state-of-the-art
methods for frame-mAP and video-mAP on the J-HMDB and UCF-101 datasets, in
particular at high overlap thresholds.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,004 | Significance of Side Information in the Graph Matching Problem | Percolation based graph matching algorithms rely on the availability of seed
vertex pairs as side information to efficiently match users across networks.
Although such algorithms work well in practice, there are other types of side
information available which are potentially useful to an attacker. In this
paper, we consider the problem of matching two correlated graphs when an
attacker has access to side information, either in the form of community labels
or an imperfect initial matching. In the former case, we propose a naive graph
matching algorithm by introducing the community degree vectors which harness
the information from community labels in an efficient manner. Furthermore, we
analyze a variant of the basic percolation algorithm proposed in literature for
graphs with community structure. In the latter case, we propose a novel
percolation algorithm with two thresholds which uses an imperfect matching as
input to match correlated graphs.
We evaluate the proposed algorithms on synthetic as well as real world
datasets using various experiments. The experimental results demonstrate the
importance of communities as side information especially when the number of
seeds is small and the networks are weakly correlated.
| 1 | 1 | 0 | 0 | 0 | 0 |
17,005 | Extended Gray-Wyner System with Complementary Causal Side Information | We establish the rate region of an extended Gray-Wyner system for 2-DMS
$(X,Y)$ with two additional decoders having complementary causal side
information. This extension is interesting because in addition to the
operationally significant extreme points of the Gray-Wyner rate region, which
include Wyner's common information, Gács-Körner common information and
information bottleneck, the rate region for the extended system also includes
the Körner graph entropy, the privacy funnel and excess functional
information, as well as three new quantities of potential interest, as extreme
points. To simplify the investigation of the 5-dimensional rate region of the
extended Gray-Wyner system, we establish an equivalence of this region to a
3-dimensional mutual information region that consists of the set of all triples
of the form $(I(X;U),\,I(Y;U),\,I(X,Y;U))$ for some $p_{U|X,Y}$. We further
show that projections of this mutual information region yield the rate regions
for many settings involving a 2-DMS, including lossless source coding with
causal side information, distributed channel synthesis, and lossless source
coding with a helper.
| 1 | 0 | 1 | 0 | 0 | 0 |
17,006 | Learning Powers of Poisson Binomial Distributions | We introduce the problem of simultaneously learning all powers of a Poisson
Binomial Distribution (PBD). A PBD of order $n$ is the distribution of a sum of
$n$ mutually independent Bernoulli random variables $X_i$, where
$\mathbb{E}[X_i] = p_i$. The $k$'th power of this distribution, for $k$ in a
range $[m]$, is the distribution of $P_k = \sum_{i=1}^n X_i^{(k)}$, where each
Bernoulli random variable $X_i^{(k)}$ has $\mathbb{E}[X_i^{(k)}] = (p_i)^k$.
The learning algorithm can query any power $P_k$ several times and succeeds in
learning all powers in the range, if with probability at least $1- \delta$:
given any $k \in [m]$, it returns a probability distribution $Q_k$ with total
variation distance from $P_k$ at most $\epsilon$. We provide almost matching
lower and upper bounds on query complexity for this problem. We first show a
lower bound on the query complexity on PBD powers instances with many distinct
parameters $p_i$ which are separated, and we almost match this lower bound by
examining the query complexity of simultaneously learning all the powers of a
special class of PBD's resembling the PBD's of our lower bound. We study the
fundamental setting of a Binomial distribution, and provide an optimal
algorithm which uses $O(1/\epsilon^2)$ samples. Diakonikolas, Kane and Stewart
[COLT'16] showed a lower bound of $\Omega(2^{1/\epsilon})$ samples to learn the
$p_i$'s within error $\epsilon$. The question of whether sampling from powers of
PBDs can reduce this sampling complexity has a negative answer, since we show
that an exponential number of samples is inevitable. Having sampling access to
the powers of a PBD we then give a nearly optimal algorithm that learns its
$p_i$'s. To prove our last two lower bounds we extend the classical minimax
risk definition from statistics to estimating functions of sequences of
distributions.
| 1 | 0 | 1 | 1 | 0 | 0 |
17,007 | Geometry of simplices in Minkowski spaces | There are many problems and configurations in Euclidean geometry that were
never extended to the framework of (normed or) finite dimensional real Banach
spaces, although their original versions are inspiring for this type of
generalization, and the analogous definitions for normed spaces represent a
promising topic. An example is the geometry of simplices in non-Euclidean
normed spaces. We present new generalizations of well known properties of
Euclidean simplices. These results refer to analogues of circumcenters, Euler
lines, and Feuerbach spheres of simplices in normed spaces. Using duality, we
also get natural theorems on angular bisectors as well as in- and exspheres of
(dual) simplices.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,008 | DLR : Toward a deep learned rhythmic representation for music content analysis | In the use of deep neural networks, it is crucial to provide appropriate
input representations for the network to learn from. In this paper, we propose
an approach to learning a representation that focuses on rhythm, named DLR
(Deep Learning Rhythmic representation). The proposed approach aims to learn
the DLR from the raw audio signal and to use it for other music informatics
tasks. A 1-dimensional convolutional network is utilised to learn the DLR. In
the experiment, we present the results from the source task and the target
task, as well as visualisations of DLRs. The results reveal that the DLR
provides compact rhythmic information which can be used on a multi-tagging
task.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,009 | Phylogeny-based tumor subclone identification using a Bayesian feature allocation model | Tumor cells acquire different genetic alterations during the course of
evolution in cancer patients. As a result of competition and selection, only a
few subgroups of cells with distinct genotypes survive. These subgroups of
cells are often referred to as subclones. In recent years, many statistical and
computational methods have been developed to identify tumor subclones, leading
to biologically significant discoveries and shedding light on tumor
progression, metastasis, drug resistance and other processes. However, most
existing methods are either not able to infer the phylogenetic structure among
subclones, or not able to incorporate copy number variations (CNV). In this
article, we propose SIFA (tumor Subclone Identification by Feature Allocation),
a Bayesian model which takes into account both CNV and tumor phylogeny
structure to infer tumor subclones. We compare the performance of SIFA with two
other commonly used methods using simulation studies with varying sequencing
depth, evolutionary tree size, and tree complexity. SIFA consistently yields
better results in terms of Rand Index and cellularity estimation accuracy. The
usefulness of SIFA is also demonstrated through its application to whole genome
sequencing (WGS) samples from four patients in a breast cancer study.
| 0 | 0 | 0 | 1 | 1 | 0 |
17,010 | Confidence-based Graph Convolutional Networks for Semi-Supervised Learning | Predicting properties of nodes in a graph is an important problem with
applications in a variety of domains. Graph-based Semi-Supervised Learning
(SSL) methods aim to address this problem by labeling a small subset of the
nodes as seeds and then utilizing the graph structure to predict label scores
for the rest of the nodes in the graph. Recently, Graph Convolutional Networks
(GCNs) have achieved impressive performance on the graph-based SSL task. In
addition to label scores, it is also desirable to have confidence scores
associated with them. Unfortunately, confidence estimation in the context of
GCN has not been previously explored. We fill this important gap in this paper
and propose ConfGCN, which estimates label scores along with their confidences
jointly in a GCN-based setting. ConfGCN uses these estimated confidences to
determine the influence of one node on another during neighborhood aggregation,
thereby acquiring anisotropic capabilities. Through extensive analysis and
experiments on standard benchmarks, we find that ConfGCN is able to outperform
state-of-the-art baselines. We have made ConfGCN's source code available to
encourage reproducible research.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,011 | Long-range fluctuations and multifractality in connectivity density time series of a wind speed monitoring network | This paper studies the daily connectivity time series of a wind
speed-monitoring network using multifractal detrended fluctuation analysis. It
investigates the long-range fluctuation and multifractality in the residuals of
the connectivity time series. Our findings reveal that the daily connectivity
of the correlation-based network is persistent for any correlation threshold.
Further, the multifractality degree is higher for larger absolute values of the
correlation threshold.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,012 | The Dynamics of Norm Change in the Cultural Evolution of Language | What happens when a new social convention replaces an old one? While the
possible forces favoring norm change - such as institutions or committed
activists - have long been identified, little is known about how a
population adopts a new convention, due to the difficulty of finding
representative data. Here we address this issue by looking at changes that
occurred to 2,541 orthographic and lexical norms in English and Spanish through
the analysis of a large corpus of books published between the years 1800 and
2008.
We detect three markedly distinct patterns in the data, depending on whether
the behavioral change results from the action of a formal institution, an
informal authority or a spontaneous process of unregulated evolution. We
propose a simple evolutionary model able to capture all the observed behaviors
and we show that it reproduces quantitatively the empirical data. This work
identifies general mechanisms of norm change and we anticipate that it will be
of interest to researchers investigating the cultural evolution of language
and, more broadly, human collective behavior.
| 0 | 0 | 0 | 0 | 1 | 0 |
17,013 | Bayesian Joint Spike-and-Slab Graphical Lasso | In this article, we propose a new class of priors for Bayesian inference with
multiple Gaussian graphical models. We introduce fully Bayesian treatments of
two popular procedures, the group graphical lasso and the fused graphical
lasso, and extend them to a continuous spike-and-slab framework to allow
self-adaptive shrinkage and model selection simultaneously. We develop an EM
algorithm that performs fast and dynamic explorations of posterior modes. Our
approach selects sparse models efficiently with substantially smaller bias than
would be induced by alternative regularization procedures. The performance of
the proposed methods is demonstrated through simulation and two real data
examples.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,014 | Variations on the theme of the uniform boundary condition | The uniform boundary condition in a normed chain complex asks for a uniform
linear bound on fillings of null-homologous cycles. For the $\ell^1$-norm on
the singular chain complex, Matsumoto and Morita established a characterisation
of the uniform boundary condition in terms of bounded cohomology. In
particular, spaces with amenable fundamental group satisfy the uniform boundary
condition in every degree. We will give an alternative proof of statements of
this type, using geometric Følner arguments on the chain level instead of
passing to the dual cochain complex. These geometric methods have the advantage
that they also lead to integral refinements. In particular, we obtain
applications in the context of integral foliated simplicial volume.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,015 | revisit: a Workflow Tool for Data Science | In recent years there has been widespread concern in the scientific community
over a reproducibility crisis. Among the major causes that have been identified
is a statistical one: in much scientific research, the statistical analysis
(including data preparation) suffers from a lack of transparency and from
methodological problems, both major obstructions to reproducibility. The
revisit package aims
toward remedying this problem, by generating a "software paper trail" of the
statistical operations applied to a dataset. This record can be "replayed" for
verification purposes, as well as be modified to enable alternative analyses.
The software also issues warnings of certain kinds of potential errors in
statistical methodology, again related to the reproducibility issue.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,016 | Programmatically Interpretable Reinforcement Learning | We present a reinforcement learning framework, called Programmatically
Interpretable Reinforcement Learning (PIRL), that is designed to generate
interpretable and verifiable agent policies. Unlike the popular Deep
Reinforcement Learning (DRL) paradigm, which represents policies by neural
networks, PIRL represents policies using a high-level, domain-specific
programming language. Such programmatic policies have the benefits of being
more easily interpreted than neural networks, and being amenable to
verification by symbolic methods. We propose a new method, called Neurally
Directed Program Search (NDPS), for solving the challenging nonsmooth
optimization problem of finding a programmatic policy with maximal reward. NDPS
works by first learning a neural policy network using DRL, and then performing
a local search over programmatic policies that seeks to minimize a distance
from this neural "oracle". We evaluate NDPS on the task of learning to drive a
simulated car in the TORCS car-racing environment. We demonstrate that NDPS is
able to discover human-readable policies that pass some significant performance
bars. We also show that PIRL policies can have smoother trajectories, and can
be more easily transferred to environments not encountered during training,
than corresponding policies discovered by DRL.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,017 | Kinetic Simulation of Collisional Magnetized Plasmas with Semi-Implicit Time Integration | Plasmas with varying collisionalities occur in many applications, such as
tokamak edge regions, where the flows are characterized by significant
variations in density and temperature. While a kinetic model is necessary for
weakly-collisional high-temperature plasmas, high collisionality in colder
regions renders the equations numerically stiff due to disparate time scales. In
this paper, we propose an implicit-explicit algorithm for such cases, where the
collisional term is integrated implicitly in time, while the advective term is
integrated explicitly in time, thus allowing time step sizes that are
comparable to the advective time scales. This partitioning results in a more
efficient algorithm than those using explicit time integrators, where the time
step sizes are constrained by the stiff collisional time scales. We implement
semi-implicit additive Runge-Kutta methods in COGENT, a finite-volume
gyrokinetic code for mapped, multiblock grids and test the accuracy,
convergence, and computational cost of these semi-implicit methods for test
cases with highly-collisional plasmas.
| 1 | 1 | 0 | 0 | 0 | 0 |
17,018 | VC-dimension of short Presburger formulas | We study VC-dimension of short formulas in Presburger Arithmetic, defined to
have a bounded number of variables, quantifiers and atoms. We give both lower
and upper bounds, which are tight up to a polynomial factor in the bit length
of the formula.
| 1 | 0 | 1 | 0 | 0 | 0 |
17,019 | Real-time Traffic Accident Risk Prediction based on Frequent Pattern Tree | Traffic accident data are usually noisy, contain missing values, and
heterogeneous. How to select the most important variables to improve real-time
traffic accident risk prediction has become a concern of many recent studies.
This paper proposes a novel variable selection method based on the Frequent
Pattern tree (FP tree) algorithm. First, all the frequent patterns in the
traffic accident dataset are discovered. Then for each frequent pattern, a new
criterion that we propose, the Relative Object Purity Ratio (ROPR), is
calculated. The ROPR is added to the importance score of the variables that
differentiate one frequent pattern from the others. To test the proposed
method, a dataset was compiled from the traffic accidents records detected by
only one detector on interstate highway I-64 in Virginia in 2005. This dataset
was then linked to other variables such as real-time traffic information and
weather conditions. Both the proposed method based on the FP tree algorithm
and the widely utilized random forest method were then used to identify
the important variables for the Virginia dataset. The results indicate that
there are some differences between the variables deemed important by the FP
tree and those selected by the random forest method. Following this, two
baseline models (i.e. a nearest neighbor (k-NN) method and a Bayesian network)
were developed to predict accident risk based on the variables identified by
both the FP tree method and the random forest method. The results show that the
models based on the variable selection using the FP tree performed better than
those based on the random forest method for several versions of the k-NN and
Bayesian network models. The best results were derived from a Bayesian network
model using variables from the FP tree. That model could predict 61.11% of
accidents accurately while having a false alarm rate of 38.16%.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,020 | Do Developers Update Their Library Dependencies? An Empirical Study on the Impact of Security Advisories on Library Migration | Third-party library reuse has become common practice in contemporary software
development, as it includes several benefits for developers. Library
dependencies are constantly evolving, with newly added features and patches
that fix bugs in older versions. To take full advantage of third-party reuse,
developers should always keep up to date with the latest versions of their
library dependencies. In this paper, we investigate the extent to which
developers update their library dependencies. Specifically, we conducted an
empirical study on library migration that covers over 4,600 GitHub software
projects and 2,700 library dependencies. Results show that although many of
these systems rely heavily on dependencies, 81.5% of the studied systems still
keep their outdated dependencies. In the case of updating a vulnerable
dependency, the study reveals that affected developers are not likely to
respond to a security advisory. Surveying these developers, we find that 69% of
the interviewees claim that they were unaware of their vulnerable dependencies.
Furthermore, developers are not likely to prioritize library updates, citing it
as extra effort and added responsibility. This study concludes that even though
third-party reuse is commonplace, the practice of updating a dependency is not
as common for many developers.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,021 | Is Smaller Better: A Proposal To Consider Bacteria For Biologically Inspired Modeling | Bacteria are easily characterizable model organisms with an impressively
complicated set of capabilities. Among their capabilities is quorum sensing, a
detailed cell-cell signaling system that may have a common origin with
eukaryotic cell-cell signaling. Not only are the two phenomena similar, but
quorum sensing, as is the case with any bacterial phenomenon when compared to
eukaryotes, is also easier to study in depth than eukaryotic cell-cell
signaling. This ease of study is a contrast to the only partially understood
cellular dynamics of neurons. Here we review the literature on the strikingly
neuron-like qualities of bacterial colonies and biofilms, including ion-based
and hormonal signaling, and action potential-like behavior. This allows them to
feasibly act as an analog for neurons that could produce more detailed and more
accurate biologically-based computational models. Using bacteria as the basis
for biologically feasible computational models may allow models to better
harness the tremendous ability of biological organisms to make decisions and
process information. Additionally, principles gleaned from bacterial function
have the potential to influence computational efforts divorced from biology,
just as neuronal function has in the abstract influenced countless machine
learning efforts.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,022 | A Bayesian Data Augmentation Approach for Learning Deep Models | Data augmentation is an essential part of the training process applied to
deep learning models. The motivation is that a robust training process for deep
learning models depends on large annotated datasets, which are expensive to
acquire, store and process. Therefore, a reasonable alternative is to
automatically generate new annotated training samples using a process known as
data augmentation. The dominant data augmentation approach in the
field assumes that new training samples can be obtained via random geometric or
appearance transformations applied to annotated training samples, but this is a
strong assumption because it is unclear if this is a reliable generative model
for producing new training samples. In this paper, we provide a novel Bayesian
formulation to data augmentation, where new annotated training points are
treated as missing variables and generated based on the distribution learned
from the training set. For learning, we introduce a theoretically sound
algorithm --- generalised Monte Carlo expectation maximisation, and demonstrate
one possible implementation via an extension of the Generative Adversarial
Network (GAN). Classification results on MNIST, CIFAR-10 and CIFAR-100 show the
better performance of our proposed method compared to the current dominant data
augmentation approach mentioned above --- the results also show that our
approach produces better classification results than similar GAN models.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,023 | Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach | Knowledge bases are employed in a variety of applications from natural
language processing to semantic web search; alas, in practice their usefulness
is hurt by their incompleteness. Embedding models attain state-of-the-art
accuracy in knowledge base completion, but their predictions are notoriously
hard to interpret. In this paper, we adapt "pedagogical approaches" (from the
literature on neural networks) so as to interpret embedding models by
extracting weighted Horn rules from them. We show how pedagogical approaches
have to be adapted to handle the large-scale relational aspects of knowledge
bases, and we show experimentally their strengths and weaknesses.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,024 | Parameterized complexity of machine scheduling: 15 open problems | Machine scheduling problems are a long-time key domain of algorithms and
complexity research. A novel approach to machine scheduling problems are
fixed-parameter algorithms. To stimulate this thriving research direction, we
propose 15 open questions in this area whose resolution we expect to lead to
the discovery of new approaches and techniques both in scheduling and
parameterized complexity theory.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,025 | Potential Conditional Mutual Information: Estimators, Properties and Applications | The conditional mutual information I(X;Y|Z) measures the average information
that X and Y contain about each other given Z. This is an important primitive
in many learning problems including conditional independence testing, graphical
model inference, causal strength estimation and time-series problems. In
several applications, it is desirable to have a functional purely of the
conditional distribution p_{Y|X,Z} rather than of the joint distribution
p_{X,Y,Z}. We define the potential conditional mutual information as the
conditional mutual information calculated with a modified joint distribution
p_{Y|X,Z} q_{X,Z}, where q_{X,Z} is a potential distribution fixed a priori. We
develop K nearest neighbor based estimators for this functional, employing
importance sampling and a coupling trick, and prove the finite-k consistency
of such an estimator. We demonstrate that the estimator has excellent practical
performance and show an application in dynamical system inference.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,026 | A new approach to divergences in quantum electrodynamics, concrete examples | An interesting attempt for solving infrared divergence problems via the
theory of generalized wave operators was made by P. Kulish and L. Faddeev. Our
method of using the ideas from the theory of generalized wave operators is
essentially different. We assume that the unperturbed operator $A_0$ is known
and that the scattering operator $S$ and the unperturbed operator $A_0$ are
permutable. (In the Kulish-Faddeev theory this basic property is not
fulfilled.) The permutability of $S$ and $A_0$ gives us an important
information about the structure of the scattering operator. We show that the
divergences appeared because the deviations of the initial and final waves from
the free waves were not taken into account. The approach is demonstrated on
important examples.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,027 | Indefinite boundary value problems on graphs | We consider the spectral structure of indefinite second order boundary-value
problems on graphs. A variational formulation for such boundary-value problems
on graphs is given and we obtain both full and half-range completeness results.
This leads to a max-min principle and as a consequence we can formulate an
analogue of Dirichlet-Neumann bracketing and this in turn gives rise to
asymptotic approximations for the eigenvalues.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,028 | Integral curvatures of Finsler manifolds and applications | In this paper, we study the integral curvatures of Finsler manifolds. Some
Bishop-Gromov relative volume comparisons and several Myers type theorems are
obtained. We also establish a Gromov type precompactness theorem and a
Yamaguchi type finiteness theorem. Furthermore, the isoperimetric and Sobolev
constants of a closed Finsler manifold are estimated by integral curvature
bounds.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,029 | L-functions and sharp resonances of infinite index congruence subgroups of $SL_2(\mathbb{Z})$ | For convex co-compact subgroups of SL2(Z) we consider the "congruence
subgroups" for p prime. We prove a factorization formula for the Selberg zeta
function in terms of L-functions related to irreducible representations of the
Galois group SL2(Fp) of the covering, together with a priori bounds and
analytic continuation. We use this factorization property combined with an
averaging technique over representations to prove a new existence result of
non-trivial resonances in an effective low frequency strip.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,030 | An Enhanced Lumped Element Electrical Model of a Double Barrier Memristive Device | The massive parallel approach of neuromorphic circuits leads to effective
methods for solving complex problems. It has turned out that resistive
switching devices with a continuous resistance range are potential candidates
for such applications. These devices are memristive systems - nonlinear
resistors with memory. They are fabricated in nanotechnology and hence
parameter spread during fabrication may hamper reproducible analyses. This
issue makes simulation models of memristive devices worthwhile.
Kinetic Monte-Carlo simulations based on a distributed model of the device
can be used to understand the underlying physical and chemical phenomena.
However, such simulations are very time-consuming and neither convenient for
investigations of whole circuits nor for real-time applications, e.g. emulation
purposes. Instead, a concentrated model of the device can be used for both fast
simulations and real-time applications. We introduce an enhanced
electrical model of a valence change mechanism (VCM) based double barrier
memristive device (DBMD) with a continuous resistance range. This device
consists of an ultra-thin memristive layer sandwiched between a tunnel barrier
and a Schottky-contact. The introduced model leads to very fast simulations by
using usual circuit simulation tools while maintaining physically meaningful
parameters.
Kinetic Monte-Carlo simulations based on a distributed model and experimental
data have been utilized as references to verify the concentrated model.
| 1 | 1 | 0 | 0 | 0 | 0 |
17,031 | Non-perturbative positive Lyapunov exponent of Schrödinger equations and its applications to skew-shift | We first study the discrete Schrödinger equations with analytic potentials
given by a class of transformations. It is shown that if the coupling number is
large, then its logarithm approximately equals the Lyapunov exponent. When
the transformation is the skew-shift, we prove that the Lyapunov exponent
is weak Hölder continuous, and the spectrum satisfies Anderson localization
and contains large intervals. Moreover, all of these conclusions are
non-perturbative.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,032 | Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees | Greedy optimization methods such as Matching Pursuit (MP) and Frank-Wolfe
(FW) algorithms regained popularity in recent years due to their simplicity,
effectiveness and theoretical guarantees. MP and FW address optimization over
the linear span and the convex hull of a set of atoms, respectively. In this
paper, we consider the intermediate case of optimization over the convex cone,
parametrized as the conic hull of a generic atom set, leading to the first
principled definitions of non-negative MP algorithms for which we give explicit
convergence rates and demonstrate excellent empirical performance. In
particular, we derive sublinear ($\mathcal{O}(1/t)$) convergence on general
smooth and convex objectives, and linear convergence ($\mathcal{O}(e^{-t})$) on
strongly convex objectives, in both cases for general sets of atoms.
Furthermore, we establish a clear correspondence of our algorithms to known
algorithms from the MP and FW literature. Our novel algorithms and analyses
target general atom sets and general objective functions, and hence are
directly applicable to a large variety of learning settings.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,033 | Probing the accretion disc structure by the twin kHz QPOs and spins of neutron stars in LMXBs | We analyze the relation between the emission radii of twin kilohertz
quasi-periodic oscillations (kHz QPOs) and the co-rotation radii of the 12
neutron star low mass X-ray binaries (NS-LMXBs) which are simultaneously
detected with the twin kHz QPOs and NS spins. We find that the average
co-rotation radius of these sources is r_co about 32 km, and all the emission
positions of twin kHz QPOs lie inside the co-rotation radii, indicating that the
twin kHz QPOs are formed in the spin-up process. We note that the upper
frequency of the twin kHz QPOs exceeds the NS spin frequency by > 10%, which may
indicate a critical velocity difference between the Keplerian motion of the
accreted matter and the NS spin that corresponds to the production of twin
kHz QPOs. In addition, we find that about 83% of twin kHz QPOs cluster
around the radius range of 15-20 km, which may be affected by the hard surface
or the local strong magnetic field of NS. As a special case, SAX J1808.4-3658
shows larger twin kHz QPO emission radii of r about 21-24 km, which may
be due to its low accretion rate or small measured NS mass (< 1.4 solar mass).
| 0 | 1 | 0 | 0 | 0 | 0 |
17,034 | Can scientists and their institutions become their own open access publishers? | This article offers a personal perspective on the current state of academic
publishing, and posits that the scientific community is beset with journals
that contribute little valuable knowledge, overload the community's capacity
for high-quality peer review, and waste resources. Open access publishing can
offer solutions that benefit researchers and other information users, as well
as institutions and funders, but commercial journal publishers have influenced
open access policies and practices in ways that favor their economic interests
over those of other stakeholders in knowledge creation and sharing. One way to
free research from constraints on access is the diamond route of open access
publishing, in which institutions and funders that produce new knowledge
reclaim responsibility for publication via institutional journals or other open
platforms. I argue that research journals (especially those published for
profit) may no longer be fit for purpose, and hope that readers will consider
whether the time has come to put responsibility for publishing back into the
hands of researchers and their institutions. The potential advantages and
challenges involved in a shift away from for-profit journals in favor of
institutional open access publishing are explored.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,035 | Character sums for elliptic curve densities | If $E$ is an elliptic curve over $\mathbb{Q}$, then it follows from work of
Serre and Hooley that, under the assumption of the Generalized Riemann
Hypothesis, the density of primes $p$ such that the group of
$\mathbb{F}_p$-rational points of the reduced curve $\tilde{E}(\mathbb{F}_p)$
is cyclic can be written as an infinite product $\prod \delta_\ell$ of local
factors $\delta_\ell$ reflecting the degree of the $\ell$-torsion fields,
multiplied by a factor that corrects for the entanglements between the various
torsion fields. We show that this correction factor can be interpreted as a
character sum, and the resulting description allows us to easily determine
non-vanishing criteria for it. We apply this method in a variety of other
settings. Among these, we consider the aforementioned problem with the
additional condition that the primes $p$ lie in a given arithmetic progression.
We also study the conjectural constants appearing in Koblitz's conjecture, a
conjecture which relates to the density of primes $p$ for which the cardinality
of the group of $\mathbb{F}_p$-points of $E$ is prime.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,036 | A monolithic fluid-structure interaction formulation for solid and liquid membranes including free-surface contact | A unified fluid-structure interaction (FSI) formulation is presented for
solid, liquid and mixed membranes. Nonlinear finite elements (FE) and the
generalized-alpha scheme are used for the spatial and temporal discretization.
The membrane discretization is based on curvilinear surface elements that can
describe large deformations and rotations, and also provide a straightforward
description for contact. The fluid is described by the incompressible
Navier-Stokes equations, and its discretization is based on stabilized
Petrov-Galerkin FE. The coupling between fluid and structure uses a conforming
sharp interface discretization, and the resulting non-linear FE equations are
solved monolithically within the Newton-Raphson scheme. An arbitrary
Lagrangian-Eulerian formulation is used for the fluid in order to account for
the mesh motion around the structure. The formulation is very general and
admits diverse applications that include contact at free surfaces. This is
demonstrated by two analytical and three numerical examples exhibiting strong
coupling between fluid and structure. The examples include balloon inflation,
droplet rolling and flapping flags. They span a Reynolds-number range from
0.001 to 2000. One of the examples considers the extension to rotation-free
shells using isogeometric FE.
| 1 | 1 | 0 | 0 | 0 | 0 |
17,037 | Different Non-extensive Models for heavy-ion collisions | The transverse momentum ($p_T$) spectra from heavy-ion collisions at
intermediate momenta are described by non-extensive statistical models.
Assuming a fixed relative variance of the temperature fluctuating event by
event or alternatively a fixed mean multiplicity in a negative binomial
distribution (NBD), two different linear relations emerge between the
temperature, $T$, and the Tsallis parameter $q-1$. Our results qualitatively
agree with those of G.~Wilk. Furthermore, we revisit the "Soft+Hard" model,
proposed recently by G.~G.~Barnaföldi \textit{et al.}, using a $T$-independent
average $p_T^2$ assumption. Finally, we compare our results with those predicted by
another deformed distribution, using Kaniadakis' $\kappa$ parametrization.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,038 | Efficient Toxicity Prediction via Simple Features Using Shallow Neural Networks and Decision Trees | Toxicity prediction of chemical compounds is a grand challenge. Lately,
significant progress in accuracy has been achieved, but at the cost of a huge
set of features, complex black-box techniques such as deep neural networks, and
enormous computational resources. In this paper, we strongly argue
for the models and methods that are simple in machine learning characteristics,
efficient in computing resource usage, and powerful to achieve very high
accuracy levels. To demonstrate this, we develop a single task-based chemical
toxicity prediction framework using only 2D features that are less compute
intensive. We effectively use a decision tree to obtain an optimum number of
features from a collection of thousands of them. We use a shallow neural
network and jointly optimize it with the decision tree, taking both network
parameters and input features into account. Our model needs only a minute on a
single CPU for its training while existing methods using deep neural networks
need about 10 min on an NVIDIA Tesla K40 GPU. However, we obtain similar or better
performance on several toxicity benchmark tasks. We also develop a cumulative
feature ranking method which enables us to identify features that can help
chemists perform prescreening of toxic compounds effectively.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,039 | Minmax Hierarchies and Minimal Surfaces in Manifolds | We introduce a general scheme that generates successive min-max
problems for producing critical points of higher and higher indices for
Palais-Smale functionals in Banach manifolds equipped with Finsler structures.
We call the resulting tree of minmax problems a minmax hierarchy. Using the
viscosity approach to the minmax theory of minimal surfaces introduced by the
author in a series of recent works, we explain how this scheme can be deformed
for producing smooth minimal surfaces of strictly increasing area in arbitrary
codimension. We implement this scheme in the case of the $3$-dimensional
sphere. In particular, we give a min-max characterization of the Clifford
Torus and conjecture what the next minimal surfaces in the $S^3$
hierarchy are. Among other results, we prove here the lower semi-continuity of the
Morse Index in the viscosity method below an area level.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,040 | Nonseparable Multinomial Choice Models in Cross-Section and Panel Data | Multinomial choice models are fundamental for empirical modeling of economic
choices among discrete alternatives. We analyze identification of binary and
multinomial choice models when the choice utilities are nonseparable in
observed attributes and multidimensional unobserved heterogeneity with
cross-section and panel data. We show that derivatives of choice probabilities
with respect to continuous attributes are weighted averages of utility
derivatives in cross-section models with exogenous heterogeneity. In the
special case of random coefficient models with an independent additive effect,
we further characterize that the probability derivative at zero is proportional
to the population mean of the coefficients. We extend the identification
results to models with endogenous heterogeneity using either a control function
or panel data. In time stationary panel models with two periods, we find that
differences over time of derivatives of choice probabilities identify utility
derivatives "on the diagonal," i.e. when the observed attributes take the same
values in the two periods. We also show that time stationarity does not
identify structural derivatives "off the diagonal" both in continuous and
multinomial choice panel models.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,041 | Corona limits of tilings : Periodic case | We study the limit shape of successive coronas of a tiling, which models the
growth of crystals. We define basic terminologies and discuss the existence and
uniqueness of corona limits, and then prove that corona limits are completely
characterized by directional speeds. As an application, we give another proof
that the corona limit of a periodic tiling is a centrally symmetric convex
polyhedron (see [Zhuravlev 2001], [Maleev-Shutov 2011]).
| 0 | 0 | 1 | 0 | 0 | 0 |
17,042 | The Spatial Range of Conformity | Properties of galaxies like their absolute magnitude and their stellar mass
content are correlated. These correlations are tighter for close pairs of
galaxies, which is called galactic conformity. In hierarchical structure
formation scenarios, galaxies form within dark matter halos. To explain the
amplitude and the spatial range of galactic conformity two--halo terms or
assembly bias become important. With the scale dependent correlation
coefficients the amplitude and the spatial range of conformity are determined
from galaxy and halo samples. The scale dependent correlation coefficients are
introduced as a new descriptive statistic to quantify the correlations between
properties of galaxies or halos, depending on the distances to other galaxies
or halos. These scale dependent correlation coefficients can be applied to the
galaxy distribution directly. Neither a splitting of the sample into
subsamples, nor an a priori clustering is needed. This new descriptive
statistic is applied to galaxy catalogues derived from the Sloan Digital Sky
Survey III and to halo catalogues from the MultiDark simulations. In the galaxy
sample, the correlations between absolute magnitude, velocity dispersion,
ellipticity, and stellar mass content are investigated. The correlations of
mass, spin, and ellipticity are explored in the halo samples. Both for galaxies
and halos a scale dependent conformity is confirmed. Moreover, the scale
dependent correlation coefficients reveal a signal of conformity out to 40 Mpc
and beyond. The halo and galaxy samples show a differing amplitude and range of
conformity.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,043 | Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis | Stochastic Gradient Langevin Dynamics (SGLD) is a popular variant of
Stochastic Gradient Descent, where properly scaled isotropic Gaussian noise is
added to an unbiased estimate of the gradient at each iteration. This modest
change allows SGLD to escape local minima and suffices to guarantee asymptotic
convergence to global minimizers for sufficiently regular non-convex objectives
(Gelfand and Mitter, 1991). The present work provides a nonasymptotic analysis
in the context of non-convex learning problems, giving finite-time guarantees
for SGLD to find approximate minimizers of both empirical and population risks.
As in the asymptotic setting, our analysis relates the discrete-time SGLD
Markov chain to a continuous-time diffusion process. A new tool that drives the
results is the use of weighted transportation cost inequalities to quantify the
rate of convergence of SGLD to a stationary distribution in the Euclidean
$2$-Wasserstein distance.
| 1 | 0 | 1 | 1 | 0 | 0 |
17,044 | Multilevel preconditioner of Polynomial Chaos Method for quantifying uncertainties in a blood pump | More than 23 million people worldwide suffer from heart failure. Although
the modern transplant operation is well established, the lack of heart
donations severely restricts transplantation frequency. In this respect,
ventricular assist devices (VADs) can play an important role in
supporting patients during the waiting period and after surgery. Moreover, it
has been shown that VADs based on blood pumps have advantages for working
under different conditions. While a lot of work has been done on modeling the
functionality of the blood pump, quantifying uncertainties in a numerical
model remains a challenging task. We consider the Polynomial Chaos (PC) method,
introduced by Wiener for modeling stochastic processes with Gaussian
distributions. The Galerkin projection, the intrusive version of the generalized
Polynomial Chaos (gPC), has been extensively studied and applied to various
problems. The intrusive Galerkin approach represents the stochastic process
directly with Polynomial Chaos series expansions and can therefore
reduce the total computing effort compared with classical non-intrusive
methods. In our previous work we compared different preconditioning techniques
for a steady state simulation of a blood pump configuration; the comparison
showed that an inexact multilevel preconditioner has promising performance. In
this work, we simulate instationary blood flow through an FDA blood pump
configuration with the Galerkin projection method, which is implemented in our open
source finite element library Hiflow3. Three uncertainty sources are
considered: the inflow boundary condition, the rotor angular speed and the dynamic
viscosity. The numerical results are demonstrated with more than 30 million
degrees of freedom using a supercomputer.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,045 | Superradiant Mott Transition | The combination of strong correlation and emergent lattice can be achieved
when quantum gases are confined in a superradiant Fabry-Perot cavity. In
addition to the discoveries of exotic phases, such as density wave ordered Mott
insulator and superfluid, a surprising kink structure is found in the slope of
the cavity strength as a function of the pumping strength. In this Letter, we
show that the appearance of such a kink is a manifestation of a liquid-gas like
transition between two superfluids with different densities. The slopes in the
immediate neighborhood of the kink become divergent at the liquid-gas critical
points and display a critical scaling law with a critical exponent 1 in the
quantum critical region. Our predictions could be tested in current
experimental set-up.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,046 | Communication via FRET in Nanonetworks of Mobile Proteins | A practical, biologically motivated case of protein complexes (immunoglobulin
G and FcRII receptors) moving on the surface of mast cells, which are common
parts of the immune system, is investigated. Proteins are considered as
nanomachines creating a nanonetwork. Accurate molecular models of the proteins
and the fluorophores which act as their nanoantennas are used to simulate the
communication between the nanomachines when they are close to each other. The
theory of diffusion-based Brownian motion is applied to model movements of the
proteins. It is assumed that fluorophore molecules send and receive signals
using Förster Resonance Energy Transfer. The probability of efficient
signal transfer and the respective bit error rate are calculated and discussed.
| 0 | 0 | 0 | 0 | 1 | 0 |
17,047 | Multivariate generalized Pareto distributions: parametrizations, representations, and properties | Multivariate generalized Pareto distributions arise as the limit
distributions of exceedances over multivariate thresholds of random vectors in
the domain of attraction of a max-stable distribution. These distributions can
be parametrized and represented in a number of different ways. Moreover,
generalized Pareto distributions enjoy a number of interesting stability
properties. An overview of the main features of such distributions is given,
expressed compactly in several parametrizations, giving the potential user of
these distributions a convenient catalogue of ways to handle and work with
generalized Pareto distributions.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,048 | Invertibility of spectral x-ray data with pileup--two dimension-two spectrum case | In the Alvarez-Macovski method, the line integrals of the x-ray basis set
coefficients are computed from measurements with multiple spectra. An important
question is whether the transformation from measurements to line integrals is
invertible. This paper presents a proof that for a system with two spectra and
a photon counting detector, pileup does not affect the invertibility of the
system. If the system is invertible with no pileup, it will remain invertible
with pileup although the reduced Jacobian may lead to increased noise.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,049 | Steinberg representations and harmonic cochains for split adjoint quasi-simple groups | Let $G$ be an adjoint quasi-simple group defined and split over a
non-archimedean local field $K$. We prove that the dual of the Steinberg
representation of $G$ is isomorphic to a certain space of harmonic cochains on
the Bruhat-Tits building of $G$. The Steinberg representation is considered
with coefficients in any commutative ring.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,050 | Lorentzian surfaces and the curvature of the Schmidt metric | The b-boundary is a mathematical tool used to attach a topological boundary
to incomplete Lorentzian manifolds using a Riemannian metric called the Schmidt
metric on the frame bundle. In this paper, we give the general form of the
Schmidt metric in the case of Lorentzian surfaces. Furthermore, we write the
Ricci scalar of the Schmidt metric in terms of the Ricci scalar of the
Lorentzian manifold and give some examples. Finally, we discuss some
applications to general relativity.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,051 | Mixed Precision Solver Scalable to 16000 MPI Processes for Lattice Quantum Chromodynamics Simulations on the Oakforest-PACS System | Lattice Quantum Chromodynamics (Lattice QCD) is a quantum field theory on a
finite discretized space-time box so as to numerically compute the dynamics of
quarks and gluons to explore the nature of subatomic world. Solving the
equation of motion of quarks (quark solver) is the most compute-intensive part
of the lattice QCD simulations and is one of the legacy HPC applications. We
have developed a mixed-precision quark solver for a large Intel Xeon Phi (KNL)
system named "Oakforest-PACS", employing the $O(a)$-improved Wilson quarks as
the discretized equation of motion. The nested-BiCGSTab algorithm for the
solver was implemented and optimized using mixed-precision,
communication-computation overlapping with MPI-offloading, SIMD vectorization,
and thread stealing techniques. The solver achieved 2.6 PFLOPS in the
single-precision part on a $400^3\times 800$ lattice using 16000 MPI processes
on 8000 nodes on the system.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,052 | A spectral/hp element MHD solver | A new MHD solver, based on the Nektar++ spectral/hp element framework, is
presented in this paper. The velocity and electric potential quasi-static MHD
model is used. The Hartmann flow in plane channel and its stability, the
Hartmann flow in rectangular duct, and the stability of Hunt's flow are
explored as examples. Exponential convergence is achieved and the resulting
numerical values were found to have an accuracy up to $10^{-12}$ for the steady
flows compared to an exact solution, and $10^{-5}$ for the stability
eigenvalues compared to independent numerical results.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,053 | Journalists' information needs, seeking behavior, and its determinants on social media | We describe the results of a qualitative study on journalists' information
seeking behavior on social media. Based on interviews with eleven journalists
along with a study of a set of university level journalism modules, we
determined the categories of information need types that lead journalists to
social media. We also determined the ways that social media is exploited as a
tool to satisfy information needs, and we identify the influential factors that
impact journalists' information seeking behavior. We find that not only is
social media used as an information source, but it can also be a supplier of
stories found serendipitously. We find seven information need types that expand
the types found in previous work. We also find five categories of influential
factors that affect the way journalists seek information.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,054 | Fast Low-Rank Bayesian Matrix Completion with Hierarchical Gaussian Prior Models | The problem of low rank matrix completion is considered in this paper. To
exploit the underlying low-rank structure of the data matrix, we propose a
hierarchical Gaussian prior model, where columns of the low-rank matrix are
assumed to follow a Gaussian distribution with zero mean and a common precision
matrix, and a Wishart distribution is specified as a hyperprior over the
precision matrix. We show that such a hierarchical Gaussian prior has the
potential to encourage a low-rank solution. Based on the proposed hierarchical
prior model, a variational Bayesian method is developed for matrix completion,
where the generalized approximate message passing (GAMP) technique is embedded
into the variational Bayesian inference in order to circumvent cumbersome
matrix inverse operations. Simulation results show that our proposed method
demonstrates superiority over existing state-of-the-art matrix completion
methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,055 | Emergence of superconductivity in the canonical heavy-electron metal YbRh2Si2 | We report magnetic and calorimetric measurements down to T = 1 mK on the
canonical heavy-electron metal YbRh2Si2. The data reveal the development of
nuclear antiferromagnetic order slightly above 2 mK. The latter weakens the
primary electronic antiferromagnetism, thereby paving the way for
heavy-electron superconductivity below Tc = 2 mK. Our results demonstrate that
superconductivity driven by quantum criticality is a general phenomenon.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,056 | Obtaining a Proportional Allocation by Deleting Items | We consider the following control problem on fair allocation of indivisible
goods. Given a set $I$ of items and a set of agents, each having strict linear
preference over the items, we ask for a minimum subset of the items whose
deletion guarantees the existence of a proportional allocation in the remaining
instance; we call this problem Proportionality by Item Deletion (PID). Our main
result is a polynomial-time algorithm that solves PID for three agents. By
contrast, we prove that PID is computationally intractable when the number of
agents is unbounded, even if the number $k$ of item deletions allowed is small,
since the problem turns out to be W[3]-hard with respect to the parameter $k$.
Additionally, we provide some tight lower and upper bounds on the complexity of
PID when regarded as a function of $|I|$ and $k$.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,057 | DeepFace: Face Generation using Deep Learning | We use CNNs to build a system that both classifies images of faces based on a
variety of different facial attributes and generates new faces given a set of
desired facial characteristics. After introducing the problem and providing
context in the first section, we discuss recent work related to image
generation in Section 2. In Section 3, we describe the methods used to
fine-tune our CNN and generate new images using a novel approach inspired by a
Gaussian mixture model. In Section 4, we discuss our working dataset and
describe our preprocessing steps and handling of facial attributes. Finally, in
Sections 5, 6 and 7, we explain our experiments and results and conclude in the
following section. Our classification system has 82\% test accuracy.
Furthermore, our generation pipeline successfully creates well-formed faces.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,058 | High quality mesh generation using cross and asterisk fields: Application on coastal domains | This paper presents a method to generate high quality triangular or
quadrilateral meshes that uses direction fields and a frontal point insertion
strategy. Two types of direction fields are considered: asterisk fields and
cross fields. With asterisk fields we generate high quality triangulations,
while with cross fields we generate right-angled triangulations that are
optimal for transformation to quadrilateral meshes. The input of our algorithm
is an initial triangular mesh and a direction field calculated on it. The goal
is to compute the vertices of the final mesh by an advancing front strategy
along the direction field. We present an algorithm that efficiently
generates the points using solely information from the base mesh. A
multi-threaded implementation of our algorithm is presented, allowing us to
achieve significant speedup of the point generation. Regarding the
quadrangulation process, we develop a quality criterion for right-angled
triangles with respect to the local cross field and an optimization process
based on it. Thus we are able to further improve the quality of the output
quadrilaterals. The algorithm is demonstrated on the sphere and examples of
high quality triangular and quadrilateral meshes of coastal domains are
presented.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,059 | ELFI: Engine for Likelihood-Free Inference | Engine for Likelihood-Free Inference (ELFI) is a Python software library for
performing likelihood-free inference (LFI). ELFI provides a convenient syntax
for arranging components in LFI, such as priors, simulators, summaries or
distances, to a network called ELFI graph. The components can be implemented in
a wide variety of languages. The stand-alone ELFI graph can be used with any of
the available inference methods without modifications. A central method
implemented in ELFI is Bayesian Optimization for Likelihood-Free Inference
(BOLFI), which has recently been shown to accelerate likelihood-free inference
up to several orders of magnitude by surrogate-modelling the distance. ELFI
also has inbuilt support for storing output data for reuse and analysis, and
supports parallelization of computation from multiple cores up to a cluster
environment. ELFI is designed to be extensible and provides interfaces for
widening its functionality. This makes adding new inference methods to
ELFI straightforward and automatically compatible with the inbuilt features.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,060 | Boosting Adversarial Attacks with Momentum | Deep neural networks are vulnerable to adversarial examples, which poses
security concerns for these algorithms due to the potentially severe
consequences. Adversarial attacks serve as an important surrogate to evaluate
the robustness of deep learning models before they are deployed. However, most
of existing adversarial attacks can only fool a black-box model with a low
success rate. To address this issue, we propose a broad class of momentum-based
iterative algorithms to boost adversarial attacks. By integrating the momentum
term into the iterative process for attacks, our methods can stabilize update
directions and escape from poor local maxima during the iterations, resulting
in more transferable adversarial examples. To further improve the success rates
for black-box attacks, we apply momentum iterative algorithms to an ensemble of
models, and show that the adversarially trained models with a strong defense
ability are also vulnerable to our black-box attacks. We hope that the proposed
methods will serve as a benchmark for evaluating the robustness of various deep
models and defense methods. With this method, we won first place in both the NIPS
2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack
competitions.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,061 | Information spreading during emergencies and anomalous events | The most critical time for information to spread is in the aftermath of a
serious emergency, crisis, or disaster. Individuals affected by such situations
can now turn to an array of communication channels, from mobile phone calls and
text messages to social media posts, when alerting social ties. These channels
drastically improve the speed of information in a time-sensitive event, and
provide extant records of human dynamics during and afterward the event.
Retrospective analysis of such anomalous events provides researchers with a
class of "found experiments" that may be used to better understand social
spreading. In this chapter, we study information spreading due to a number of
emergency events, including the Boston Marathon Bombing and a plane crash at a
western European airport. We also contrast the information that may
be gleaned from social media data with mobile phone data, and we estimate
the rate of anomalous events in a mobile phone dataset using a proposed anomaly
detection method.
| 1 | 1 | 0 | 0 | 0 | 0 |
17,062 | Large-Scale Plant Classification with Deep Neural Networks | This paper discusses the potential of applying deep learning techniques for
plant classification and its usage for citizen science in large-scale
biodiversity monitoring. We show that plant classification using near
state-of-the-art convolutional network architectures like ResNet50 achieves
significant improvements in accuracy compared to the most widespread plant
classification application in test sets composed of thousands of different
species labels. We find that the predictions can be confidently used as a
baseline classification in citizen science communities like iNaturalist (or its
Spanish fork, Natusfera) which in turn can share their data with biodiversity
portals like GBIF.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,063 | Deep Reinforcement Learning for General Video Game AI | The General Video Game AI (GVGAI) competition and its associated software
framework provides a way of benchmarking AI algorithms on a large number of
games written in a domain-specific description language. While the competition
has seen plenty of interest, it has so far focused on online planning,
providing a forward model that allows the use of algorithms such as Monte Carlo
Tree Search.
In this paper, we describe how we interface GVGAI to the OpenAI Gym
environment, a widely used way of connecting agents to reinforcement learning
problems. Using this interface, we characterize how widely used implementations
of several deep reinforcement learning algorithms fare on a number of GVGAI
games. We further analyze the results to provide a first indication of the
relative difficulty of these games, both with respect to each other and to
those in the Arcade Learning Environment under similar conditions.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,064 | Purely infinite labeled graph $C^*$-algebras | In this paper, we consider pure infiniteness of generalized Cuntz-Krieger
algebras associated to labeled spaces $(E,\mathcal{L},\mathcal{E})$. It is
shown that a $C^*$-algebra $C^*(E,\mathcal{L},\mathcal{E})$ is purely infinite
in the sense that every nonzero hereditary subalgebra contains an infinite
projection (we call this property (IH)) if $(E, \mathcal{L},\mathcal{E})$ is
disagreeable and every vertex connects to a loop. We also prove that under the
condition analogous to (K) for usual graphs,
$C^*(E,\mathcal{L},\mathcal{E})=C^*(p_A, s_a)$ is purely infinite in the sense
of Kirchberg and R{\o}rdam if and only if every generating projection $p_A$,
$A\in \mathcal{E}$, is properly infinite, and also if and only if every
quotient of $C^*(E,\mathcal{L},\mathcal{E})$ has the property (IH).
| 0 | 0 | 1 | 0 | 0 | 0 |
17,065 | From safe screening rules to working sets for faster Lasso-type solvers | Convex sparsity-promoting regularizations are ubiquitous in modern
statistical learning. By construction, they yield solutions with few non-zero
coefficients, which correspond to saturated constraints in the dual
optimization formulation. Working set (WS) strategies are generic optimization
techniques that consist in solving simpler problems that only consider a subset
of constraints, whose indices form the WS. Working set methods therefore
involve two nested iterations: the outer loop corresponds to the definition of
the WS and the inner loop calls a solver for the subproblems. For the Lasso
estimator a WS is a set of features, while for a Group Lasso it refers to a set
of groups. In practice, WS are generally small in this context so the
associated feature Gram matrix can fit in memory. Here we show that the
Gauss-Southwell rule (a greedy strategy for block coordinate descent
techniques) leads to fast solvers in this case. Combined with a working set
strategy based on an aggressive use of so-called Gap Safe screening rules, we
propose a solver achieving state-of-the-art performance on sparse learning
problems. Results are presented on Lasso and multi-task Lasso estimators.
| 1 | 0 | 1 | 1 | 0 | 0 |
17,066 | Exoplanet Radius Gap Dependence on Host Star Type | Exoplanets smaller than Neptune are numerous, but the nature of the planet
populations in the 1-4 Earth radii range remains a mystery. The complete Kepler
sample of Q1-Q17 exoplanet candidates shows a radius gap at ~ 2 Earth radii, as
reported by us in January 2017 in LPSC conference abstract #1576 (Zeng et al.
2017). A careful analysis of Kepler host stars spectroscopy by the CKS survey
allowed Fulton et al. (2017) in March 2017 to unambiguously show this radius
gap. The cause of this gap is still under discussion (Ginzburg et al. 2017;
Lehmer & Catling 2017; Owen & Wu 2017). Here we add to our original analysis
the dependence of the radius gap on host star type.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,067 | Mapping the Invocation Structure of Online Political Interaction | The surge in political information, discourse, and interaction has been one
of the most important developments in social media over the past several years.
There is rich structure in the interaction among different viewpoints on the
ideological spectrum. However, we still have only a limited analytical
vocabulary for expressing the ways in which these viewpoints interact.
In this paper, we develop network-based methods that operate on the ways in
which users share content; we construct \emph{invocation graphs} on Web domains
showing the extent to which pages from one domain are invoked by users to reply
to posts containing pages from other domains. When we locate the domains on a
political spectrum induced from the data, we obtain an embedded graph showing
how these interaction links span different distances on the spectrum. The
structure of this embedded network, and its evolution over time, helps us
derive macro-level insights about how political interaction unfolded through
2016, leading up to the US Presidential election. In particular, we find that
the domains invoked in replies spanned increasing distances on the spectrum
over the months approaching the election, and that there was clear asymmetry
between the left-to-right and right-to-left patterns of linkage.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,068 | Collective decision for open set recognition | In open set recognition (OSR), almost all existing methods are designed
specifically for recognizing individual instances, even when these instances
arrive collectively in a batch. At decision time, recognizers either reject
instances or assign them to a known class using an empirically set threshold.
The threshold thus plays a key role; however, its selection usually depends
only on knowledge of the known classes, inevitably incurring risk because no
information from the unknown classes is available. Moreover, a more realistic
OSR system should not merely rest on a reject decision but should go further,
in particular by discovering the hidden unknown classes among the rejected
instances, a task to which existing OSR methods pay little attention. In this
paper, we introduce a novel collective/batch decision strategy with an aim to
extend existing OSR for new class discovery while considering correlations
among the testing instances. Specifically, a collective decision-based OSR
framework (CD-OSR) is proposed by slightly modifying the Hierarchical Dirichlet
process (HDP). Thanks to the HDP, our CD-OSR does not need to define the
specific threshold and can automatically reserve space for unknown classes in
testing, naturally resulting in a new class discovery function. Finally,
extensive experiments on benchmark datasets indicate the validity of CD-OSR.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,069 | HARPS-N high spectral resolution observations of Cepheids I. The Baade-Wesselink projection factor of δ Cep revisited | The projection factor p is the key quantity used in the Baade-Wesselink (BW)
method for distance determination; it converts radial velocities into pulsation
velocities. Several methods are used to determine p, such as geometrical and
hydrodynamical models or the inverse BW approach when the distance is known. We
analyze new HARPS-N spectra of delta Cep to measure its cycle-averaged
atmospheric velocity gradient in order to better constrain the projection
factor. We first apply the inverse BW method to derive p directly from
observations. The projection factor can be divided into three subconcepts: (1)
a geometrical effect (p0); (2) the velocity gradient within the atmosphere
(fgrad); and (3) the relative motion of the optical pulsating photosphere with
respect to the corresponding mass elements (fo-g). We then measure the fgrad
value of delta Cep for the first time. When the HARPS-N mean cross-correlated
line-profiles are fitted with a Gaussian profile, the projection factor is
pcc-g = 1.239 +/- 0.034(stat) +/- 0.023(syst). When we consider the different
amplitudes of the radial velocity curves that are associated with 17 selected
spectral lines, we measure projection factors ranging from 1.273 to 1.329. We
find a relation between fgrad and the line depth measured when the Cepheid is
at minimum radius. This relation is consistent with that obtained from our best
hydrodynamical model of delta Cep and with our projection factor decomposition.
Using the observational values of p and fgrad found for the 17 spectral lines,
we derive a semi-theoretical value of fo-g. We alternatively obtain fo-g =
0.975+/-0.002 or 1.006+/-0.002 assuming models using radiative transfer in
plane-parallel or spherically symmetric geometries, respectively. The new
HARPS-N observations of delta Cep are consistent with our decomposition of the
projection factor.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,070 | Using Deep Learning and Google Street View to Estimate the Demographic Makeup of the US | The United States spends more than $1B each year on initiatives such as the
American Community Survey (ACS), a labor-intensive door-to-door study that
measures statistics relating to race, gender, education, occupation,
unemployment, and other demographic factors. Although a comprehensive source of
data, the lag between demographic changes and their appearance in the ACS can
exceed half a decade. As digital imagery becomes ubiquitous and machine vision
techniques improve, automated data analysis may provide a cheaper and faster
alternative. Here, we present a method that determines socioeconomic trends
from 50 million images of street scenes, gathered in 200 American cities by
Google Street View cars. Using deep learning-based computer vision techniques,
we determined the make, model, and year of all motor vehicles encountered in
particular neighborhoods. Data from this census of motor vehicles, which
enumerated 22M automobiles in total (8% of all automobiles in the US), was used
to accurately estimate income, race, education, and voting patterns, with
single-precinct resolution. (The average US precinct contains approximately
1000 people.) The resulting associations are surprisingly simple and powerful.
For instance, if the number of sedans encountered during a 15-minute drive
through a city is higher than the number of pickup trucks, the city is likely
to vote for a Democrat during the next Presidential election (88% chance);
otherwise, it is likely to vote Republican (82%). Our results suggest that
automated systems for monitoring demographic trends may effectively complement
labor-intensive approaches, with the potential to detect trends with fine
spatial resolution, in close to real time.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,071 | Gaussian Process Neurons Learn Stochastic Activation Functions | We propose stochastic, non-parametric activation functions that are fully
learnable and individual to each neuron. Complexity and the risk of overfitting
are controlled by placing a Gaussian process prior over these functions. The
result is the Gaussian process neuron, a probabilistic unit that can be used as
the basic building block for probabilistic graphical models that resemble the
structure of neural networks. The proposed model can intrinsically handle
uncertainties in its inputs and self-estimate the confidence of its
predictions. Using variational Bayesian inference and the central limit
theorem, a fully deterministic loss function is derived, allowing it to be
trained as efficiently as a conventional neural network using mini-batch
gradient descent. The posterior distribution of activation functions is
inferred from the training data alongside the weights of the network.
The proposed model favorably compares to deep Gaussian processes, both in
model complexity and efficiency of inference. It can be directly applied to
recurrent or convolutional network structures, allowing its use in audio and
image processing tasks.
As a preliminary empirical evaluation, we present experiments on regression
and classification tasks, in which our model achieves performance comparable to
or better than a Dropout regularized neural network with a fixed activation
function. Experiments are ongoing and results will be added as they become
available.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,072 | The short-term price impact of trades is universal | We analyze a proprietary dataset of trades by a single asset manager,
comparing their price impact with that of the trades of the rest of the market.
In the context of a linear propagator model we find no significant difference
between the two, suggesting that both the magnitude and time dependence of
impact are universal in anonymous, electronic markets. This result is important
as optimal execution policies often rely on propagators calibrated on anonymous
data. We also find evidence that in the wake of a trade the order flow of other
market participants first adds further copy-cat trades enhancing price impact
on very short time scales. The induced order flow then quickly inverts, thereby
contributing to impact decay.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,073 | Questions on mod p representations of reductive p-adic groups | This is a list of questions raised by our joint work arXiv:1412.0737 and its
sequels.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,074 | Filamentary superconductivity in semiconducting polycrystalline ZrSe2 compound with Zr vacancies | ZrSe2 is a band semiconductor that has been studied for a long time. It has
interesting electronic properties, and because of its layered structure it can
be intercalated with different atoms to change some of its physical properties.
In this investigation we found that Zr deficiencies alter the semiconducting
behavior and the compound can be turned into a superconductor. In this paper we
report our studies related to this discovery. Decreasing the number of Zr atoms
in small proportion according to the formula ZrxSe2, where x is varied from
about 8.1 to 8.6 K, changes the semiconducting behavior to superconducting,
with transition temperatures ranging between 7.8 and 8.5 K, depending on the
deficiencies. Outside of those ranges the compound behaves as a semiconductor
with the properties already known. In our experiments we found that this new
superconductor has only a very small fraction of superconducting material, as
determined by magnetic measurements with an applied magnetic field of 10 Oe.
Our conclusion is that the superconductivity is filamentary. However, in one
studied sample the fraction was about 10.2 %, whereas in others it is only
about 1 % or less. We determined the superconducting characteristics; the
critical fields indicate a type-II superconductor with a Ginzburg-Landau κ
parameter of about 2.7. The synthesis procedure is quite standard, following
the conventional solid-state reaction. This paper includes the electronic
characteristics, the transition temperature, and the evolution of the critical
fields with temperature.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,075 | Stochastic Block Model Reveals the Map of Citation Patterns and Their Evolution in Time | In this study we map out the large-scale structure of citation networks of
science journals and follow their evolution in time by using stochastic block
models (SBMs). The SBM fitting procedures are principled methods that can be
used to find hierarchical grouping of journals into blocks that show similar
incoming and outgoing citation patterns. These methods work directly on the
citation network without the need to construct auxiliary networks based on
similarity of nodes. We fit the SBMs to the networks of journals we have
constructed from the data set of around 630 million citations and find a
variety of different types of blocks, such as clusters, bridges, sources, and
sinks. In addition we use a recent generalization of SBMs to determine how much
a manually curated classification of journals into subfields of science is
related to the block structure of the journal network and how this relationship
changes in time. The SBM method tries to find a network of blocks that is the
best high-level representation of the network of journals, and we illustrate
how these block networks (at various levels of resolution) can be used as maps
of science.
| 1 | 1 | 0 | 0 | 0 | 0 |
17,076 | Limits on the anomalous speed of gravitational waves from binary pulsars | A large class of modified theories of gravity used as models for dark energy
predict a propagation speed for gravitational waves which can differ from the
speed of light. This difference between the propagation speeds of photons and
gravitons has an impact on the emission of gravitational waves by binary
systems. Thus, we revisit the usual quadrupolar emission of binary systems for
an arbitrary propagation speed of gravitational waves and obtain the
corresponding period decay formula. We then use timing data from the
Hulse-Taylor binary pulsar and obtain that the speed of gravitational waves can
only differ from the speed of light at the percentage level. This bound places
tight constraints on dark energy models featuring an anomalous propagation
speed for gravitational waves.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,077 | Central limit theorems for entropy-regularized optimal transport on finite spaces and statistical applications | The notion of entropy-regularized optimal transport, also known as Sinkhorn
divergence, has recently gained popularity in machine learning and statistics,
as it makes feasible the use of smoothed optimal transportation distances for
data analysis. The Sinkhorn divergence allows the fast computation of an
entropically regularized Wasserstein distance between two probability
distributions supported on a finite metric space of (possibly) high-dimension.
For data sampled from one or two unknown probability distributions, we derive
the distributional limits of the empirical Sinkhorn divergence and its centered
version (Sinkhorn loss). We also propose a bootstrap procedure that allows us to
obtain new test statistics for measuring the discrepancies between multivariate
probability distributions. Our work is inspired by the results of Sommerfeld
and Munk (2016) on the asymptotic distribution of empirical Wasserstein
distance on finite space using unregularized transportation costs. Incidentally
we also analyze the asymptotic distribution of entropy-regularized Wasserstein
distances when the regularization parameter tends to zero. Simulated and real
datasets are used to illustrate our approach.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,078 | Inference for Stochastically Contaminated Variable Length Markov Chains | In this paper, we present a methodology to estimate the parameters of
stochastically contaminated models under two contamination regimes. In both
regimes, we assume that the original process is a variable length Markov chain
that is contaminated by a random noise. In the first regime we consider that
the random noise is added to the original source and in the second regime, the
random noise is multiplied by the original source. Given a contaminated sample
of these models, the original process is hidden. We then propose a two-step
estimator for the parameters of these models, that is, the probability
transitions and the noise parameter, and prove its consistency. The first step
is an adaptation of the Baum-Welch algorithm for Hidden Markov Models. This
step provides an estimate of a complete order $k$ Markov chain, where $k$ is
bigger than the order of the variable length Markov chain if it has finite
order and is a constant depending on the sample size if the hidden process has
infinite order. In the second estimation step, we propose a bootstrap Bayesian
Information Criterion, given a sample of the Markov chain estimated in the
first step, to obtain the variable length time dependence structure associated
with the hidden process. We present a simulation study showing that our
methodology is able to accurately recover the parameters of the models for a
reasonable interval of random noises.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,079 | Variable-Length Resolvability for General Sources and Channels | We introduce the problem of variable-length source resolvability, where a
given target probability distribution is approximated by encoding a
variable-length uniform random number, and the asymptotically minimum average
length rate of the uniform random numbers, called the (variable-length)
resolvability, is investigated. We first analyze the variable-length
resolvability with the variational distance as an approximation measure. Next,
we investigate the case under the divergence as an approximation measure. When
the asymptotically exact approximation is required, it is shown that the
resolvability under the two kinds of approximation measures coincides. We then
extend the analysis to the case of channel resolvability, where the target
distribution is the output distribution via a general channel due to the fixed
general source as an input. The obtained characterization of the channel
resolvability is fully general in the sense that when the channel is just the
identity mapping, the characterization reduces to the general formula for the
source resolvability. We also analyze the second-order variable-length
resolvability.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,080 | Diattenuation of Brain Tissue and its Impact on 3D Polarized Light Imaging | 3D-Polarized Light Imaging (3D-PLI) reconstructs nerve fibers in histological
brain sections by measuring their birefringence. This study investigates
another effect caused by the optical anisotropy of brain tissue -
diattenuation. Based on numerical and experimental studies and a complete
analytical description of the optical system, the diattenuation was determined
to be below 4 % in rat brain tissue. It was demonstrated that the diattenuation
effect has negligible impact on the fiber orientations derived by 3D-PLI. The
diattenuation signal, however, was found to highlight different anatomical
structures that cannot be distinguished with current imaging techniques, which
makes Diattenuation Imaging a promising extension to 3D-PLI.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,081 | Higgs Modes in the Pair Density Wave Superconducting State | The pair density wave (PDW) superconducting state has been proposed to
explain the layer- decoupling effect observed in the compound
La$_{2-x}$Ba$_x$CuO$_4$ at $x=1/8$ (Phys. Rev. Lett. 99, 127003). In this state
the superconducting order parameter is spatially modulated, in contrast with
the usual superconducting (SC) state where the order parameter is uniform. In
this work, we study the properties of the amplitude (Higgs) modes in a
unidirectional PDW state. To this end we consider a phenomenological model of
PDW type states coupled to a Fermi surface of fermionic quasiparticles. In
contrast to conventional superconductors that have a single Higgs mode,
unidirectional PDW superconductors have two Higgs modes. While in the PDW state
the Fermi surface largely remains gapless, we find that the damping of the PDW
Higgs modes into fermionic quasiparticles requires exceeding an energy
threshold. We show that this suppression of damping in the PDW state is due to
kinematics. As a result, only one of the two Higgs modes is significantly
damped. In addition, motivated by the experimental phase diagram, we discuss
the mixing of Higgs modes in the coexistence regime of the PDW and uniform SC
states. These results should be directly observable in Raman spectroscopy, in
momentum-resolved electron energy loss spectroscopy, and in resonant inelastic
X-ray scattering, thus providing evidence of the PDW states.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,082 | A Serverless Tool for Platform Agnostic Computational Experiment Management | Neuroscience has been carried into the domain of big data and high
performance computing (HPC) on the backs of initiatives in data collection and
increasingly compute-intensive tools. While managing HPC experiments
requires considerable technical acumen, platforms and standards have been
developed to ease this burden on scientists. While web-portals make resources
widely accessible, data organizations such as the Brain Imaging Data Structure
and tool description languages such as Boutiques provide researchers with a
foothold to tackle these problems using their own datasets, pipelines, and
environments. While these standards lower the barrier to adoption of HPC and
cloud systems for neuroscience applications, they still require the
consolidation of disparate domain-specific knowledge. We present Clowdr, a
lightweight tool to launch experiments on HPC systems and clouds, record rich
execution records, and enable the accessible sharing of experimental summaries
and results. Clowdr uniquely sits between web platforms and bare-metal
applications for experiment management by preserving the flexibility of
do-it-yourself solutions while providing a low barrier for developing,
deploying and disseminating neuroscientific analysis.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,083 | Traveling-wave parametric amplifier based on three-wave mixing in a Josephson metamaterial | We have developed a recently proposed Josephson traveling-wave parametric
amplifier with three-wave mixing [A. B. Zorin, Phys. Rev. Applied 6, 034006,
2016]. The amplifier consists of a microwave transmission line formed by a
serial array of nonhysteretic one-junction SQUIDs. These SQUIDs are flux-biased
in a way that the phase drops across the Josephson junctions are equal to 90
degrees and the persistent currents in the SQUID loops are equal to the
Josephson critical current values. Such a one-dimensional metamaterial
possesses a maximal quadratic nonlinearity and zero cubic (Kerr) nonlinearity.
This property allows phase matching and exponential power gain of traveling
microwaves to take place over a wide frequency range. We report the
proof-of-principle experiment performed at a temperature of T = 4.2 K on Nb
trilayer samples, which has demonstrated that our concept of a practical
broadband Josephson parametric amplifier is valid and very promising for
achieving quantum-limited operation.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,084 | Measuring LDA Topic Stability from Clusters of Replicated Runs | Background: Unstructured and textual data is increasing rapidly and Latent
Dirichlet Allocation (LDA) topic modeling is a popular data analysis method
for it. Past work suggests that instability of LDA topics may lead to
systematic errors. Aim: We propose a method that relies on replicated LDA runs,
clustering, and providing a stability metric for the topics. Method: We
generate k LDA topics and replicate this process n times resulting in n*k
topics. Then we use K-medoids to cluster the n*k topics into k clusters. The k
clusters now represent the original LDA topics and we present them like normal
LDA topics showing the ten most probable words. For the clusters, we try
multiple stability metrics, out of which we recommend Rank-Biased Overlap,
showing the stability of the topics inside the clusters. Results: We provide an
initial validation where our method is used for 270,000 Mozilla Firefox commit
messages with k=20 and n=20. We show how our topic stability metrics are
related to the contents of the topics. Conclusions: Advances in text mining
enable us to analyze large masses of text in software engineering but
non-deterministic algorithms, such as LDA, may lead to unreplicable
conclusions. Our approach makes LDA stability transparent and is also
complementary rather than alternative to many prior works that focus on LDA
parameter tuning.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,085 | Continuum Foreground Polarization and Na~I Absorption in Type Ia SNe | We present a study of the continuum polarization over the 400--600 nm range
of 19 Type Ia SNe obtained with FORS at the VLT. We separate them in those that
show Na I D lines at the velocity of their hosts and those that do not.
Continuum polarization of the sodium sample near maximum light displays a broad
range of values, from extremely polarized cases like SN 2006X to almost
unpolarized ones like SN 2011ae. The non--sodium sample shows, typically,
smaller polarization values. The continuum polarization of the sodium sample in
the 400--600 nm range is linear with wavelength and can be characterized by the
mean polarization (P$_{\rm{mean}}$). Its values span a wide range and show a
linear correlation with color, color excess, and extinction in the visual band.
Correlations with larger dispersion were found with the equivalent widths of
the Na I D and Ca II H & K lines, along with a noisy relation between
P$_{\rm{mean}}$ and
$R_{V}$, the ratio of total to selective extinction. Redder SNe show stronger
continuum polarization, with larger color excesses and extinctions. We also
confirm that high continuum polarization is associated with small values of
$R_{V}$.
The correlation between extinction and polarization -- and polarization
angles -- suggest that the dominant fraction of dust polarization is imprinted
in interstellar regions of the host galaxies.
We show that Na I D lines from foreground matter in the SN host are usually
associated with non-galactic ISM, challenging the typical assumptions in
foreground interstellar polarization models.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,086 | Toward Faultless Content-Based Playlists Generation for Instrumentals | This study deals with content-based musical playlist generation focused on
Songs and Instrumentals. Automatic playlist generation relies on collaborative
filtering and autotagging algorithms. Autotagging can solve the cold start
issue and popularity bias that are critical in music recommender systems.
However, autotagging remains to be improved and cannot generate satisfying
music playlists. In this paper, we suggest improvements toward better
autotagging-generated playlists compared to state-of-the-art. To assess our
method, we focus on the Song and Instrumental tags. Song and Instrumental are
two objective and opposite tags that are under-studied compared to genres or
moods, which are subjective and multi-modal tags. In this paper, we consider an
industrial real-world musical database that is unevenly distributed between
Songs and Instrumentals and bigger than databases used in previous studies. We
set up three incremental experiments to enhance automatic playlist generation.
Our suggested approach generates an Instrumental playlist with up to three
times less false positives than cutting edge methods. Moreover, we provide a
design of experiment framework to foster research on Songs and Instrumentals.
We give insight on how to improve further the quality of generated playlists
and to extend our methods to other musical tags. Furthermore, we provide the
source code to guarantee reproducible research.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,087 | Direct observation of the band gap transition in atomically thin ReS$_2$ | ReS$_2$ is considered as a promising candidate for novel electronic and
sensor applications. The low crystal symmetry of the van der Waals compound
ReS$_2$ leads to a highly anisotropic optical, vibrational, and transport
behavior. However, the details of the electronic band structure of this
fascinating material are still largely unexplored. We present a
momentum-resolved study of the electronic structure of monolayer, bilayer, and
bulk ReS$_2$ using k-space photoemission microscopy in combination with
first-principles calculations. We demonstrate that the valence electrons in
bulk ReS$_2$ are - contrary to assumptions in recent literature - significantly
delocalized across the van der Waals gap. Furthermore, we directly observe the
evolution of the valence band dispersion as a function of the number of layers,
revealing a significantly increased effective electron mass in single-layer
crystals. We also find that only bilayer ReS$_2$ has a direct band gap. Our
results establish bilayer ReS$_2$ as an advantageous building block for
two-dimensional devices and van der Waals heterostructures.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,088 | Lattice embeddings between types of fuzzy sets. Closed-valued fuzzy sets | In this paper we deal with the problem of extending Zadeh's operators on
fuzzy sets (FSs) to interval-valued (IVFSs), set-valued (SVFSs) and type-2
(T2FSs) fuzzy sets. Namely, it is known that identifying FSs with the SVFSs or T2FSs whose membership degrees are singletons is not order-preserving. We then
describe a family of lattice embeddings from FSs to SVFSs. Alternatively, if
the former singleton viewpoint is required, we reformulate the intersection on
hesitant fuzzy sets and introduce what we have called closed-valued fuzzy sets.
This new type of fuzzy sets extends standard union and intersection on FSs. In
addition, it allows handling together membership degrees of different nature
as, for instance, closed intervals and finite sets. Finally, all these
constructions are viewed as T2FSs forming a chain of lattices.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,089 | Coupling of Magneto-Thermal and Mechanical Superconducting Magnet Models by Means of Mesh-Based Interpolation | In this paper we present an algorithm for the coupling of magneto-thermal and
mechanical finite element models representing superconducting accelerator
magnets. The mechanical models are used during the design of the mechanical
structure as well as the optimization of the magnetic field quality under
nominal conditions. The magneto-thermal models allow for the analysis of
transient phenomena occurring during quench initiation, propagation, and
protection. Mechanical analysis of quenching magnets is of high importance
considering the design of new protection systems and the study of new
superconductor types. We use field/circuit coupling to determine temperature
and electromagnetic force evolution during the magnet discharge. These
quantities are provided as a load to existing mechanical models. The models are
discretized with different meshes and, therefore, we employ a mesh-based
interpolation method to exchange coupled quantities. The coupling algorithm is
illustrated with a simulation of the mechanical response of a standalone
high-field dipole magnet protected with CLIQ (Coupling-Loss Induced Quench)
technology.
| 1 | 1 | 0 | 0 | 0 | 0 |
17,090 | Converging expansions for Lipschitz self-similar perforations of a plane sector | In contrast with the well-known methods of matching asymptotics and
multiscale (or compound) asymptotics, the "functional analytic approach" of Lanza de Cristoforis (Analysis 28, 2008) allows one to prove convergence of
expansions around interior small holes of size $\epsilon$ for solutions of
elliptic boundary value problems. Using the method of layer potentials, the
asymptotic behavior of the solution as $\epsilon$ tends to zero is described
not only by asymptotic series in powers of $\epsilon$, but by convergent power
series. Here we use this method to investigate the Dirichlet problem for the
Laplace operator where holes are collapsing at a polygonal corner of opening
$\omega$. Then in addition to the scale $\epsilon$ there appears the scale
$\eta = \epsilon^{\pi/\omega}$. We prove that when $\pi/\omega$ is irrational,
the solution of the Dirichlet problem is given by convergent series in powers
of these two small parameters. Due to interference of the two scales, this
convergence is obtained, in full generality, by grouping together integer
powers of the two scales that are very close to each other. Nevertheless, there
exists a dense subset of openings $\omega$ (characterized by Diophantine
approximation properties), for which real analyticity in the two variables
$\epsilon$ and $\eta$ holds and the power series converge unconditionally. When
$\pi/\omega$ is rational, the series are unconditionally convergent, but
contain terms in $\log \epsilon$.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,091 | A Viral Timeline Branching Process to study a Social Network | Bio-inspired paradigms are proving to be useful in analyzing propagation and
dissemination of information in networks. In this paper we explore the use of
multi-type branching processes to analyse viral properties of content in a
social network, with and without competition from other sources. We derive and
compute various virality measures, e.g., the probability of virality, the expected number of shares, and the rate of growth of the expected number of shares. They
allow one to predict the emergence of global macro properties (e.g., viral
spread of a post in the entire network) from the laws and parameters that
determine local interactions. The local interactions greatly depend upon the
structure of the timelines holding the content and the number of friends (i.e.,
connections) of users of the network. We then formulate a non-cooperative game
problem and study the Nash equilibria as a function of the parameters. The
branching processes modelling the social network under competition turn out to
be decomposable, multi-type, continuous-time variants. In such processes, types belonging to different sub-classes evolve at different rates and have different extinction probabilities. We compute the content-provider-wise extinction probability and growth rate, and conjecture the content-provider-wise growth rate of the expected number of shares.
| 1 | 0 | 0 | 0 | 0 | 0 |
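The virality measures mentioned in the abstract above are standard branching-process quantities. As a hedged illustration that is not taken from the paper: for a single-type Galton-Watson process, the extinction probability is the smallest fixed point of the offspring probability generating function, and it can be found by simple fixed-point iteration:

```python
def extinction_probability(offspring_pmf, tol=1e-12, max_iter=100000):
    """Extinction probability of a single-type Galton-Watson branching
    process: the smallest fixed point q = f(q) of the offspring
    probability generating function f, found by iterating from q = 0."""
    def pgf(s):
        return sum(p * s ** k for k, p in enumerate(offspring_pmf))
    q = 0.0
    for _ in range(max_iter):
        q_next = pgf(q)
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

# Supercritical example: 0-3 children, each with probability 1/4 (mean 1.5).
# The smallest root of s = (1 + s + s**2 + s**3)/4 is sqrt(2) - 1.
q = extinction_probability([0.25, 0.25, 0.25, 0.25])
```

Starting the iteration at 0 guarantees convergence to the smallest fixed point, so the function returns a value below 1 for supercritical offspring laws and 1 for subcritical ones. The multi-type, decomposable processes of the paper generalize this to a system of generating functions, one per type.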
17,092 | Algorithmic Bio-surveillance For Precise Spatio-temporal Prediction of Zoonotic Emergence | Viral zoonoses have emerged as the key drivers of recent pandemics. Human
infections by zoonotic viruses are either spillover events -- isolated
infections that fail to cause a widespread contagion -- or species jumps, where
successful adaptation to the new host leads to a pandemic. Despite expensive
bio-surveillance efforts, emergence response has historically been reactive and post hoc. Here we use machine inference to demonstrate a high-accuracy
predictive bio-surveillance capability, designed to pro-actively localize an
impending species jump via automated interrogation of massive sequence
databases of viral proteins. Our results suggest that a jump might not purely
be the result of an isolated unfortunate cross-infection localized in space and
time; there are subtle yet detectable patterns of genotypic changes
accumulating in the global viral population leading up to emergence. Using tens
of thousands of protein sequences simultaneously, we train models that track
maximum achievable accuracy for disambiguating host tropism from the primary
structure of surface proteins, and show that the inverse classification
accuracy is a quantitative indicator of jump risk. We validate our claim in the
context of the 2009 swine flu outbreak, and the 2004 emergence of H5N1
subspecies of Influenza A from avian reservoirs, illustrating that
interrogation of the global viral population can unambiguously track a near
monotonic risk elevation over several preceding years leading to eventual
emergence.
| 0 | 0 | 0 | 1 | 1 | 0 |
17,093 | Practical Machine Learning for Cloud Intrusion Detection: Challenges and the Way Forward | Operationalizing machine learning based security detections is extremely
challenging, especially in a continuously evolving cloud environment.
Conventional anomaly detection does not produce satisfactory results for
analysts that are investigating security incidents in the cloud. Model
evaluation alone presents its own set of problems due to a lack of benchmark
datasets. When deploying these detections, we must deal with model compliance,
localization, and data silo issues, among many others. We pose the problem of
"attack disruption" as a way forward in the security data science space. In
this paper, we describe the framework, challenges, and open questions
surrounding the successful operationalization of machine learning based
security detections in a cloud environment and provide some insights on how we
have addressed them.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,094 | SOI RF Switch for Wireless Sensor Network | The objective of this research was to design a 0-5 GHz RF SOI switch, with
0.18um power Jazz SOI technology by using Cadence software, for health care
applications. This paper introduces the design of an RF switch implemented in
shunt-series topology. An insertion loss of 0.906 dB and an isolation of 30.95
dB were obtained at 5 GHz. The switch also achieved a third-order distortion of 53.05 dBm, and its 1 dB compression point reached 50.06 dBm. The RF switch
performance meets the desired specification requirements.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,095 | The Pentagonal Inequality | Given a positive linear combination of five (respectively seven) cosines,
where the angles are positive and sum to $\pi$, the aim of this article is to
express the sharp bound of the combination as a Positive Real Fraction in the
coefficients (hence cosine-free). The method uses algebraic and arithmetic
manipulations with judicious transformations.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,096 | The Landscape of Deep Learning Algorithms | This paper studies the landscape of empirical risk of deep neural networks by
theoretically analyzing its convergence behavior to the population risk as well
as its stationary points and properties. For an $l$-layer linear neural
network, we prove its empirical risk uniformly converges to its population risk
at the rate of $\mathcal{O}(r^{2l}\sqrt{d\log(l)}/\sqrt{n})$, where $n$ is the training sample size, $d$ is the total weight dimension, and $r$ bounds the magnitude of the weights in each layer. We then derive the stability and generalization
bounds for the empirical risk based on this result. Besides, we establish the
uniform convergence of gradient of the empirical risk to its population
counterpart. We prove the one-to-one correspondence of the non-degenerate
stationary points between the empirical and population risks with convergence
guarantees, which describes the landscape of deep neural networks. In addition,
we analyze these properties for deep nonlinear neural networks with sigmoid
activation functions. We prove similar results for convergence behavior of
their empirical risks as well as the gradients and analyze properties of their
non-degenerate stationary points.
To the best of our knowledge, this work is the first to theoretically characterize the landscapes of deep learning algorithms. Besides, our results
provide the sample complexity of training a good deep neural network. We also
provide theoretical understanding of how the neural network depth $l$, the
layer width, the network size $d$ and parameter magnitude determine the neural
network landscapes.
| 1 | 0 | 1 | 1 | 0 | 0 |
17,097 | The effect of the environment on the structure, morphology and star-formation history of intermediate-redshift galaxies | With the aim of understanding the effect of the environment on the star
formation history and morphological transformation of galaxies, we present a
detailed analysis of the colour, morphology and internal structure of cluster
and field galaxies at $0.4 \le z \le 0.8$. We use {\em HST} data for over 500
galaxies from the ESO Distant Cluster Survey (EDisCS) to quantify how the
galaxies' light distribution deviate from symmetric smooth profiles. We
visually inspect the galaxies' images to identify the likely causes for such
deviations. We find that the residual flux fraction ($RFF$), which measures the
fractional contribution to the galaxy light of the residuals left after
subtracting a symmetric and smooth model, is very sensitive to the degree of
structural disturbance but not the causes of such disturbance. On the other
hand, the asymmetry of these residuals ($A_{\rm res}$) is more sensitive to the
causes of the disturbance, with merging galaxies having the highest values of
$A_{\rm res}$. Using these quantitative parameters we find that, at a fixed
morphology, cluster and field galaxies show statistically similar degrees of
disturbance. However, there is a higher fraction of symmetric and passive
spirals in the cluster than in the field. These galaxies have smoother light
distributions than their star-forming counterparts. We also find that while
almost all field and cluster S0s appear undisturbed, there is a relatively
small population of star-forming S0s in clusters but not in the field. These
findings are consistent with relatively gentle environmental processes acting
on galaxies infalling onto clusters.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,098 | Recommendations with Negative Feedback via Pairwise Deep Reinforcement Learning | Recommender systems play a crucial role in mitigating the problem of
information overload by suggesting personalized items or services to users. The
vast majority of traditional recommender systems consider the recommendation
procedure as a static process and make recommendations following a fixed
strategy. In this paper, we propose a novel recommender system with the
capability of continuously improving its strategies during the interactions
with users. We model the sequential interactions between users and a
recommender system as a Markov Decision Process (MDP) and leverage
Reinforcement Learning (RL) to automatically learn the optimal strategies via
recommending items by trial and error and receiving reinforcements for these items
from users' feedback. Users' feedback can be positive and negative and both
types of feedback have great potential to boost recommendations. However, negative feedback is much more plentiful than positive feedback; incorporating both simultaneously is therefore challenging, since positive feedback could be buried by negative feedback. In this paper, we develop a novel approach to
incorporate them into the proposed deep recommender system (DEERS) framework.
The experimental results based on real-world e-commerce data demonstrate the
effectiveness of the proposed framework. Further experiments have been
conducted to understand the importance of both positive and negative feedback
in recommendations.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,099 | Accumulated Gradient Normalization | This work addresses the instability in asynchronous data parallel
optimization. It does so by introducing a novel distributed optimizer which is
able to efficiently optimize a centralized model under communication
constraints. The optimizer achieves this by pushing a normalized sequence of
first-order gradients to a parameter server. This implies that the magnitude of a worker delta is smaller than that of an accumulated gradient and that the delta provides a better direction towards a minimum than raw first-order gradients do; this in turn keeps possible implicit momentum fluctuations more aligned, under the assumption that all workers contribute towards a single minimum. As a result, our approach mitigates the parameter staleness problem
more effectively since staleness in asynchrony induces (implicit) momentum, and
achieves a better convergence rate compared to other optimizers such as
asynchronous EASGD and DynSGD, which we show empirically.
| 1 | 0 | 0 | 1 | 0 | 0 |
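The mechanism described in the abstract above, in which a worker accumulates several local first-order steps and pushes a normalized delta to the parameter server, can be sketched as follows. This is a minimal single-process illustration under my own assumptions (plain NumPy, a toy quadratic loss), not the authors' implementation:

```python
import numpy as np

def accumulated_normalized_delta(grad_fn, theta, lr, steps):
    """Run `steps` local SGD steps starting from `theta`, then return
    the accumulated update normalized by the number of local steps.
    The worker pushes this smaller, averaged delta to the parameter
    server instead of a single raw first-order gradient."""
    local = theta.copy()
    for _ in range(steps):
        local -= lr * grad_fn(local)
    return (local - theta) / steps

# Toy convex objective f(x) = 0.5 * ||x||^2, whose gradient is x.
grad = lambda x: x
theta = np.array([4.0, -2.0])
for _ in range(200):  # the parameter server applies workers' deltas
    theta += accumulated_normalized_delta(grad, theta, lr=0.1, steps=8)
# theta is now close to the minimum at the origin
```

The normalization by `steps` is what keeps each pushed delta small relative to the raw accumulated gradient, which is the property the abstract ties to mitigating staleness-induced momentum in the asynchronous setting.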
17,100 | An Optimal Algorithm for Changing from Latitudinal to Longitudinal Formation of Autonomous Aircraft Squadrons | This work presents an algorithm for changing from latitudinal to longitudinal
formation of autonomous aircraft squadrons. The maneuvers are defined
dynamically by using a predefined set of 3D basic maneuvers. This formation change is necessary when the squadron has to perform tasks that demand both
formations, such as lift off, georeferencing, obstacle avoidance and landing.
Simulations show that the formation change is performed without collisions. The
time complexity analysis of the transformation algorithm reveals that its
efficiency is optimal, and a proof of correctness guarantees the features of the resulting longitudinal formation.
| 1 | 0 | 0 | 0 | 0 | 0 |