title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Mapping of the dark exciton landscape in transition metal dichalcogenides | Transition metal dichalcogenides (TMDs) exhibit a remarkable exciton physics
including optically accessible (bright) as well as spin- and momentum-forbidden
(dark) excitonic states. So far, the dark exciton landscape has not been
revealed, leaving in particular the spectral position of momentum-forbidden dark
states completely unclear. This has a significant impact on the technological
application potential of TMDs, since the nature of the energetically lowest
state determines whether the material is a direct-gap semiconductor. Here, we show
how dark states can be experimentally revealed by probing the intra-excitonic
1s-2p transition. Distinguishing the optical response shortly after the
excitation (< 100$\,$fs) and after the exciton thermalization (> 1$\,$ps)
allows us to demonstrate the relative position of bright and dark excitons. We
find both in theory and experiment a clear blue-shift in the optical response
demonstrating for the first time the transition of bright exciton populations
into lower lying momentum- and spin-forbidden dark excitonic states in
monolayer WSe$_2$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Active set algorithms for estimating shape-constrained density ratios | We review and modify the active set algorithm by Duembgen et al. (2011) for
nonparametric maximum-likelihood estimation of a log-concave density. This
particular estimation problem is embedded into a more general framework
including also the estimation of a log-convex tail inflation function as
proposed by McCullagh and Polson (2012).
| 0 | 0 | 0 | 1 | 0 | 0 |
Asymptotic orthogonalization of subalgebras in II$_1$ factors | Let $M$ be a II$_1$ factor with a von Neumann subalgebra $Q\subset M$ that
has infinite index under any projection in $Q'\cap M$ (e.g., $Q$ abelian; or
$Q$ an irreducible subfactor with infinite Jones index). We prove that given
any separable subalgebra $B$ of the ultrapower II$_1$ factor $M^\omega$, for a
non-principal ultrafilter $\omega$ on $\Bbb N$, there exists a unitary element
$u\in M^\omega$ such that $uBu^*$ is orthogonal to $Q^\omega$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Neural Control Variates for Variance Reduction | In statistics and machine learning, approximation of an intractable
integral is often achieved by using an unbiased Monte Carlo estimator, but
the variance of the estimate is generally high in many applications.
Control variate approaches are well known to reduce the variance of the
estimate. These control variates are typically constructed from
predefined parametric functions or polynomials, determined using
samples drawn from the relevant distributions. Instead, we propose to construct
the control variates by learning neural networks to handle the cases when
test functions are complex. In many applications, obtaining a large number of
samples for Monte Carlo estimation is expensive, which may result in
overfitting when training a neural network. We thus further propose to employ
auxiliary random variables induced by the original ones to extend data samples
for training the neural networks. We apply the proposed control variates with
augmented variables to thermodynamic integration and reinforcement learning.
Experimental results demonstrate that our method can achieve significant
variance reduction compared with other alternatives.
| 0 | 0 | 0 | 1 | 0 | 0 |
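The classical control-variate idea this abstract builds on can be sketched in a few lines. This is not the paper's neural construction; it is a minimal example with a hypothetical integrand, estimating $E[f(X)]$ for $X \sim N(0,1)$ with $f(x) = x^2 + x$, using $g(x) = x$ (whose mean is known to be 0) as the control variate.

```python
import numpy as np

# Minimal classical control-variate sketch (illustrative, not the paper's
# neural method): reduce the variance of a plain Monte Carlo estimate.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
f = x**2 + x  # E[f(X)] = 1 under N(0, 1)
g = x         # control variate with known mean 0

# Near-optimal coefficient c* = Cov(f, g) / Var(g).
c = np.cov(f, g)[0, 1] / np.var(g)

plain = f.mean()                 # plain Monte Carlo estimate
cv = np.mean(f - c * (g - 0.0))  # control-variate estimate, same expectation
```

Here the control variate removes the linear component of $f$, shrinking the estimator variance from $\mathrm{Var}(x^2) + \mathrm{Var}(x)$ to roughly $\mathrm{Var}(x^2)$.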
On tidal energy in Newtonian two-body motion | In this work, which is based on an essential linear analysis carried out by
Christodoulou, we study the evolution of tidal energy for the motion of two
gravitating incompressible fluid balls with free boundaries obeying the
Euler-Poisson equations. The orbital energy is defined as the mechanical energy
of the two bodies' center of mass. According to the classical analysis of
Kepler and Newton, when the fluids are replaced by point masses, the conic
curve describing the trajectories of the masses is a hyperbola when the orbital
energy is positive and an ellipse when the orbital energy is negative. The
orbital energy is conserved in the case of point masses. If the point masses
are initially very far, then the orbital energy is positive, corresponding to
hyperbolic motion. However, in the motion of fluid bodies the orbital energy is
no longer conserved because part of the conserved energy is used in deforming
the boundaries of the bodies. In this case the total energy
$\tilde{\mathcal{E}}$ can be decomposed into a sum
$\tilde{\mathcal{E}}:=\widetilde{\mathcal{E}_{\mathrm{orbital}}}+\widetilde{\mathcal{E}_{\mathrm{tidal}}}$,
with $\widetilde{\mathcal{E}_{\mathrm{tidal}}}$ measuring the energy used in
deforming the boundaries, such that if
$\widetilde{\mathcal{E}_{\mathrm{orbital}}}<-c<0$ for some absolute constant
$c>0$, then the orbit of the bodies must be bounded. In this work we prove that
under appropriate conditions on the initial configuration of the system, the
fluid boundaries and velocity remain regular up to the point of the first
closest approach in the orbit, and that the tidal energy
$\widetilde{\mathcal{E}_{\mathrm{tidal}}}$ can be made arbitrarily large
relative to the total energy $\tilde{\mathcal{E}}$. In particular under these
conditions $\widetilde{\mathcal{E}_{\mathrm{orbital}}}$, which is initially
positive, becomes negative before the point of the first closest approach.
| 0 | 1 | 1 | 0 | 0 | 0 |
Markov-Modulated Linear Regression | Classical linear regression is considered for the case when the regression
parameters depend on an external random environment. The latter is described as
a continuous-time Markov chain with finite state space. Here the expected
sojourn times in the various states are additional regressors. Necessary formulas
for the estimation of the regression parameters are derived. A numerical
example illustrates the results obtained.
| 0 | 0 | 0 | 1 | 0 | 0 |
Compactness of the resolvent for the Witten Laplacian | In this paper we consider the Witten Laplacian on 0-forms and give sufficient
conditions under which the Witten Laplacian admits a compact resolvent. These
conditions are imposed on the potential itself, involving the control of high
order derivatives by lower ones, as well as the control of the positive
eigenvalues of the Hessian matrix. This compactness criterion for resolvent is
inspired by the one for the Fokker-Planck operator. Our method relies on the
nilpotent group techniques developed by Helffer-Nourrigat [Hypoellipticité
maximale pour des opérateurs polynômes de champs de vecteurs, 1985].
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Azuma inequality in spaces of subgaussian of rank $p$ random variables | For $p > 1$ define the function $\varphi_p(x) = x^2/2$ if $|x|\le 1$ and
$\varphi_p(x) = |x|^p/p - 1/p + 1/2$ if $|x| > 1$. For a random variable $\xi$
let $\tau_{\varphi_p}(\xi)$ denote $\inf\{c\ge 0 :\;
\forall_{\lambda\in\mathbb{R}}\;
\ln\mathbb{E}\exp(\lambda\xi)\le\varphi_p(c\lambda)\}$; $\tau_{\varphi_p}$ is a
norm in a space $Sub_{\varphi_p}(\Omega) =\{\xi:
\; \tau_{\varphi_p}(\xi) <\infty\}$ of $\varphi_p$-subgaussian random
variables which we call {\it subgaussian of rank $p$ random variables}. For $p
= 2$ we have the classic subgaussian random variables. The Azuma inequality
gives an estimate on the probability of the deviations of a zero-mean
martingale $(\xi_n)_{n\ge 0}$ with bounded increments from zero. In its classic
form it is assumed that $\xi_0 = 0$. In this paper we prove a version of the
Azuma inequality under the assumption that $\xi_0$ is any subgaussian of rank $p$
random variable.
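The piecewise function $\varphi_p$ defined in this abstract is easy to check numerically. The sketch below is only an illustration of the two branches and of the fact that $p = 2$ recovers the classic subgaussian case.

```python
# Illustration of phi_p from the abstract:
# phi_p(x) = x^2/2 for |x| <= 1, and |x|^p/p - 1/p + 1/2 for |x| > 1.
def phi_p(x: float, p: float) -> float:
    ax = abs(x)
    if ax <= 1:
        return ax * ax / 2
    return ax**p / p - 1 / p + 0.5

# The two branches agree at |x| = 1 (both give 1/2), and for p = 2 the
# second branch reduces to x^2/2, i.e. phi_2(x) = x^2/2 everywhere.
```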
| 0 | 0 | 1 | 0 | 0 | 0 |
Realization of the Axial Next-Nearest-Neighbor Ising model in U$_3$Al$_2$Ge$_3$ | Here we report small-angle neutron scattering (SANS) measurements and
theoretical modeling of U$_3$Al$_2$Ge$_3$. Analysis of the SANS data reveals the
phase transition to sinusoidally modulated magnetic order at
$T_{\mathrm{N}}=63$~K to be second order, and the transition to
ferromagnetic order at $T_{\mathrm{c}}=48$~K to be first order. Within the sinusoidally modulated
magnetic phase ($T_{\mathrm{c}} < T < T_{\mathrm{N}}$), we uncover a dramatic
change, by a factor of three, in the ordering wave-vector as a function of
temperature. These observations all indicate that U$_3$Al$_2$Ge$_3$ is a close
realization of the three-dimensional Axial Next-Nearest-Neighbor Ising model, a
prototypical framework for describing commensurate to incommensurate phase
transitions in frustrated magnets.
| 0 | 1 | 0 | 0 | 0 | 0 |
Long-time asymptotics for the derivative nonlinear Schrödinger equation on the half-line | We derive asymptotic formulas for the solution of the derivative nonlinear
Schrödinger equation on the half-line under the assumption that the initial
and boundary values lie in the Schwartz class. The formulas clearly show the
effect of the boundary on the solution. The approach is based on a nonlinear
steepest descent analysis of an associated Riemann-Hilbert problem.
| 0 | 1 | 1 | 0 | 0 | 0 |
Non-Semisimple Extended Topological Quantum Field Theories | We develop the general theory for the construction of Extended Topological
Quantum Field Theories (ETQFTs) associated with the Costantino-Geer-Patureau
quantum invariants of closed 3-manifolds. In order to do so, we introduce
relative modular categories, a class of ribbon categories which are modeled on
representations of unrolled quantum groups, and which can be thought of as a
non-semisimple analogue to modular categories. Our approach exploits a
2-categorical version of the universal construction introduced by Blanchet,
Habegger, Masbaum, and Vogel. The 1+1+1-ETQFTs thus obtained are realized by
symmetric monoidal 2-functors which are defined over non-rigid 2-categories of
admissible cobordisms decorated with colored ribbon graphs and cohomology
classes, and which take values in 2-categories of complete graded linear
categories. In particular, our construction extends the family of graded
2+1-TQFTs defined for the unrolled version of quantum $\mathfrak{sl}_2$ by
Blanchet, Costantino, Geer, and Patureau to a new family of graded ETQFTs. The
non-semisimplicity of the theory is witnessed by the presence of non-semisimple
graded linear categories associated with critical 1-manifolds.
| 0 | 0 | 1 | 0 | 0 | 0 |
Community Aware Random Walk for Network Embedding | Social network analysis provides meaningful information about the behavior of
network members that can be used for diverse applications such as
classification and link prediction. However, network analysis is computationally
expensive because of the feature learning required for different applications. In
recent years, much research has focused on feature learning methods for social
networks. Network embedding represents the network in a lower-dimensional
space while preserving its properties, yielding a compressed
representation of the network. In this paper, we introduce a novel algorithm
named "CARE" for network embedding that can be used for different types of
networks, including weighted, directed and complex ones. Current methods try to
preserve the local neighborhood information of nodes, whereas the proposed method
utilizes both local neighborhood and community information of network nodes to cover
both the local and global structure of social networks. CARE builds customized
paths, which consist of the local and global structure of network nodes, as a
basis for network embedding and uses the Skip-gram model to learn the
representation vectors of nodes. Subsequently, stochastic gradient descent is
applied to optimize our objective function and learn the final representations
of nodes. Our method is scalable: new nodes can be appended to the network
without information loss. Parallelized generation of customized random walks is
also used to speed up CARE. We evaluate the performance of CARE on multi-label
classification and link prediction tasks. Experimental results on various
networks indicate that the proposed method outperforms others in both Micro-
and Macro-F1 measures for different sizes of training data.
| 1 | 0 | 0 | 0 | 0 | 0 |
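The random-walk step underlying Skip-gram-based network embeddings can be sketched briefly. CARE's "customized paths" additionally bias walks with community information, which is not reproduced here; the graph below is a hypothetical toy example and the step is a plain uniform neighbor choice.

```python
import random

# Toy adjacency list (hypothetical, for illustration only).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def random_walk(graph, start, length, rng):
    """Generate one uniform random walk of the given length."""
    walk = [start]
    for _ in range(length - 1):
        # Step to a uniformly chosen neighbor of the current node.
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

rng = random.Random(0)
# One walk per node; such node sequences are what a Skip-gram model
# would then consume as "sentences" to learn node representations.
walks = [random_walk(graph, node, 5, rng) for node in graph]
```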
A combined photometric and kinematic recipe for evaluating the nature of bulges using the CALIFA sample | Understanding the nature of bulges in disc galaxies can provide important
insights into the formation and evolution of galaxies. For instance, the
presence of a classical bulge suggests a relatively violent history; in
contrast, the presence of simply an inner disc (also referred to as a
"pseudobulge") indicates the occurrence of secular evolution processes in the
main disc. However, we still lack criteria to effectively categorise bulges,
limiting our ability to study their impact on the evolution of the host
galaxies. Here we present a recipe to separate inner discs from classical
bulges by combining four different parameters from photometric and kinematic
analyses: The bulge Sérsic index $n_\mathrm{b}$, the concentration index
$C_{20,50}$, the Kormendy (1977) relation and the inner slope of the radial
velocity dispersion profile $\nabla\sigma$. With that recipe we provide a
detailed bulge classification for a sample of 45 galaxies from the
integral-field spectroscopic survey CALIFA. To aid in categorising bulges
within these galaxies, we perform 2D image decomposition to determine bulge
Sérsic index, bulge-to-total light ratio, surface brightness and effective
radius of the bulge and use growth curve analysis to derive a new concentration
index, $C_{20,50}$. We further extract the stellar kinematics from CALIFA data
cubes and analyse the radial velocity dispersion profile. The results of the
different approaches are in good agreement and allow a safe classification for
approximately $95\%$ of the galaxies. In particular, we show that our new
"inner" concentration index performs considerably better than the traditionally
used $C_{50,90}$ at revealing the nature of bulges. We also find that a
combined use of this index and the Kormendy (1977) relation gives a very robust
indication of the physical nature of the bulge.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fast construction of efficient composite likelihood equations | Growth in both size and complexity of modern data challenges the
applicability of traditional likelihood-based inference. Composite likelihood
(CL) methods address the difficulties related to model selection and
computational intractability of the full likelihood by combining a number of
low-dimensional likelihood objects into a single objective function used for
inference. This paper introduces a procedure to combine partial likelihood
objects from a large set of feasible candidates and simultaneously carry out
parameter estimation. The new method constructs estimating equations balancing
statistical efficiency and computing cost by minimizing an approximate distance
from the full likelihood score subject to an L1-norm penalty representing the
available computing resources. This results in truncated CL equations
containing only the most informative partial likelihood score terms. An
asymptotic theory within a framework where both sample size and data dimension
grow is developed and finite-sample properties are illustrated through
numerical examples.
| 0 | 0 | 1 | 1 | 0 | 0 |
State-selective influence of the Breit interaction on the angular distribution of emitted photons following dielectronic recombination | We report a measurement of $KLL$ dielectronic recombination in charge states
from Kr$^{+34}$ through Kr$^{+28}$, in order to investigate the contribution of
the Breit interaction for a wide range of resonant states. Highly charged Kr ions
were produced in an electron beam ion trap, while the electron-ion collision
energy was scanned over a range of dielectronic recombination resonances. The
subsequent $K\alpha$ x rays were recorded both along and perpendicular to the
electron beam axis, which allowed the observation of the influence of the Breit
interaction on the angular distribution of the x rays. Experimental results are
in good agreement with distorted-wave calculations. We demonstrate, both
theoretically and experimentally, that there is a strong state-selective
influence of the Breit interaction that can be traced back to the angular and
radial properties of the wavefunctions in the dielectronic capture.
| 0 | 1 | 0 | 0 | 0 | 0 |
Anti-spoofing Methods for Automatic Speaker Verification Systems | Growing interest in automatic speaker verification (ASV) systems has led to
significant quality improvement of spoofing attacks on them. Many research works
confirm that despite the low equal error rate (EER), ASV systems are still
vulnerable to spoofing attacks. In this work we overview different acoustic
feature spaces and classifiers to determine reliable and robust countermeasures
against spoofing attacks. We compared several spoofing detection systems,
presented so far, on the development and evaluation datasets of the Automatic
Speaker Verification Spoofing and Countermeasures (ASVspoof) Challenge 2015.
Experimental results presented in this paper demonstrate that the use of
combined magnitude and phase information provides a substantial input into
the efficiency of the spoofing detection systems. Also, wavelet-based features
show impressive results in terms of equal error rate. In our overview we compare
spoofing detection performance for systems based on different classifiers. The
comparison results demonstrate that the linear SVM classifier outperforms the
conventional GMM approach. However, many researchers, inspired by the great
success of deep neural network (DNN) approaches in automatic speech recognition,
have applied DNNs to the spoofing detection task and obtained quite low EERs for
known and unknown types of spoofing attacks.
| 1 | 0 | 0 | 1 | 0 | 0 |
Face Deidentification with Generative Deep Neural Networks | Face deidentification is an active topic amongst privacy and security
researchers. Early deidentification methods relying on image blurring or
pixelization were replaced in recent years with techniques based on formal
anonymity models that provide privacy guarantees and at the same time aim at
retaining certain characteristics of the data even after deidentification. The
latter aspect is particularly important, as it makes it possible to exploit the
deidentified data in applications for which identity information is irrelevant.
In this work we present a novel face deidentification pipeline, which ensures
anonymity by synthesizing artificial surrogate faces using generative neural
networks (GNNs). The generated faces are used to deidentify subjects in images
or video, while preserving non-identity-related aspects of the data and
consequently enabling data utilization. Since generative networks are very
adaptive and can utilize a diverse set of parameters (pertaining to the
appearance of the generated output in terms of facial expressions, gender,
race, etc.), they represent a natural choice for the problem of face
deidentification. To demonstrate the feasibility of our approach, we perform
experiments using automated recognition tools and human annotators. Our results
show that the recognition performance on deidentified images is close to
chance, suggesting that the deidentification process based on GNNs is highly
effective.
| 1 | 0 | 0 | 0 | 0 | 0 |
Towards a theory of word order. Comment on "Dependency distance: a new perspective on syntactic patterns in natural language" by Haitao Liu et al | Comment on "Dependency distance: a new perspective on syntactic patterns in
natural language" by Haitao Liu et al
| 1 | 1 | 0 | 0 | 0 | 0 |
Optimal Frequency Ranges for Sub-Microsecond Precision Pulsar Timing | Precision pulsar timing requires optimization against measurement errors and
astrophysical variance from the neutron stars themselves and the interstellar
medium. We investigate optimization of arrival time precision as a function of
radio frequency and bandwidth. We find that increases in bandwidth that reduce
the contribution from receiver noise are countered by the strong chromatic
dependence of interstellar effects and intrinsic pulse-profile evolution. The
resulting optimal frequency range is therefore telescope and pulsar dependent.
We demonstrate the results for five pulsars included in current pulsar timing
arrays and determine that they are not optimally observed at current center
frequencies. For those objects, we find that better choices of total bandwidth
as well as center frequency can improve the arrival-time precision. Wideband
receivers centered at somewhat higher frequencies with respect to the currently
adopted receivers can reduce required overall integration times and provide
significant improvements in arrival time uncertainty by a factor of ~sqrt(2) in
most cases, assuming a fixed integration time. We also discuss how timing
programs can be extended to pulsars with larger dispersion measures through the
use of higher-frequency observations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Submodular Mini-Batch Training in Generative Moment Matching Networks | This article was withdrawn because (1) it was uploaded without the
co-authors' knowledge or consent, and (2) there are allegations of plagiarism.
| 1 | 0 | 0 | 0 | 0 | 0 |
Local Gaussian Processes for Efficient Fine-Grained Traffic Speed Prediction | Traffic speed is a key indicator for the efficiency of an urban
transportation system. Accurate modeling of the spatiotemporally varying
traffic speed thus plays a crucial role in urban planning and development. This
paper addresses the problem of efficient fine-grained traffic speed prediction
using big traffic data obtained from static sensors. Gaussian processes (GPs)
have been previously used to model various traffic phenomena, including flow
and speed. However, GPs do not scale with big traffic data due to their cubic
time complexity. In this work, we address their efficiency issues by proposing
local GPs to learn from and make predictions for correlated subsets of data.
The main idea is to quickly group speed variables in both spatial and temporal
dimensions into a finite number of clusters, so that future and unobserved
traffic speed queries can be heuristically mapped to one of such clusters. A
local GP corresponding to that cluster can then be trained on the fly to make
predictions in real-time. We call this method localization. We use non-negative
matrix factorization for localization and propose simple heuristics for cluster
mapping. We additionally leverage the expressiveness of GP kernel functions
to model road network topology and incorporate side information. Extensive
experiments using real-world traffic data collected in the two U.S. cities of
Pittsburgh and Washington, D.C., show that our proposed local GPs significantly
improve both runtime performances and prediction accuracies compared to the
baseline global and local GPs.
| 1 | 0 | 0 | 0 | 0 | 0 |
Vertex algebras associated with hypertoric varieties | We construct a family of vertex algebras associated with a family of
symplectic singularities/resolutions, called hypertoric varieties. While the
hypertoric varieties are constructed by a certain Hamiltonian reduction
associated with a torus action, our vertex algebras are constructed by
(semi-infinite) BRST reduction. The construction works algebro-geometrically
and we construct sheaves of $\hbar$-adic vertex algebras over hypertoric
varieties which localize the vertex algebras. We show when the vertex algebras
are vertex operator algebras by giving explicit conformal vectors. We also show
that the Zhu algebras of the vertex algebras, associative algebras associated
with non-negatively graded vertex algebras, give a certain family of filtered
quantizations of the coordinate rings of the hypertoric varieties.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bit-Vector Model Counting using Statistical Estimation | Approximate model counting for bit-vector SMT formulas (generalizing \#SAT)
has many applications such as probabilistic inference and quantitative
information-flow security, but it is computationally difficult. Adding random
parity constraints (XOR streamlining) and then checking satisfiability is an
effective approximation technique, but it requires a prior hypothesis about the
model count to produce useful results. We propose an approach inspired by
statistical estimation to continually refine a probabilistic estimate of the
model count for a formula, so that each XOR-streamlined query yields as much
information as possible. We implement this approach, with an approximate
probability model, as a wrapper around an off-the-shelf SMT solver or SAT
solver. Experimental results show that the implementation is faster than the
most similar previous approaches which used simpler refinement strategies. The
technique also lets us model count formulas over floating-point constraints,
which we demonstrate with an application to a vulnerability in differential
privacy mechanisms.
| 1 | 0 | 0 | 0 | 0 | 0 |
Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory | We develop a family of reformulations of an arbitrary consistent linear
system into a stochastic problem. The reformulations are governed by two
user-defined parameters: a positive definite matrix defining a norm, and an
arbitrary discrete or continuous distribution over random matrices. Our
reformulation has several equivalent interpretations, allowing for researchers
from various communities to leverage their domain specific insights. In
particular, our reformulation can be equivalently seen as a stochastic
optimization problem, stochastic linear system, stochastic fixed point problem
and a probabilistic intersection problem. We prove sufficient, and necessary
and sufficient conditions for the reformulation to be exact.
Further, we propose and analyze three stochastic algorithms for solving the
reformulated problem---basic, parallel and accelerated methods---with global
linear convergence rates. The rates can be interpreted as condition numbers of
a matrix which depends on the system matrix and on the reformulation
parameters. This gives rise to a new phenomenon which we call stochastic
preconditioning, and which refers to the problem of finding parameters (matrix
and distribution) leading to a sufficiently small condition number. Our basic
method can be equivalently interpreted as stochastic gradient descent,
stochastic Newton method, stochastic proximal point method, stochastic fixed
point method, and stochastic projection method, with fixed stepsize (relaxation
parameter), applied to the reformulations.
| 1 | 0 | 0 | 1 | 0 | 0 |
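One well-known special case of the "basic method" described in this abstract is randomized Kaczmarz: with the identity norm and uniform row sampling, the stochastic iteration projects onto one randomly chosen equation of a consistent system $Ax = b$. The sketch below uses illustrative toy data, not the paper's general setup.

```python
import numpy as np

# Randomized Kaczmarz as a special case of the stochastic reformulation:
# sample one row of a consistent system and project onto its hyperplane.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_star = rng.standard_normal(10)
b = A @ x_star  # consistent by construction

x = np.zeros(10)
for _ in range(5000):
    i = rng.integers(A.shape[0])       # sample one equation uniformly
    a = A[i]
    x += (b[i] - a @ x) / (a @ a) * a  # orthogonal projection step

residual = np.linalg.norm(x - x_star)  # converges linearly in expectation
```

The expected convergence rate of this special case is governed by a condition number of $A$, matching the abstract's description of the rates as condition numbers of a matrix depending on the system and the reformulation parameters.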
Isotropic-Nematic Phase Transitions in Gravitational Systems | We examine dense self-gravitating stellar systems dominated by a central
potential, such as nuclear star clusters hosting a central supermassive black
hole. Different dynamical properties of these systems evolve on vastly
different timescales. In particular, the orbital-plane orientations are
typically driven into internal thermodynamic equilibrium by vector resonant
relaxation before the orbital eccentricities or semimajor axes relax. We show
that the statistical mechanics of such systems exhibit a striking resemblance
to liquid crystals, with analogous ordered-nematic and disordered-isotropic
phases. The ordered phase consists of bodies orbiting in a disk in both
directions, with the disk thickness depending on temperature, while the
disordered phase corresponds to a nearly isotropic distribution of the orbit
normals. We show that below a critical value of the total angular momentum, the
system undergoes a first-order phase transition between the ordered and
disordered phases. At the critical point the phase transition becomes
second-order while for higher angular momenta there is a smooth crossover. We
also find metastable equilibria containing two identical disks with mutual
inclinations between $90^{\circ}$ and $180^\circ$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Topological Semimetals carrying Arbitrary Hopf Numbers: Hopf-Link, Solomon's-Knot, Trefoil-Knot and Other Semimetals | We propose a new type of Hopf semimetals indexed by a pair of numbers
$(p,q)$, where the Hopf number is given by $pq$. The Fermi surface is given by
the preimage of the Hopf map, which is nontrivially linked for a nonzero Hopf
number. The Fermi surface forms a torus link, whose examples are the Hopf link
indexed by $(1,1)$, the Solomon's knot $(2,1)$, the double Hopf-link $(2,2)$
and the double trefoil-knot $(3,2)$. We may choose $p$ or $q$ as a half
integer, where torus-knot Fermi surfaces such as the trefoil knot $(3/2,1)$ are
realized. It is even possible to make the Hopf number an arbitrary rational
number, where a semimetal whose Fermi surface forms open strings is generated.
| 0 | 1 | 0 | 0 | 0 | 0 |
Online Learning with Abstention | We present an extensive study of the key problem of online learning where
algorithms are allowed to abstain from making predictions. In the adversarial
setting, we show how existing online algorithms and guarantees can be adapted
to this problem. In the stochastic setting, we first point out a bias problem
that limits the straightforward extension of algorithms such as UCB-N to
time-varying feedback graphs, as needed in this context. Next, we give a new
algorithm, UCB-GT, that exploits historical data and is adapted to time-varying
feedback graphs. We show that this algorithm benefits from more favorable
regret guarantees than a possible, but limited, extension of UCB-N. We further
report the results of a series of experiments demonstrating that UCB-GT largely
outperforms that extension of UCB-N, as well as more standard baselines.
| 1 | 0 | 0 | 0 | 0 | 0 |
Stabilization of self-mode-locked quantum dash lasers by symmetric dual-loop optical feedback | We report experimental studies of the influence of symmetric dual-loop
optical feedback on the RF linewidth and timing jitter of self-mode-locked
two-section quantum dash lasers emitting at 1550 nm. Various feedback schemes
were investigated and optimum levels determined for narrowest RF linewidth and
low timing jitter, for single-loop and symmetric dual-loop feedback. Two
symmetric dual-loop configurations, with balanced and unbalanced feedback
ratios, were studied. We demonstrate that unbalanced symmetric dual loop
feedback, with the inner cavity resonant and fine delay tuning of the outer
loop, gives narrowest RF linewidth and reduced timing jitter over a wide range
of delay, unlike single and balanced symmetric dual-loop configurations. This
configuration with feedback lengths 80 and 140 m narrows the RF linewidth by
4-67x and 10-100x, respectively, across the widest delay range, compared to
free-running. For symmetric dual-loop feedback, the influence of different
power split ratios through the feedback loops was determined. Our results show
that symmetric dual-loop feedback is markedly more effective than single-loop
feedback in reducing RF linewidth and timing jitter, and is much less sensitive
to delay phase, making this technique ideal for applications where robustness
and alignment tolerance are essential.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sensitivity Analysis for Mirror-Stratifiable Convex Functions | This paper provides a set of sensitivity analysis and activity identification
results for a class of convex functions with a strong geometric structure that
we have coined "mirror-stratifiable". These functions are such that there is a
bijection between a primal and a dual stratification of the space into
partitioning sets, called strata. This pairing is crucial to track the strata
that are identifiable by solutions of parametrized optimization problems or by
iterates of optimization algorithms. This class of functions encompasses all
regularizers routinely used in signal and image processing, machine learning,
and statistics. We show that this "mirror-stratifiable" structure enjoys a nice
sensitivity theory, allowing us to study stability of solutions of optimization
problems to small perturbations, as well as activity identification of
first-order proximal splitting-type algorithms. Existing results in the
literature typically assume that, under a non-degeneracy condition, the active
set associated to a minimizer is stable to small perturbations and is
identified in finite time by optimization schemes. In contrast, our results do
not require any non-degeneracy assumption: in consequence, the optimal active
set is not necessarily stable anymore, but we are able to track precisely the
set of identifiable strata. We show that these results have crucial implications
when solving challenging ill-posed inverse problems via regularization, a
typical scenario where the non-degeneracy condition is not fulfilled. Our
theoretical results, illustrated by numerical simulations, allow us to
characterize the instability behaviour of the regularized solutions, by
locating the set of all low-dimensional strata that can be potentially
identified by these solutions.
| 0 | 0 | 1 | 1 | 0 | 0 |
Coupling Story to Visualization: Using Textual Analysis as a Bridge Between Data and Interpretation | Online writers and journalism media are increasingly combining visualization
(and other multimedia content) with narrative text to create narrative
visualizations. Often, however, the two elements are presented independently of
one another. We propose an approach to automatically integrate text and
visualization elements. We begin with a writer's narrative that presumably can
be supported with visual data evidence. We leverage natural language
processing, quantitative narrative analysis, and information visualization to
(1) automatically extract narrative components (who, what, when, where) from
data-rich stories, and (2) integrate the supporting data evidence with the text
to develop a narrative visualization. We also employ bidirectional interaction
from text to visualization and visualization to text to support reader
exploration in both directions. We demonstrate the approach with a case study
in the data-rich field of sports journalism.
| 1 | 0 | 0 | 0 | 0 | 0 |
Automatic Prediction of Discourse Connectives | Accurate prediction of suitable discourse connectives (however, furthermore,
etc.) is a key component of any system aimed at building coherent and fluent
discourses from shorter sentences and passages. As an example, a dialog system
might assemble a long and informative answer by sampling passages extracted
from different documents retrieved from the Web. We formulate the task of
discourse connective prediction and release a dataset of 2.9M sentence pairs
separated by discourse connectives for this task. Then, we evaluate the
hardness of the task for human raters, apply a recently proposed decomposable
attention (DA) model to this task and observe that the automatic predictor has
a higher F1 than human raters (32 vs. 30). Nevertheless, under specific
conditions the raters still outperform the DA model, suggesting that there is
headroom for future improvements.
| 1 | 0 | 0 | 0 | 0 | 0 |
Learning to Adapt in Dynamic, Real-World Environments Through Meta-Reinforcement Learning | Although reinforcement learning methods can achieve impressive results in
simulation, the real world presents two major challenges: generating samples is
exceedingly expensive, and unexpected perturbations or unseen situations cause
proficient but specialized policies to fail at test time. Given that it is
impractical to train separate policies to accommodate all situations the agent
may see in the real world, this work proposes to learn how to quickly and
effectively adapt online to new tasks. To enable sample-efficient learning, we
consider learning online adaptation in the context of model-based reinforcement
learning. Our approach uses meta-learning to train a dynamics model prior such
that, when combined with recent data, this prior can be rapidly adapted to the
local context. Our experiments demonstrate online adaptation for continuous
control tasks on both simulated and real-world agents. We first show simulated
agents adapting their behavior online to novel terrains, crippled body parts,
and highly-dynamic environments. We also illustrate the importance of
incorporating online adaptation into autonomous agents that operate in the real
world by applying our method to a real dynamic legged millirobot. We
demonstrate the agent's learned ability to quickly adapt online to a missing
leg, adjust to novel terrains and slopes, account for miscalibration or errors
in pose estimation, and compensate for pulling payloads.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Novel Receiver Design with Joint Coherent and Non-Coherent Processing | In this paper, we propose a novel splitting receiver, which involves joint
processing of coherently and non-coherently received signals. Using a passive
RF power splitter, the received signal at each receiver antenna is split into
two streams which are then processed by a conventional coherent detection (CD)
circuit and a power-detection (PD) circuit, respectively. The streams of the
signals from all the receiver antennas are then jointly used for information
detection. We show that the splitting receiver creates a three-dimensional
received signal space, due to the joint coherent and non-coherent processing.
We analyze the achievable rate of a splitting receiver, which shows that the
splitting receiver provides a rate gain of $3/2$ compared to either the
conventional (CD-based) coherent receiver or the PD-based non-coherent receiver
in the high SNR regime. We also analyze the symbol error rate (SER) for
practical modulation schemes, which shows that the splitting receiver achieves
asymptotic SER reduction by a factor of at least $\sqrt{M}-1$ for $M$-QAM
compared to either the conventional (CD-based) coherent receiver or the
PD-based non-coherent receiver.
| 1 | 0 | 0 | 0 | 0 | 0 |
Comment on the Equality Condition for the I-MMSE Proof of Entropy Power Inequality | The paper establishes the equality condition in the I-MMSE proof of the
entropy power inequality (EPI). This is done by establishing an exact
expression for the deficit between the two sides of the EPI. Interestingly, a
necessary condition for the equality is established by making a connection to
the famous Cauchy functional equation.
| 1 | 0 | 0 | 0 | 0 | 0 |
Feature discovery and visualization of robot mission data using convolutional autoencoders and Bayesian nonparametric topic models | The gap between our ability to collect interesting data and our ability to
analyze these data is growing at an unprecedented rate. Recent algorithmic
attempts to fill this gap have employed unsupervised tools to discover
structure in data. Some of the most successful approaches have used
probabilistic models to uncover latent thematic structure in discrete data.
Despite the success of these models on textual data, they have not generalized
as well to image data, in part because of the spatial and temporal structure
that may exist in an image stream.
We introduce a novel unsupervised machine learning framework that
incorporates the ability of convolutional autoencoders to discover features
from images that directly encode spatial information, within a Bayesian
nonparametric topic model that discovers meaningful latent patterns within
discrete data. By using this hybrid framework, we overcome the fundamental
dependency of traditional topic models on rigidly hand-coded data
representations, while simultaneously encoding spatial dependency in our topics
without adding model complexity. We apply this model to the motivating
application of high-level scene understanding and mission summarization for
exploratory marine robots. Our experiments on a seafloor dataset collected by a
marine robot show that the proposed hybrid framework outperforms current
state-of-the-art approaches on the task of unsupervised seafloor terrain
characterization.
| 1 | 0 | 0 | 1 | 0 | 0 |
Multiplicative models for frequency data, estimation and testing | This paper is about models for a vector of probabilities whose elements must
have a multiplicative structure and sum to 1 at the same time; in certain
applications, such as basket analysis, these models may be seen as a constrained
version of quasi-independence. After reviewing the basic properties of these
models, their geometric features as a curved exponential family are
investigated. A new algorithm for computing maximum likelihood estimates is
presented and new insights are provided on the underlying geometry. The
asymptotic distributions of three statistics for hypothesis testing are derived
and a small simulation study is presented to investigate the accuracy of
asymptotic approximations.
| 0 | 0 | 1 | 1 | 0 | 0 |
An Efficient Keyless Fragmentation Algorithm for Data Protection | The family of Information Dispersal Algorithms is applied to distributed
systems for secure and reliable storage and transmission. In comparison with
perfect secret sharing it achieves a significantly smaller memory overhead and
better performance, but provides only incremental confidentiality. Therefore,
even if it is not possible to explicitly reconstruct data from less than the
required amount of fragments, it is still possible to deduce some information
about the nature of data by looking at preserved data patterns inside a
fragment. The idea behind this paper is to provide a lightweight data
fragmentation scheme that combines the space efficiency and simplicity
found in Information Dispersal Algorithms with a computational level of data
confidentiality.
| 1 | 0 | 0 | 0 | 0 | 0 |
Schramm--Loewner-evolution-type growth processes corresponding to Wess--Zumino--Witten theories | A group theoretical formulation of Schramm--Loewner-evolution-type growth
processes corresponding to Wess--Zumino--Witten theories is developed that
makes it possible to construct stochastic differential equations associated
with more general null vectors than the ones considered in the most fundamental
example in [Alekseev et al., Lett. Math. Phys. 97, 243-261 (2011)]. Also given
are examples of Schramm--Loewner-evolution-type growth processes associated
with null vectors of conformal weight $4$ in the basic representations of
$\widehat{\mathfrak{sl}}_{2}$ and $\widehat{\mathfrak{sl}}_{3}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Explicit Commutativity Conditions for Second-order Linear Time-Varying Systems with Non-Zero Initial Conditions | Although explicit commutativity conditions for second-order linear
time-varying systems have appeared in the literature, these are all for
initially relaxed systems. This paper presents explicit necessary and
sufficient commutativity conditions for commutativity of second-order linear
time-varying systems with non-zero initial conditions. Interestingly, the
second requirement for the commutativity of non-relaxed systems plays an
important role in the commutativity conditions when non-zero
initial conditions exist. Another highlight is that the commutativity of
switched systems is considered and spoiling of commutativity at the switching
instants is illustrated for the first time. The simulation results support the
theory developed in the paper.
| 1 | 0 | 0 | 0 | 0 | 0 |
Energy Acceptance of the St. George Recoil Separator | Radiative alpha-capture, ($\alpha,\gamma$), reactions play a critical role in
nucleosynthesis and nuclear energy generation in a variety of astrophysical
environments. The St. George recoil separator at the University of Notre Dame's
Nuclear Science Laboratory was developed to measure ($\alpha,\gamma$) reactions
in inverse kinematics via recoil detection in order to obtain nuclear reaction
cross sections at the low energies of astrophysical interest, while avoiding
the $\gamma$-background that plagues traditional measurement techniques. Due to
the $\gamma$-ray produced by the nuclear reaction at the target location,
recoil nuclei are produced with a variety of energies and angles, all of which
must be accepted by St. George in order to accurately determine the reaction
cross section. We demonstrate the energy acceptance of the St. George recoil
separator using primary beams of helium, hydrogen, neon, and oxygen, spanning
the magnetic and electric rigidity phase space populated by recoils of
anticipated ($\alpha,\gamma$) reaction measurements. We found the performance
of St. George meets the design specifications, demonstrating its suitability
for ($\alpha,\gamma$) reaction measurements of astrophysical interest.
| 0 | 1 | 0 | 0 | 0 | 0 |
Topological and Algebraic Characterizations of Gallai-Simplicial Complexes | We first recall the Gallai-simplicial complex $\Delta_{\Gamma}(G)$ associated to
the Gallai graph $\Gamma(G)$ of a planar graph $G$. The Euler characteristic is a
very useful topological and homotopy invariant for classifying surfaces. In
Theorems 3.2 and 3.4, we compute Euler characteristics of Gallai-simplicial
complexes associated to triangular ladder and prism graphs, respectively.
Let $G$ be a finite simple graph on $n$ vertices of the form $n=3l+2$ or
$3l+3$. In Theorem 4.4, we prove that $G$ will be an $f$-Gallai graph for the
following types of constructions of $G$.
Type 1. When $n=3l+2$. $G=\mathbb{S}_{4l}$ is a graph consisting of two
copies of star graphs $S_{2l}$ and $S'_{2l}$ with $l\geq 2$ having $l$ common
vertices.
Type 2. When $n=3l+3$. $G=\mathbb{S}_{4l+1}$ is a graph consisting of two
star graphs $S_{2l}$ and $S_{2l+1}$ with $l\geq 2$ having $l$ common vertices.
| 0 | 0 | 1 | 0 | 0 | 0 |
Fairness risk measures | Ensuring that classifiers are non-discriminatory or fair with respect to a
sensitive feature (e.g., race or gender) is a topical problem. Progress in this
task requires fixing a definition of fairness, and there have been several
proposals in this regard over the past few years. Several of these, however,
assume either binary sensitive features (thus precluding categorical or
real-valued sensitive groups), or result in non-convex objectives (thus
adversely affecting the optimisation landscape). In this paper, we propose a
new definition of fairness that generalises some existing proposals, while
allowing for generic sensitive features and resulting in a convex objective.
The key idea is to enforce that the expected losses (or risks) across each
subgroup induced by the sensitive feature are commensurate. We show how this
relates to the rich literature on risk measures from mathematical finance. As a
special case, this leads to a new convex fairness-aware objective based on
minimising the conditional value at risk (CVaR).
| 1 | 0 | 0 | 1 | 0 | 0 |
On-line tracing of XACML-based policy coverage criteria | Currently, the eXtensible Access Control Markup Language (XACML) has become the
standard for implementing access control policies, and consequently more
attention is dedicated to testing the correctness of XACML policies. In
particular, coverage measures can be adopted for assessing test strategy
effectiveness in exercising the policy elements. This study introduces a set of
XACML coverage criteria and describes the access control infrastructure, based
on a monitor engine, enabling the coverage criterion selection and the on-line
tracing of the testing activity. Examples of infrastructure usage and of
assessment of different test strategies are provided.
| 1 | 0 | 0 | 0 | 0 | 0 |
The square lattice Ising model on the rectangle II: Finite-size scaling limit | Based on the results published recently [J. Phys. A: Math. Theor. 50, 065201
(2017)], the universal finite-size contributions to the free energy of the
square lattice Ising model on the $L\times M$ rectangle, with open boundary
conditions in both directions, are calculated exactly in the finite-size
scaling limit $L,M\to\infty$, $T\to T_\mathrm{c}$, with fixed temperature
scaling variable $x\propto(T/T_\mathrm{c}-1)M$ and fixed aspect ratio
$\rho\propto L/M$. We derive exponentially fast converging series for the
related Casimir potential and Casimir force scaling functions. At the critical
point $T=T_\mathrm{c}$ we confirm predictions from conformal field theory by
Cardy & Peschel [Nucl. Phys. B 300, 377 (1988)] and by Kleban & Vassileva [J.
Phys. A: Math. Gen. 24, 3407 (1991)]. The presence of corners and the related
corner free energy has dramatic impact on the Casimir scaling functions and
leads to a logarithmic divergence of the Casimir potential scaling function at
criticality.
| 0 | 1 | 1 | 0 | 0 | 0 |
Using MRI Cell Tracking to Monitor Immune Cell Recruitment in Response to a Peptide-Based Cancer Vaccine | Purpose: MRI cell tracking can be used to monitor immune cells involved in
the immunotherapy response, providing insight into the mechanism of action,
temporal progression of tumour growth and individual potency of therapies. To
evaluate whether MRI could be used to track immune cell populations in response
to immunotherapy, CD8+ cytotoxic T cells (CTLs), CD4+CD25+FoxP3+ regulatory T
cells (Tregs) and myeloid derived suppressor cells (MDSCs) were labelled with
superparamagnetic iron oxide (SPIO) particles.
Methods: SPIO-labelled cells were injected into mice (one cell type/mouse)
implanted with an HPV-based cervical cancer model. Half of these mice were also
vaccinated with DepoVaxTM, a lipid-based vaccine platform that was developed to
enhance the potency of peptide-based vaccines.
Results: MRI visualization of CTLs, Tregs and MDSCs was apparent 24 hours
post-injection, with hypointensities due to iron labelled cells clearing
approximately 72 hours post-injection. Vaccination resulted in increased
recruitment of CTLs and decreased recruitment of MDSCs and Tregs to the tumour.
We also found that MDSC and Treg recruitment was positively correlated with
final tumour volume.
Conclusion: This type of analysis can be used to non-invasively study changes
in immune cell recruitment in individual mice over time, potentially allowing
improved application and combination of immunotherapies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Entropic Spectral Learning in Large Scale Networks | We present a novel algorithm for learning the spectral density of large scale
networks using stochastic trace estimation and the method of maximum entropy.
The complexity of the algorithm is linear in the number of non-zero elements of
the matrix, offering a computational advantage over other algorithms. We apply
our algorithm to the problem of community detection in large networks. We show
state-of-the-art performance on both synthetic and real datasets.
| 0 | 0 | 0 | 1 | 0 | 0 |
On Green's proof of infinitesimal Torelli theorem for hypersurfaces | We prove an equivalence between the infinitesimal Torelli theorem for top
forms on a hypersurface contained inside a Grassmannian $\mathbb G$ and the
theory of adjoint volume forms presented in L. Rizzi, F. Zucconi, "Generalized
adjoint forms on algebraic varieties", Ann. Mat. Pura e Applicata, in press.
More precisely, via this theory and a suitable generalization of Macaulay's
theorem we show that the differential of the period map vanishes on an
infinitesimal deformation if and only if certain explicitly given twisted
volume forms go in the generalized Jacobi ideal of $X$ via the cup product
homomorphism.
| 0 | 0 | 1 | 0 | 0 | 0 |
Gentle heating by mixing in cooling flow clusters | We analyze three-dimensional hydrodynamical simulations of the interaction of
jets and the bubbles they inflate with the intra-cluster medium (ICM), and show
that the heating of the ICM by mixing hot bubble gas with the ICM operates over
tens of millions of years, and hence can smooth the sporadic activity of the
jets. The inflation process of hot bubbles by propagating jets forms many
vortices, and these vortices mix the hot bubble gas with the ICM. The mixing,
hence the heating of the ICM, starts immediately after the jets are launched,
but continues for tens of millions of years. We suggest that the smoothing of
the active galactic nucleus (AGN) sporadic activity by the long-lived vortices
accounts for the recent finding of a gentle energy coupling between AGN heating
and the ICM.
| 0 | 1 | 0 | 0 | 0 | 0 |
Cutting-off Redundant Repeating Generations for Neural Abstractive Summarization | This paper tackles the reduction of redundant repeating generation that is
often observed in RNN-based encoder-decoder models. Our basic idea is to
jointly estimate the upper-bound frequency of each target vocabulary in the
encoder and control the output words based on the estimation in the decoder.
Our method shows significant improvement over a strong RNN-based
encoder-decoder baseline and achieved its best results on an abstractive
summarization benchmark.
| 1 | 0 | 0 | 1 | 0 | 0 |
Some exercises with the Lasso and its compatibility constant | We consider the Lasso for a noiseless experiment where one has observations
$X \beta^0$ and uses the penalized version of basis pursuit. We compute for
some special designs the compatibility constant, a quantity closely related to
the restricted eigenvalue. We moreover show the dependence of the (penalized)
prediction error on this compatibility constant. This exercise illustrates that
compatibility necessarily enters into the bounds for the (penalized)
prediction error and that the bounds in the literature therefore are - up to
constants - tight. We also give conditions that show that in the noisy case the
dominating term for the prediction error is given by the prediction error of
the noiseless case.
| 0 | 0 | 1 | 1 | 0 | 0 |
Leveraging Sensory Data in Estimating Transformer Lifetime | Transformer lifetime assessment plays a vital role in reliable operation of
power systems. In this paper, leveraging sensory data, an approach in
estimating transformer lifetime is presented. The winding hottest-spot
temperature, which is the pivotal driver that impacts transformer aging, is
measured hourly via a temperature sensor, then transformer loss of life is
calculated based on the IEEE Std. C57.91-2011. A Cumulative Moving Average
(CMA) model is subsequently applied to the data stream of the transformer loss
of life to provide hourly estimates until convergence. Numerical examples
demonstrate the effectiveness of the proposed approach for the transformer
lifetime estimation, and explore its efficiency and practical merits.
| 1 | 0 | 0 | 0 | 0 | 0 |
Spatial Projection of Multiple Climate Variables using Hierarchical Multitask Learning | Future projection of climate is typically obtained by combining outputs from
multiple Earth System Models (ESMs) for several climate variables such as
temperature and precipitation. While IPCC has traditionally used a simple model
output average, recent work has illustrated potential advantages of using a
multitask learning (MTL) framework for projections of individual climate
variables. In this paper we introduce a framework for hierarchical multitask
learning (HMTL) with two levels of tasks such that each super-task, i.e., task
at the top level, is itself a multitask learning problem over sub-tasks. For
climate projections, each super-task focuses on projections of specific climate
variables spatially using an MTL formulation. For the proposed HMTL approach, a
group lasso regularization is added to couple parameters across the
super-tasks, which in the climate context helps exploit relationships among the
behavior of different climate variables at a given spatial location. We show
that some recent works on MTL based on learning task dependency structures can
be viewed as special cases of HMTL. Experiments on synthetic and real climate
data show that HMTL produces better results than decoupled MTL methods applied
separately on the super-tasks and HMTL significantly outperforms baselines for
climate projection.
| 1 | 0 | 0 | 1 | 0 | 0 |
Two-dimensional plasmons in the random impedance network model of disordered thin-film nanocomposites | Random impedance networks are widely used as a model to describe plasmon
resonances in disordered metal-dielectric nanocomposites. In order to study
thin films, two-dimensional networks are often used despite the fact that such
networks correspond to a two-dimensional electrodynamics [J.P. Clerc et al, J.
Phys. A 29, 4781 (1996)]. In the present work, we propose a model of
two-dimensional systems with three-dimensional Coulomb interaction and show
that this model is equivalent to a planar network with long-range capacitive
connections between sites. In a case of a metal film, we get a known dispersion
$\omega \propto \sqrt{k}$ of plane-wave two-dimensional plasmons. In the
framework of the proposed model, we study the evolution of resonances with
decreasing metal filling factor. In the subcritical region with metal
filling $p$ lower than the percolation threshold $p_c$, we observe a gap with
Lifshitz tails in the spectral density of states (DOS). In the supercritical
region $p>p_c$, the DOS demonstrates a crossover between plane-wave
two-dimensional plasmons and resonances associated with small clusters.
| 0 | 1 | 0 | 0 | 0 | 0 |
K-edge subtraction vs. A-space processing for x-ray imaging of contrast agents: SNR | Purpose: To compare two methods that use x-ray spectral information to image
externally administered contrast agents: K-edge subtraction and basis-function
decomposition (the A-space method), Methods: The K-edge method uses narrow band
x-ray spectra with energies infinitesimally below and above the contrast
material K-edge energy. The A-space method uses a broad spectrum x-ray tube
source and measures the transmitted spectrum with photon counting detectors
with pulse height analysis. The methods are compared by their signal to noise
ratio (SNR) divided by the patient dose for an imaging task to decide whether
contrast material is present in a soft tissue background. The performance with
iodine or gadolinium containing contrast material is evaluated as a function of
object thickness and the x-ray tube voltage of the A-space method. Results: For
tube voltages above 60 kV and soft tissue thicknesses from 5 to 25 g/cm^2,
the A-space method has a larger SNR per dose than the K-edge subtraction method
for either iodine or gadolinium containing contrast agent. Conclusion: Even
with the unrealistic spectra assumed for the K-edge method, the A-space method
has a substantially larger SNR per patient dose.
| 0 | 1 | 0 | 0 | 0 | 0 |
Recall Traces: Backtracking Models for Efficient Reinforcement Learning | In many environments only a tiny subset of all states yield high reward. In
these cases, few of the interactions with the environment provide a relevant
learning signal. Hence, we may want to preferentially train on those
high-reward states and the probable trajectories leading to them. To this end,
we advocate for the use of a backtracking model that predicts the preceding
states that terminate at a given high-reward state. We can train a model which,
starting from a high value state (or one that is estimated to have high value),
predicts and samples the (state, action) tuples that may have led to that
high-value state. These traces of (state, action) pairs, which we refer to as
Recall Traces, sampled from this backtracking model starting from a high value
state, are informative as they terminate in good states, and hence we can use
these traces to improve a policy. We provide a variational interpretation for
this idea and a practical algorithm in which the backtracking model samples
from an approximate posterior distribution over trajectories which lead to
large rewards. Our method improves the sample efficiency of both on- and
off-policy RL algorithms across several environments and tasks.
| 0 | 0 | 0 | 1 | 0 | 0 |
Path-integral formalism for stochastic resetting: Exactly solved examples and shortcuts to confinement | We study the dynamics of overdamped Brownian particles diffusing in
conservative force fields and undergoing stochastic resetting to a given
location with a generic space-dependent rate of resetting. We present a
systematic approach involving path integrals and elements of renewal theory
that allows us to derive analytical expressions for a variety of statistics of the
dynamics such as (i) the propagator prior to first reset; (ii) the distribution
of the first-reset time, and (iii) the spatial distribution of the particle at
long times. We apply our approach to several representative and hitherto
unexplored examples of resetting dynamics. A particularly interesting example
for which we find analytical expressions for the statistics of resetting is
that of a Brownian particle trapped in a harmonic potential with a rate of
resetting that depends on the instantaneous energy of the particle. We find
that using energy-dependent resetting processes is more effective in achieving
spatial confinement of Brownian particles on a faster timescale than by
performing quenches of parameters of the harmonic potential.
| 0 | 1 | 0 | 0 | 0 | 0 |
A strongly convergent numerical scheme from Ensemble Kalman inversion | The Ensemble Kalman methodology in an inverse problems setting can be viewed
as an iterative scheme, which is a weakly tamed discretization scheme for a
certain stochastic differential equation (SDE). Assuming a suitable
approximation result, dynamical properties of the SDE can be rigorously pulled
back via the discrete scheme to the original Ensemble Kalman inversion.
The results of this paper make a step towards closing the gap of the missing
approximation result by proving a strong convergence result in a simplified
model of a scalar stochastic differential equation. We focus here on a toy
model with properties similar to the one arising in the context of the Ensemble
Kalman filter. The proposed model can be interpreted as a single particle
filter for a linear map and thus forms the basis for further analysis. The
difficulty in the analysis arises from the formally derived limiting SDE with
non-globally Lipschitz continuous nonlinearities both in the drift and in the
diffusion. Here the standard Euler-Maruyama scheme might fail to provide a
strongly convergent numerical scheme and taming is necessary. In contrast to
the strong taming usually used, the method presented here provides a weaker
form of taming.
We present a strong convergence analysis by first proving convergence on a
domain of high probability using a cut-off or localisation, which then,
combined with bounds on moments for both the SDE and the numerical scheme,
leads by a bootstrapping argument to strong convergence.
| 0 | 0 | 1 | 0 | 0 | 0 |
Attenuation correction for brain PET imaging using deep neural network based on dixon and ZTE MR images | Positron Emission Tomography (PET) is a functional imaging modality widely
used in neuroscience studies. To obtain meaningful quantitative results from
PET images, attenuation correction is necessary during image reconstruction.
For PET/MR hybrid systems, PET attenuation correction is challenging as Magnetic Resonance
(MR) images do not reflect attenuation coefficients directly. To address this
issue, we present deep neural network methods to derive the continuous
attenuation coefficients for brain PET imaging from MR images. With only Dixon
MR images as the network input, the existing U-net structure was adopted and
analysis using forty patient data sets shows it is superior to other
Dixon-based methods. When both Dixon and zero echo time (ZTE) images are available,
we have proposed a modified U-net structure, named GroupU-net, to efficiently
make use of both Dixon and ZTE information through group convolution modules
when the network goes deeper. Quantitative analysis based on fourteen real
patient data sets demonstrates that both network approaches can perform better
than the standard methods, and the proposed network structure can further
reduce the PET quantification error compared to the U-net structure.
| 0 | 1 | 0 | 1 | 0 | 0 |
Resolving Local Electrochemistry at the Nanoscale via Electrochemical Strain Microscopy: Modeling and Experiments | Electrochemistry is the underlying mechanism in a variety of energy
conversion and storage systems, and it is well known that the composition,
structure, and properties of electrochemical materials near active interfaces
often deviate substantially and inhomogeneously from the bulk properties. A
universal challenge facing the development of electrochemical systems is our
lack of understanding of physical and chemical rates at local length scales,
and the recently developed electrochemical strain microscopy (ESM) provides a
promising method to probe crucial local information regarding the underlying
electrochemical mechanisms. Here we develop a computational model that couples
mechanics and electrochemistry relevant for ESM experiments, with the goal to
enable quantitative analysis of electrochemical processes underneath a charged
scanning probe. We show that the model captures the essence of a number of
different ESM experiments, making it possible to de-convolute local ionic
concentration and diffusivity via combined ESM mapping, spectroscopy, and
relaxation studies. Through the combination of ESM experiments and
computations, it is thus possible to obtain deep insight into the local
electrochemistry at the nanoscale.
| 0 | 1 | 0 | 0 | 0 | 0 |
Genuine equivariant operads | We build new algebraic structures, which we call genuine equivariant operads,
which can be thought of as a hybrid between equivariant operads and coefficient
systems. We then prove an Elmendorf-Piacenza type theorem stating that
equivariant operads, with their graph model structure, are equivalent to
genuine equivariant operads, with their projective model structure.
As an application, we build explicit models for the $N_{\infty}$-operads of
Blumberg and Hill.
| 0 | 0 | 1 | 0 | 0 | 0 |
A linear-time algorithm for the maximum-area inscribed triangle in a convex polygon | Given the n vertices of a convex polygon P in cyclic order, can the triangle of
maximum area inscribed in P be determined by an algorithm with O(n) time
complexity? A purported linear-time algorithm by Dobkin and Snyder from 1979
has recently been shown to be incorrect by Keikha, Löffler, Urhausen, and van
der Hoog. These authors give an alternative algorithm with O(n log n) time
complexity. Here we give an algorithm with linear time complexity.
| 1 | 0 | 1 | 0 | 0 | 0 |
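For reference, the maximum-area inscribed triangle on small inputs can be found with a straightforward O(n^3) brute force, sketched below as a baseline; this is not the linear-time algorithm of the paper:

```python
from itertools import combinations

def tri_area(a, b, c):
    # Absolute triangle area via the cross product of edge vectors.
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (b[1] - a[1]) * (c[0] - a[0])) / 2.0

def max_area_triangle_bruteforce(pts):
    """O(n^3) reference answer: try every triple of polygon vertices.
    (The maximum-area inscribed triangle always has its vertices at
    polygon vertices, so enumerating triples suffices.)"""
    return max(combinations(pts, 3), key=lambda t: tri_area(*t))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
best = max_area_triangle_bruteforce(square)
```

A cubic-time check like this is the standard way to validate a candidate linear-time implementation on random convex polygons.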
Estimation of the infinitesimal generator by square-root approximation | For the analysis of molecular processes, the estimation of time-scales, i.e.,
transition rates, is very important. Estimating the transition rates between
molecular conformations is -- from a mathematical point of view -- an invariant
subspace projection problem. A certain infinitesimal generator acting on
function space is projected to a low-dimensional rate matrix. This projection
can be performed in two steps. First, the infinitesimal generator is
discretized; then the invariant subspace is approximated and used for the
subspace projection. In our approach, the discretization will be based on a
Voronoi tessellation of the conformational space. We will show that the
discretized infinitesimal generator can simply be approximated by the geometric
average of the Boltzmann weights of the Voronoi cells. Thus, there is a direct
correlation between the potential energy surface of molecular structures and
the transition rates of conformational changes. We present results for a
2D diffusion process and alanine dipeptide.
| 0 | 1 | 0 | 0 | 0 | 0 |
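The geometric-average rule described above can be sketched for a 1D chain of Voronoi cells; the chain adjacency and the rate prefactor `flux` are simplifying assumptions for illustration:

```python
import math

def sqra_generator(energies, beta=1.0, flux=1.0):
    """Square-root approximation of the generator on a 1D chain of cells.

    The rate from cell i to an adjacent cell j is proportional to the
    geometric average of the Boltzmann weights,
    sqrt(pi_j / pi_i) = exp(-beta * (E_j - E_i) / 2),
    and the diagonal is fixed by requiring zero row sums."""
    n = len(energies)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in (i - 1, i + 1):          # chain adjacency (illustrative)
            if 0 <= j < n:
                Q[i][j] = flux * math.exp(-beta * (energies[j] - energies[i]) / 2.0)
        Q[i][i] = -sum(Q[i][k] for k in range(n) if k != i)
    return Q

E = [0.0, 1.0, 0.5, 2.0]   # toy potential energies of four Voronoi cells
Q = sqra_generator(E)
```

By construction the chain is reversible with respect to the Boltzmann distribution pi_i proportional to exp(-beta*E_i), i.e. pi_i Q_ij = pi_j Q_ji, which is the link between the energy surface and the transition rates.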
Korea Microlensing Telescope Network Microlensing Events from 2015: Event-Finding Algorithm, Vetting, and Photometry | We present microlensing events in the 2015 Korea Microlensing Telescope
Network (KMTNet) data and our procedure for identifying these events. In
particular, candidates were detected with a novel "completed event"
microlensing event-finder algorithm. The algorithm works by making linear fits
to a (t0,teff,u0) grid of point-lens microlensing models. This approach is
rendered computationally efficient by restricting u0 to just two values (0 and
1), which we show is quite adequate. The implementation presented here is
specifically tailored to the commissioning-year character of the 2015 data, but
the algorithm is quite general and has already been applied to a completely
different (non-KMTNet) data set. We outline expected improvements for 2016 and
future KMTNet data. The light curves of the 660 "clear microlensing" and 182
"possible microlensing" events that were found in 2015 are presented along with
our policy for their public release.
| 0 | 1 | 0 | 0 | 0 | 0 |
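The linear-fit step of such an event finder can be illustrated as follows: for a fixed grid point (t0, teff, u0), the model flux F = fs*A + fb is linear in the source flux fs and the blend fb, so a 2x2 normal-equation solve suffices. The magnification template and parameter values below are illustrative (u0 = 1 keeps the magnification finite at the peak):

```python
import math

def magnification(u):
    # Point-lens magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)).
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def linear_fit_fs_fb(times, fluxes, t0, teff, u0):
    """For one fixed (t0, teff, u0) grid point the model F = fs * A + fb
    is linear in (fs, fb); solve the 2x2 normal equations directly."""
    A = [magnification(math.hypot(u0, (t - t0) / teff)) for t in times]
    n = len(times)
    saa = sum(a * a for a in A)
    sa = sum(A)
    saf = sum(a * f for a, f in zip(A, fluxes))
    sf = sum(fluxes)
    det = saa * n - sa * sa
    fs = (saf * n - sa * sf) / det
    fb = (saa * sf - sa * saf) / det
    return fs, fb

# Synthetic light curve generated from a known (fs, fb) = (2.0, 0.5).
t0, teff, u0 = 0.0, 10.0, 1.0
times = [0.5 * (k - 40) for k in range(81)]
fluxes = [2.0 * magnification(math.hypot(u0, (t - t0) / teff)) + 0.5
          for t in times]
fs, fb = linear_fit_fs_fb(times, fluxes, t0, teff, u0)
```

Because the fit at each grid point is linear, scanning a dense (t0, teff, u0) grid stays computationally cheap, which is the point of the event-finder design.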
Caveat Emptor, Computational Social Science: Large-Scale Missing Data in a Widely-Published Reddit Corpus | As researchers use computational methods to study complex social behaviors at
scale, the validity of this computational social science depends on the
integrity of the data. On July 2, 2015, Jason Baumgartner published a dataset
advertised to include "every publicly available Reddit comment", which was
quickly shared on Bittorrent and the Internet Archive. This data quickly became
the basis of many academic papers on topics including machine learning, social
behavior, politics, breaking news, and hate speech. We have discovered
substantial gaps and limitations in this dataset which may contribute to bias
in the findings of that research. In this paper, we document the dataset,
substantial missing observations in the dataset, and the risks to research
validity from those gaps. In summary, we identify strong risks to research that
considers user histories or network analysis, moderate risks to research that
compares counts of participation, and lesser risk to machine learning research
that avoids making representative claims about behavior and participation on
Reddit.
| 1 | 0 | 0 | 0 | 0 | 0 |
Skeleton-Based Action Recognition Using Spatio-Temporal LSTM Network with Trust Gates | Skeleton-based human action recognition has attracted a lot of research
attention during the past few years. Recent works attempted to utilize
recurrent neural networks to model the temporal dependencies between the 3D
positional configurations of human body joints for better analysis of human
activities in the skeletal data. The proposed work extends this idea to the
spatial domain as well as the temporal domain, analyzing the hidden sources of
action-related information within human skeleton sequences in both domains
simultaneously. Based on the pictorial structure of Kinect's skeletal
data, an effective tree-structure based traversal framework is also proposed.
In order to deal with the noise in the skeletal data, a new gating mechanism
within LSTM module is introduced, with which the network can learn the
reliability of the sequential data and accordingly adjust the effect of the
input data on the updating procedure of the long-term context representation
stored in the unit's memory cell. Moreover, we introduce a novel multi-modal
feature fusion strategy within the LSTM unit in this paper. The comprehensive
experimental results on seven challenging benchmark datasets for human action
recognition demonstrate the effectiveness of the proposed method.
| 1 | 0 | 0 | 0 | 0 | 0 |
Interpolation and Extrapolation of Toeplitz Matrices via Optimal Mass Transport | In this work, we propose a novel method for quantifying distances between
Toeplitz structured covariance matrices. By exploiting the spectral
representation of Toeplitz matrices, the proposed distance measure is defined
based on an optimal mass transport problem in the spectral domain. This may
then be interpreted in the covariance domain, suggesting a natural way of
interpolating and extrapolating Toeplitz matrices, such that the positive
semi-definiteness and the Toeplitz structure of these matrices are preserved.
The proposed distance measure is also shown to be contractive with respect to
both additive and multiplicative noise, and thereby allows for a quantification
of the decreased distance between signals when these are corrupted by noise.
Finally, we illustrate how this approach can be used for several applications
in signal processing. In particular, we consider interpolation and
extrapolation of Toeplitz matrices, as well as clustering problems and tracking
of slowly varying stochastic processes.
| 0 | 0 | 1 | 1 | 0 | 0 |
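The spectral-representation idea above can be illustrated: any nonnegative, symmetric spectrum maps to a positive semi-definite Toeplitz covariance, so interpolating spectra preserves both properties. Here a plain linear blend stands in for the optimal-transport interpolant of the paper, and the two toy spectra are our own:

```python
import numpy as np

def toeplitz_from_spectrum(phi, thetas, n):
    """Covariances r_m = mean_k phi_k cos(m theta_k) for a nonnegative,
    symmetric spectrum phi; the resulting Toeplitz matrix is PSD because
    it is a nonnegative combination of rank-2 PSD terms."""
    r = [float(np.mean(phi * np.cos(m * thetas))) for m in range(n)]
    return np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])

thetas = np.linspace(-np.pi, np.pi, 401)
phi_a = np.exp(-thetas ** 2)                  # one toy spectrum
phi_b = np.exp(-(np.abs(thetas) - 1.0) ** 2)  # another toy spectrum
# Plain linear blend in the spectral domain (stand-in for the OT interpolant):
phi_mid = 0.5 * phi_a + 0.5 * phi_b
T = toeplitz_from_spectrum(phi_mid, thetas, 6)
```

Working in the spectral domain is what makes interpolation and extrapolation safe: structure constraints that are awkward in the matrix domain become simple nonnegativity constraints on the spectrum.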
Performance of time delay estimation in a cognitive radar | A cognitive radar adapts the transmit waveform in response to changes in the
radar and target environment. In this work, we analyze the recently proposed
sub-Nyquist cognitive radar wherein the total transmit power in a multi-band
cognitive waveform remains the same as its full-band conventional counterpart.
For such a system, we derive lower bounds on the mean-squared-error (MSE) of a
single-target time delay estimate. We formulate a procedure to select the
optimal bands, and recommend distribution of the total power in different bands
to enhance the accuracy of delay estimation. In particular, using Cramér-Rao
bounds, we show that equi-width subbands in cognitive radar always yield better
delay estimation than the conventional radar. Further analysis using the Ziv-Zakai
bound reveals that cognitive radar performs well in low signal-to-noise (SNR)
regions.
| 1 | 0 | 1 | 0 | 0 | 0 |
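The role of band placement can be illustrated via the mean-square (Gabor) bandwidth, since the Cramér-Rao bound on delay variance scales, up to constants, like 1/(SNR * beta^2); pushing the same total power toward the band edges increases beta^2. The grids and band choices below are illustrative:

```python
def ms_bandwidth(freqs, psd):
    """Mean-square (Gabor) bandwidth beta^2 = sum(f^2 S(f)) / sum(S(f)).
    A larger beta^2 means a smaller Cramer-Rao bound on the delay
    estimate, i.e. potentially sharper time-delay estimation."""
    return sum(f * f * s for f, s in zip(freqs, psd)) / sum(psd)

B = 1.0
freqs = [-B / 2 + B * k / 1000 for k in range(1001)]
# Full band: flat spectrum. Multiband: same total power, pushed to the edges.
full = [1.0] * len(freqs)
edges = [1.0 if abs(f) > 0.4 * B else 0.0 for f in freqs]
```

For the flat full-band spectrum beta^2 is about B^2/12, while concentrating the power near the band edges raises beta^2, matching the intuition that cognitive band selection can beat the conventional full-band waveform at the same total power.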
RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate GAIL-learned policies in terms of the expert's cost function and
observe that the distribution of trajectory costs is often more heavy-tailed
for GAIL-agents than for the expert on a number of benchmark
continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
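The tail-risk measure used above can be sketched as an empirical CVaR over a batch of trajectory costs (a minimal version; the RAIL algorithm optimizes CVaR inside the GAIL objective rather than merely measuring it):

```python
def cvar(costs, alpha):
    """Empirical CVaR_alpha: the mean of the worst alpha-fraction of costs."""
    tail = sorted(costs, reverse=True)
    k = max(1, int(round(alpha * len(costs))))
    return sum(tail[:k]) / k
```

For example, with costs [1, 2, 3, 4], CVaR at alpha = 0.5 averages the two worst trajectories. Penalizing this quantity explicitly discourages the catastrophic tail-end trajectories the abstract describes.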
Nucleation and growth of hierarchical martensite in epitaxial shape memory films | Shape memory alloys often show a complex hierarchical morphology in the
martensitic state. To understand the formation of this twin-within-twins
microstructure, we examine epitaxial Ni-Mn-Ga films as a model system. In-situ
scanning electron microscopy experiments show beautiful complex twinning
patterns with a number of different mesoscopic twin boundaries and macroscopic
twin boundaries between already twinned regions. We explain the appearance and
geometry of these patterns by constructing an internally twinned martensitic
nucleus, which can take the shape of a diamond or a parallelogram, within the
basic phenomenological theory of martensite. This nucleus already contains the
seeds of the different possible mesoscopic twin boundaries. Nucleation and growth
of these nuclei determines the creation of the hierarchical space-filling
martensitic microstructure. This is in contrast to previous approaches to
explain a hierarchical martensitic microstructure. This new picture of creation
and anisotropic, well-oriented growth of twinned martensitic nuclei explains
the morphology and exact geometrical features of our experimentally observed
twins-within-twins microstructure on the meso- and macroscopic scale.
| 0 | 1 | 0 | 0 | 0 | 0 |
Towards Sparse Hierarchical Graph Classifiers | Recent advances in representation learning on graphs, mainly leveraging graph
convolutional networks, have brought a substantial improvement on many
graph-based benchmark tasks. While novel approaches to learning node embeddings
are highly suitable for node classification and link prediction, their
application to graph classification (predicting a single label for the entire
graph) remains mostly rudimentary, typically using a single global pooling step
to aggregate node features or a hand-designed, fixed heuristic for hierarchical
coarsening of the graph structure. An important step towards ameliorating this
is differentiable graph coarsening---the ability to reduce the size of the
graph in an adaptive, data-dependent manner within a graph neural network
pipeline, analogous to image downsampling within CNNs. However, the previous
prominent approach to pooling has quadratic memory requirements during training
and is therefore not scalable to large graphs. Here we combine several recent
advances in graph neural network design to demonstrate that competitive
hierarchical graph classification results are possible without sacrificing
sparsity. Our results are verified on several established graph classification
benchmarks, and highlight an important direction for future research in
graph-based neural networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Strengths and Weaknesses of Deep Learning Models for Face Recognition Against Image Degradations | Deep convolutional neural networks (CNNs) based approaches are the
state-of-the-art in various computer vision tasks, including face recognition.
Considerable research effort is currently being directed towards further
improving deep CNNs by focusing on more powerful model architectures and better
learning techniques. However, studies systematically exploring the strengths
and weaknesses of existing deep models for face recognition are still
relatively scarce in the literature. In this paper, we try to fill this gap and
study the effects of different covariates on the verification performance of
four recent deep CNN models using the Labeled Faces in the Wild (LFW) dataset.
Specifically, we investigate the influence of covariates related to: image
quality -- blur, JPEG compression, occlusion, noise, image brightness,
contrast, missing pixels; and model characteristics -- CNN architecture, color
information, descriptor computation; and analyze their impact on the face
verification performance of AlexNet, VGG-Face, GoogLeNet, and SqueezeNet. Based
on comprehensive and rigorous experimentation, we identify the strengths and
weaknesses of the deep learning models, and present key areas for potential
future research. Our results indicate that high levels of noise, blur, missing
pixels, and brightness have a detrimental effect on the verification
performance of all models, whereas the impact of contrast changes and
compression artifacts is limited. We also find that the descriptor computation
strategy and color information do not have a significant influence on
performance.
| 0 | 0 | 0 | 1 | 0 | 0 |
BayesVP: a Bayesian Voigt profile fitting package | We introduce a Bayesian approach for modeling Voigt profiles in absorption
spectroscopy and its implementation in the python package, BayesVP, publicly
available at this https URL. The code fits the
absorption line profiles within specified wavelength ranges and generates
posterior distributions for the column density, Doppler parameter, and
redshifts of the corresponding absorbers. The code uses publicly available
efficient parallel sampling packages to sample the posterior and thus can be run on
parallel platforms. BayesVP supports simultaneous fitting for multiple
absorption components in high-dimensional parameter space. We provide other
useful utilities in the package, such as explicit specification of priors of
model parameters, continuum model, Bayesian model comparison criteria, and
posterior sampling convergence check.
| 0 | 1 | 0 | 0 | 0 | 0 |
Efficient Algorithms for Non-convex Isotonic Regression through Submodular Optimization | We consider the minimization of submodular functions subject to ordering
constraints. We show that this optimization problem can be cast as a convex
optimization problem on a space of uni-dimensional measures, with ordering
constraints corresponding to first-order stochastic dominance. We propose new
discretization schemes that lead to simple and efficient algorithms based on
zeroth-, first-, or higher-order oracles; these algorithms also lead to
improvements without isotonic constraints. Finally, our experiments show that
non-convex loss functions can be much more robust to outliers for isotonic
regression, while still leading to an efficient optimization problem.
| 1 | 0 | 0 | 1 | 0 | 0 |
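For context, the standard convex (least-squares) isotonic problem is solved by the classical Pool Adjacent Violators algorithm, sketched below; the paper's contribution goes beyond this baseline to non-convex losses via submodular optimization:

```python
def pav(y):
    """Pool Adjacent Violators: least-squares isotonic regression in O(n)."""
    blocks = []                     # each block is [sum, count]
    for v in y:
        blocks.append([float(v), 1])
        # merge while the previous block's mean exceeds the current one's
        while (len(blocks) > 1
               and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)     # expand each block to its mean
    return out
```

Each violating run is replaced by its average, so the output is the closest non-decreasing sequence in the L2 sense; robustness to outliers is exactly where this squared-loss baseline struggles and non-convex losses help.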
The effects of oxygen in spinel oxide Li1+xTi2-xO4-delta thin films | The evolution from superconducting LiTi2O4-delta to insulating Li4Ti5O12 thin
films has been studied by precisely adjusting the oxygen pressure during the
sample fabrication process. In the superconducting LiTi2O4-delta films, with
the increase of oxygen pressure, the oxygen vacancies are filled, and the
c-axis lattice constant decreases gradually. With the increase of the oxygen
pressure to a certain critical value, the c-axis lattice constant becomes
stable, which implies that the Li4Ti5O12 phase comes into being. The process of
oxygen filling is manifested by the angular bright-field images of the scanning
transmission electron microscopy techniques. The temperature at which the
magnetoresistance changes from positive to negative shows a non-monotonic
behavior with the increase of oxygen pressure. A theoretical explanation of
the oxygen effects on the structure and superconductivity of LiTi2O4-delta has
also been discussed in this work.
| 0 | 1 | 0 | 0 | 0 | 0 |
Qualitative uncertainty principle for Gabor transform on certain locally compact groups | Classes of locally compact groups having the qualitative uncertainty principle
for the Gabor transform have been investigated. These include Moore groups, the
Heisenberg group $\mathbb{H}_n$, the groups $\mathbb{H}_{n} \times D$, where $D$ is a
discrete group, and other low-dimensional nilpotent Lie groups.
| 0 | 0 | 1 | 0 | 0 | 0 |
Strategyproof Mechanisms for Additively Separable Hedonic Games and Fractional Hedonic Games | Additively separable hedonic games and fractional hedonic games have received
considerable attention. They are coalition forming games of selfish agents
based on their mutual preferences. Most of the work in the literature
characterizes the existence and structure of stable outcomes (i.e., partitions
in coalitions), assuming that preferences are given. However, there is little
discussion on this assumption. In fact, agents receive different utilities if
they belong to different partitions, and thus it is natural for them to declare
their preferences strategically in order to maximize their benefit. In this
paper we consider strategyproof mechanisms for additively separable hedonic
games and fractional hedonic games, that is, partitioning methods without
payments such that utility maximizing agents have no incentive to lie about
their true preferences. We focus on social welfare maximization and provide
several lower and upper bounds on the performance achievable by strategyproof
mechanisms for general and specific additive functions. In most of the cases we
provide tight or asymptotically tight results. All our mechanisms are simple
and can be computed in polynomial time. Moreover, all the lower bounds are
unconditional, that is, they do not rely on any computational or complexity
assumptions.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Conditional Analogy GAN: Swapping Fashion Articles on People Images | We present a novel method for solving image analogy problems: it learns the
relation between paired images in the training data, and then generalizes to
generate images that satisfy that relation but were never seen in the training
set. Therefore, we call the method Conditional Analogy
Generative Adversarial Network (CAGAN), as it is based on adversarial training
and employs deep convolutional neural networks. An especially interesting
application of that technique is automatic swapping of clothing on fashion
model photos. Our work has the following contributions. First, the definition
of the end-to-end trainable CAGAN architecture, which implicitly learns
segmentation masks without expensive supervised labeling data. Second,
experimental results show plausible segmentation masks and often convincing
swapped images, given the target article. Finally, we discuss the next steps
for that technique: neural network architecture improvements and more advanced
applications.
| 1 | 0 | 0 | 1 | 0 | 0 |
Inverse antiplane problem on $n$ uniformly stressed inclusions | The inverse problem of antiplane elasticity on determination of the profiles
of $n$ uniformly stressed inclusions is studied. The inclusions are in ideal
contact with the surrounding matrix, the stress field inside the inclusions is
uniform, and at infinity the body is subjected to antiplane uniform shear. The
exterior of the inclusions, an $n$-connected domain, is treated as the image by
a conformal map of an $n$-connected slit domain with the slits lying in the
same line. The inverse problem is solved by quadratures by reducing it to two
Riemann-Hilbert problems on a Riemann surface of genus $n-1$. Samples of two
and three symmetric and non-symmetric uniformly stressed inclusions are
reported.
| 0 | 0 | 1 | 0 | 0 | 0 |
Categorical Structures on Bundle Gerbes and Higher Geometric Prequantisation | We present a construction of a 2-Hilbert space of sections of a bundle gerbe,
a suitable candidate for a prequantum 2-Hilbert space in higher geometric
quantisation. We introduce a direct sum on the morphism categories in the
2-category of bundle gerbes and show that these categories are cartesian
monoidal and abelian. Endomorphisms of the trivial bundle gerbe, or higher
functions, carry the structure of a rig-category, which acts on generic
morphism categories of bundle gerbes. We continue by presenting a
categorification of the hermitean metric on a hermitean line bundle. This is
achieved by introducing a functorial dual that extends the dual of vector
bundles to morphisms of bundle gerbes, and constructing a two-variable
adjunction for the aforementioned rig-module category structure on morphism
categories. Its right internal hom is the module action, composed by taking the
dual of higher functions, while the left internal hom is interpreted as a
bundle gerbe metric. Sections of bundle gerbes are defined as morphisms from
the trivial bundle gerbe to a given bundle gerbe. The resulting categories of
sections carry a rig-module structure over the category of finite-dimensional
Hilbert spaces. A suitable definition of 2-Hilbert spaces is given, modifying
previous definitions by the use of two-variable adjunctions. We prove that the
category of sections of a bundle gerbe fits into this framework, thus obtaining
a 2-Hilbert space of sections. In particular, this can be constructed for
prequantum bundle gerbes in problems of higher geometric quantisation. We
define a dimensional reduction functor and show that the categorical structures
introduced on bundle gerbes naturally reduce to their counterparts on hermitean
line bundles with connections. In several places in this thesis, we provide
examples, making 2-Hilbert spaces of sections and dimensional reduction very
explicit.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Monkeytyping Solution to the YouTube-8M Video Understanding Challenge | This article describes the final solution of team monkeytyping, who finished
in second place in the YouTube-8M video understanding challenge. The dataset
used in this challenge is a large-scale benchmark for multi-label video
classification. We extend the work in [1] and propose several improvements for
frame sequence modeling. We propose a network structure called Chaining that
can better capture the interactions between labels. Also, we report our
approaches in dealing with multi-scale information and attention pooling. In
addition, we find that using the output of a model ensemble as a side target in
training can boost single model performance. We report our experiments in
bagging, boosting, cascade, and stacking, and propose a stacking algorithm
called attention weighted stacking. Our final submission is an ensemble that
consists of 74 sub models, all of which are listed in the appendix.
| 1 | 0 | 0 | 0 | 0 | 0 |
Brain EEG Time Series Selection: A Novel Graph-Based Approach for Classification | Brain electroencephalography (EEG) classification has been widely applied to
analyze cerebral diseases in recent years. Unfortunately, invalid/noisy EEGs
degrade the diagnosis performance and most previously developed methods ignore
the necessity of EEG selection for classification. To this end, this paper
proposes a novel maximum weight clique-based EEG selection approach, named
mwcEEGs, which maps EEG selection to the search for maximum similarity-weighted
cliques in an improved Fréchet-distance-weighted undirected EEG graph,
simultaneously considering edge weights and vertex weights. Our mwcEEGs
improves the classification performance by selecting intra-clique pairwise
similar and inter-clique discriminative EEGs with similarity threshold
$\delta$. Experimental results demonstrate the algorithm effectiveness compared
with the state-of-the-art time series selection algorithms on real-world EEG
datasets.
| 0 | 0 | 0 | 1 | 1 | 0 |
Modelling dependency completion in sentence comprehension as a Bayesian hierarchical mixture process: A case study involving Chinese relative clauses | We present a case-study demonstrating the usefulness of Bayesian hierarchical
mixture modelling for investigating cognitive processes. In sentence
comprehension, it is widely assumed that the distance between linguistic
co-dependents affects the latency of dependency resolution: the longer the
distance, the longer the retrieval time (the distance-based account). An
alternative theory, direct-access, assumes that retrieval times are a mixture
of two distributions: one distribution represents successful retrievals (these
are independent of dependency distance) and the other represents an initial
failure to retrieve the correct dependent, followed by a reanalysis that leads
to successful retrieval. We implement both models as Bayesian hierarchical
models and show that the direct-access model explains Chinese relative clause
reading time data better than the distance account.
| 1 | 0 | 0 | 1 | 0 | 0 |
Performance analysis of local ensemble Kalman filter | Ensemble Kalman filter (EnKF) is an important data assimilation method for
high dimensional geophysical systems. Efficient implementation of EnKF in
practice often involves the localization technique, which updates each
component using only information within a local radius. This paper rigorously
analyzes the local EnKF (LEnKF) for linear systems, and shows that the filter
error can be dominated by the ensemble covariance, as long as 1) the sample
size exceeds the logarithmic of state dimension and a constant that depends
only on the local radius; 2) the forecast covariance matrix admits a stable
localized structure. In particular, this indicates that with small system and
observation noises, the filter will remain accurate over long times even if the
initialization is not. The analysis also reveals an intrinsic inconsistency
caused by the localization technique, and a stable localized structure is
necessary to control this inconsistency. While this structure is usually taken
for granted for the operation of LEnKF, it can also be rigorously proved for
linear systems with sparse local observations and weak local interactions.
These theoretical results are also validated by numerical implementation of
LEnKF on a simple stochastic turbulence in two dynamical regimes.
| 0 | 0 | 1 | 1 | 0 | 0 |
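The localization step discussed above can be sketched by tapering the sample covariance beyond a cutoff radius. A hard cutoff is used here for brevity; practical LEnKF implementations typically use smooth tapers such as Gaspari-Cohn, and the ensemble below is synthetic:

```python
import numpy as np

def localized_covariance(ensemble, radius):
    """Sample covariance of an (n_ens, dim) ensemble with entries beyond
    the localization radius set to zero (hard-cutoff taper)."""
    C = np.cov(ensemble, rowvar=False)
    i, j = np.indices(C.shape)
    return C * (np.abs(i - j) <= radius)

rng = np.random.default_rng(0)
ens = rng.standard_normal((20, 10))       # 20 members, 10 state components
Cl = localized_covariance(ens, radius=2)
```

Each state component is then updated using only this tapered covariance, so only nearby observations influence it; the abstract's point is that this is sound precisely when the true forecast covariance admits a stable localized structure.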
The homology class of a Poisson transversal | This note is devoted to the study of the homology class of a compact Poisson
transversal in a Poisson manifold. For specific classes of Poisson structures,
such as unimodular Poisson structures and Poisson manifolds with closed leaves,
we prove that all their compact Poisson transversals represent non-trivial
homology classes, generalizing the symplectic case. We discuss several examples
in which this property does not hold, as well as a weaker version of this
property, which holds for log-symplectic structures. Finally, we extend our
results to Dirac geometry.
| 0 | 0 | 1 | 0 | 0 | 0 |
Discrete Time Dynamic Programming with Recursive Preferences: Optimality and Applications | This paper provides an alternative approach to the theory of dynamic
programming, designed to accommodate the kinds of recursive preference
specifications that have become popular in economic and financial analysis,
while still supporting traditional additively separable rewards. The approach
exploits the theory of monotone convex operators, which turns out to be well
suited to dynamic maximization. The intuition is that convexity is preserved
under maximization, so convexity properties found in preferences extend
naturally to the Bellman operator.
| 0 | 0 | 0 | 0 | 0 | 1 |
Anisotropic triangulations via discrete Riemannian Voronoi diagrams | The construction of anisotropic triangulations is desirable for various
applications, such as the numerical solving of partial differential equations
and the representation of surfaces in graphics. To solve this notoriously
difficult problem in a practical way, we introduce the discrete Riemannian
Voronoi diagram, a discrete structure that approximates the Riemannian Voronoi
diagram. This structure has been implemented and was shown to lead to good
triangulations in $\mathbb{R}^2$ and on surfaces embedded in $\mathbb{R}^3$ as
detailed in our experimental companion paper.
In this paper, we study theoretical aspects of our structure. Given a finite
set of points $\cal P$ in a domain $\Omega$ equipped with a Riemannian metric,
we compare the discrete Riemannian Voronoi diagram of $\cal P$ to its
Riemannian Voronoi diagram. Both diagrams have dual structures called the
discrete Riemannian Delaunay and the Riemannian Delaunay complex. We provide
conditions that guarantee that these dual structures are identical. It then
follows from previous results that the discrete Riemannian Delaunay complex can
be embedded in $\Omega$ under sufficient conditions, leading to an anisotropic
triangulation with curved simplices. Furthermore, we show that, under similar
conditions, the simplices of this triangulation can be straightened.
| 1 | 0 | 0 | 0 | 0 | 0 |
Heteroskedastic PCA: Algorithm, Optimality, and Applications | Principal component analysis (PCA) and singular value decomposition (SVD) are
widely used in statistics, machine learning, and applied mathematics. They have
been well studied in the case of homoskedastic noise, where the noise levels of
the contamination are homogeneous.
In this paper, we consider PCA and SVD in the presence of heteroskedastic
noise, which arises naturally in a range of applications. We introduce a
general framework for heteroskedastic PCA and propose an algorithm called
HeteroPCA, which involves iteratively imputing the diagonal entries to remove
the bias due to heteroskedasticity. This procedure is computationally efficient
and provably optimal under the generalized spiked covariance model. A key
technical step is a deterministic robust perturbation analysis on the singular
subspace, which can be of independent interest. The effectiveness of the
proposed algorithm is demonstrated in a suite of applications, including
heteroskedastic low-rank matrix denoising, Poisson PCA, and SVD based on
heteroskedastic and incomplete data.
| 0 | 0 | 0 | 1 | 0 | 0 |
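The diagonal-imputation idea behind HeteroPCA can be sketched in a few lines. This is our own illustrative reconstruction, not the authors' reference implementation; the function name and iteration count are assumptions:

```python
import numpy as np

def hetero_pca(S, rank, n_iter=50):
    """Estimate the top `rank` singular subspace of a symmetric matrix
    whose diagonal is biased by heteroskedastic noise: discard the
    diagonal, then iteratively re-impute it from the current low-rank
    approximation (illustrative sketch only)."""
    N = np.array(S, dtype=float)
    np.fill_diagonal(N, 0.0)                     # drop the biased diagonal
    for _ in range(n_iter):
        U, d, Vt = np.linalg.svd(N)
        low_rank = (U[:, :rank] * d[:rank]) @ Vt[:rank, :]
        np.fill_diagonal(N, np.diag(low_rank))   # update the diagonal only
    U, _, _ = np.linalg.svd(N)
    return U[:, :rank]

# Demo: rank-1 signal with a heteroskedastically biased diagonal.
rng = np.random.default_rng(0)
u = rng.normal(size=20)
u /= np.linalg.norm(u)
S = 10.0 * np.outer(u, u) + np.diag(rng.uniform(0.0, 5.0, size=20))
u_hat = hetero_pca(S, rank=1)[:, 0]
```

Because only the diagonal is touched at each step, the unbiased off-diagonal information drives the estimate, which is the source of the bias removal described in the abstract.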
A Manifesto for Web Science @ 10 | Twenty-seven years ago, one of the biggest societal changes in human history
began slowly when the technical foundations for the World Wide Web were defined
by Tim Berners-Lee. Ever since, the Web has grown exponentially, reaching far
beyond its original technical foundations and deeply affecting the world today
- and even more so the society of the future. We have seen that the Web can
influence the realization of human rights and even the pursuit of happiness.
The Web provides an infrastructure to help us to learn, to work, to communicate
with loved ones, and to provide entertainment. However, it also creates an
environment affected by the digital divide between those who have and those who
do not have access. Additionally, the Web provides challenges we must
understand if we are to find a viable balance between data ownership and
privacy protection, between overwhelming surveillance and the prevention of
terrorism. For the Web to succeed, we need to understand its societal
challenges including increased crime, the impact of social platforms and
socio-economic discrimination, and we must work towards fairness, social
inclusion, and open governance.
Ten years ago, the field of Web Science was created to explore the science
underlying the Web from a socio-technical perspective including its
mathematical properties, engineering principles, and social impacts. Ten years
later, we are learning much as the interdisciplinary endeavor to understand the
Web's global information space continues to grow.
In this article we want to elicit the major lessons we have learned through
Web Science and make some cautious predictions of what to expect next.
| 1 | 0 | 0 | 0 | 0 | 0 |
LAMOST Spectroscopic Survey of the Galactic Anticentre (LSS-GAC): the second release of value-added catalogues | We present the second release of value-added catalogues of the LAMOST
Spectroscopic Survey of the Galactic Anticentre (LSS-GAC DR2). The catalogues
present values of radial velocity $V_{\rm r}$, atmospheric parameters ---
effective temperature $T_{\rm eff}$, surface gravity log$g$, metallicity
[Fe/H], $\alpha$-element to iron (metal) abundance ratio [$\alpha$/Fe]
([$\alpha$/M]), elemental abundances [C/H] and [N/H], and absolute magnitudes
${\rm M}_V$ and ${\rm M}_{K_{\rm s}}$ deduced from 1.8 million spectra of 1.4
million unique stars targeted by the LSS-GAC between September 2011 and June
2014. The catalogues also give values of interstellar reddening, distance and
orbital parameters determined with a variety of techniques, as well as proper
motions and multi-band photometry from the far-UV to the mid-IR collected from
the literature and various surveys. Accuracies of radial velocities reach
5 km s$^{-1}$ for late-type stars, and those of distance estimates range between
10 -- 30 per cent, depending on the spectral signal-to-noise ratios. Precisions
of [Fe/H], [C/H] and [N/H] estimates reach 0.1 dex, and those of [$\alpha$/Fe]
and [$\alpha$/M] reach 0.05 dex. The large number of stars, the contiguous sky
coverage, the simple yet non-trivial target selection function and the robust
estimates of stellar radial velocities and atmospheric parameters, distances
and elemental abundances, make the catalogues a valuable data set to study the
structure and evolution of the Galaxy, especially the solar neighbourhood and
the outer disk.
| 0 | 1 | 0 | 0 | 0 | 0 |
Counterfactual Fairness | Machine learning can impact people with legal or ethical consequences when it
is used to automate decisions in areas such as insurance, lending, hiring, and
predictive policing. In many of these scenarios, previous decisions have been
made that are unfairly biased against certain subpopulations, for example those
of a particular race, gender, or sexual orientation. Since this past data may
be biased, machine learning predictors must account for this to avoid
perpetuating or creating discriminatory practices. In this paper, we develop a
framework for modeling fairness using tools from causal inference. Our
definition of counterfactual fairness captures the intuition that a decision is
fair towards an individual if it is the same in (a) the actual world and (b) a
counterfactual world where the individual belonged to a different demographic
group. We demonstrate our framework on a real-world problem of fair prediction
of success in law school.
| 1 | 0 | 0 | 1 | 0 | 0 |
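The definition can be illustrated on a toy structural causal model. The variables and coefficients below are invented for illustration and do not come from the paper:

```python
import numpy as np

# Toy structural causal model (coefficients invented for illustration):
# A -> X and U -> X, where U is a latent background variable.
rng = np.random.default_rng(0)
n = 2000
A = rng.integers(0, 2, size=n)             # protected attribute
U = rng.normal(size=n)                     # latent background variable
X = U + 1.5 * A + rng.normal(0.0, 0.1, n)  # feature, a descendant of A

# Counterfactual world: flip A and propagate the change through X.
X_cf = U + 1.5 * (1 - A) + rng.normal(0.0, 0.1, n)

# A predictor built on X shifts under the flip; one built only on the
# non-descendant U is counterfactually fair by construction.
shift_X = np.abs(0.7 * X - 0.7 * X_cf).mean()
shift_U = np.abs(0.7 * U - 0.7 * U).mean()
```

A counterfactually fair decision is one whose output is invariant to such flips, which the U-based predictor satisfies trivially since intervening on A leaves U unchanged.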
A Frame Tracking Model for Memory-Enhanced Dialogue Systems | Recently, resources and tasks were proposed to go beyond state tracking in
dialogue systems. An example is the frame tracking task, which requires
recording multiple frames, one for each user goal set during the dialogue. This
allows a user, for instance, to compare items corresponding to different goals.
This paper proposes a model which takes as input the list of frames created so
far during the dialogue, the current user utterance as well as the dialogue
acts, slot types, and slot values associated with this utterance. The model
then outputs the frame being referenced by each triple of dialogue act, slot
type, and slot value. We show that on the recently published Frames dataset,
this model significantly outperforms a previously proposed rule-based baseline.
In addition, we propose an extensive analysis of the frame tracking task by
dividing it into sub-tasks and assessing their difficulty with respect to our
model.
| 1 | 0 | 0 | 0 | 0 | 0 |
How to place an obstacle having a dihedral symmetry centered at a given point inside a disk so as to optimize the fundamental Dirichlet eigenvalue | A generic model for the shape optimization problems we consider in this paper
is the optimization of the Dirichlet eigenvalues of the Laplace operator with a
volume constraint. We deal with an obstacle placement problem which can be
formulated as the following eigenvalue optimization problem: Fix two positive
real numbers $r_1$ and $A$. We consider a disk $B\subset \mathbb{R}^2$ having
radius $r_1$. We want to place an obstacle $P$ of area $A$ within $B$ so as to
maximize or minimize the fundamental Dirichlet eigenvalue $\lambda_1$ for the
Laplacian on $B\setminus P$. That is, we want to study the behavior of the
function $\rho \mapsto \lambda_1(B\setminus\rho(P))$, where $\rho$ runs over
the set of all rigid motions of the plane fixing the center of mass for $P$
such that $\rho(P)\subset B$. In this paper, we consider a non-concentric
obstacle placement problem. The extremal configurations correspond to the cases
where an axis of symmetry of $P$ coincides with an axis of symmetry of $B$. We
also characterize the maximizing and the minimizing configurations in our main
result, viz., Theorem 4.1. Equation (6), Propositions 5.1 and 5.2 imply Theorem
4.1. We give many different generalizations of our result. At the end, we
provide some numerical evidence to validate our main theorem for the case where
the obstacle $P$ has $\mathbb{D}_4$ symmetry. For the $n$ odd case, we identify
some of the extremal configurations for $\lambda_1$. We prove that equation (6)
and Proposition 5.1 hold true for $n$ odd too. We highlight some of the
difficulties faced in proving Proposition 5.2 for this case. We provide
numerical evidence for $n=5$ and conjecture that Theorem 4.1 holds true for $n$
odd too.
| 0 | 0 | 1 | 0 | 0 | 0 |
The mapping class groups of reducible Heegaard splittings of genus two | The manifold which admits a genus-$2$ reducible Heegaard splitting is one of
the $3$-sphere, $\mathbb{S}^2 \times \mathbb{S}^1$, lens spaces and their
connected sums. For each of those manifolds except most lens spaces, the
mapping class group of the genus-$2$ splitting was shown to be finitely
presented. In this work, we study the remaining generic lens spaces, and show
that the mapping class group of the genus-$2$ Heegaard splitting is finitely
presented for any lens space by giving its explicit presentation. As an
application, we show that the fundamental groups of the spaces of the genus-$2$
Heegaard splittings of lens spaces are all finitely presented.
| 0 | 0 | 1 | 0 | 0 | 0 |
Linguistic Relativity and Programming Languages | The use of programming languages can wax and wane across the decades. We
examine the split-apply-combine pattern that is common in statistical
computing, and consider how its invocation or implementation in languages like
MATLAB and APL differs from that in R/dplyr. The differences in spelling illustrate how
the concept of linguistic relativity applies to programming languages in ways
that are analogous to human languages. Finally, we discuss how Julia, by being
a high performance yet general purpose dynamic language, allows its users to
express different abstractions to suit individual preferences.
| 1 | 0 | 0 | 1 | 0 | 0 |
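As one more data point on how a language "spells" the pattern, here is split-apply-combine written out explicitly in plain Python, where the three phases appear as separate steps rather than a single built-in verb:

```python
from collections import defaultdict

rows = [("a", 1), ("a", 2), ("b", 3), ("b", 4), ("b", 5)]

# split: bucket rows by key
groups = defaultdict(list)
for key, value in rows:
    groups[key].append(value)

# apply + combine: reduce each bucket and gather the results
means = {key: sum(vals) / len(vals) for key, vals in groups.items()}
print(means)  # {'a': 1.5, 'b': 4.0}
```

In R/dplyr the same computation collapses into `group_by` plus `summarise`; the contrast in spelling is exactly the kind of linguistic-relativity effect the paper discusses.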
Double-sided probing by map of Asplund's distances using Logarithmic Image Processing in the framework of Mathematical Morphology | We establish the link between Mathematical Morphology and the map of
Asplund's distances between a probe and a grey scale function, using the
Logarithmic Image Processing scalar multiplication. We demonstrate that the map
is the logarithm of the ratio between a dilation and an erosion of the function
by a structuring function: the probe. The dilations and erosions are mappings
from the lattice of the images into the lattice of the positive functions.
Using a flat structuring element, the expression of the map of Asplund's
distances can be simplified with a dilation and an erosion of the image; these
mappings stay in the lattice of the images. We illustrate our approach with an
example of pattern matching with a non-flat structuring function.
| 1 | 0 | 1 | 0 | 0 | 0 |
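For the flat-structuring-element case described above, the map of Asplund's distances reduces to the log-ratio of a grey-scale dilation (local maximum) and erosion (local minimum). The 1-D sketch below is our own simplification and assumes a strictly positive signal:

```python
import numpy as np

def asplund_map_flat(f, radius):
    """Map of Asplund's distances for a strictly positive 1-D signal f,
    probed with a flat structuring element of half-width `radius`: the
    log-ratio of a flat dilation (local max) and erosion (local min).
    The function name and 1-D setting are our own simplification."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out[i] = np.log(f[lo:hi].max() / f[lo:hi].min())
    return out

signal = np.array([1.0, 2.0, 4.0, 2.0, 1.0])
amap = asplund_map_flat(signal, radius=1)
```

Both the local max and the local min stay in the lattice of images, which is the simplification the abstract attributes to flat structuring elements.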
Regression approaches for Approximate Bayesian Computation | This book chapter introduces regression approaches and regression adjustment
for Approximate Bayesian Computation (ABC). Regression adjustment adjusts
parameter values after rejection sampling in order to account for the imperfect
match between simulations and observations. This mismatch can be more
pronounced when there are many summary statistics, a phenomenon known as the
curse of dimensionality. Because of this imperfect
match, credibility intervals obtained with regression approaches can be
inflated compared to true credibility intervals. The chapter presents the main
concepts underlying regression adjustment. A theorem that compares theoretical
properties of posterior distributions obtained with and without regression
adjustment is presented. Last, a practical application of regression adjustment
in population genetics shows that regression adjustment shrinks posterior
distributions compared to rejection approaches, which is a solution to avoid
inflated credibility intervals.
| 0 | 0 | 0 | 1 | 0 | 0 |
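A minimal sketch of rejection ABC followed by the classic local-linear regression adjustment (the shift by the fitted slope times the summary mismatch); the toy model and tolerance below are our own choices, not the chapter's examples:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ABC problem: theta drawn from the prior, summary s = theta + noise.
theta = rng.normal(0.0, 2.0, size=5000)       # draws from the prior
s = theta + rng.normal(0.0, 0.5, size=5000)   # simulated summary statistics
s_obs = 1.0                                   # observed summary

# Rejection step: keep draws whose summary lies near the observation.
keep = np.abs(s - s_obs) < 0.3
theta_acc, s_acc = theta[keep], s[keep]

# Local-linear regression adjustment: regress theta on (s - s_obs) among
# accepted draws, then shift each draw as if its summary equalled s_obs.
X = np.column_stack([np.ones(s_acc.size), s_acc - s_obs])
beta, *_ = np.linalg.lstsq(X, theta_acc, rcond=None)
theta_adj = theta_acc - beta[1] * (s_acc - s_obs)
```

The adjusted sample is tighter than the raw rejection sample, which is the posterior shrinkage the chapter reports in its population-genetics application.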
Feature learning in feature-sample networks using multi-objective optimization | Data and knowledge representation are fundamental concepts in machine
learning. The quality of the representation impacts the performance of the
learning model directly. Feature learning transforms or enhances raw data to
structures that are effectively exploited by those models. In recent years,
several works have been using complex networks for data representation and
analysis. However, no feature learning method has been proposed for such
category of techniques. Here, we present an unsupervised feature learning
mechanism that works on datasets with binary features. First, the dataset is
mapped into a feature-sample network. Then, a multi-objective optimization
process selects a set of new vertices to produce an enhanced version of the
network. The new features depend on a nonlinear function of a combination of
preexisting features. Effectively, the process projects the input data into a
higher-dimensional space. To solve the optimization problem, we design two
metaheuristics based on the lexicographic genetic algorithm and the improved
strength Pareto evolutionary algorithm (SPEA2). We show that the enhanced
network contains more information and can be exploited to improve the
performance of machine learning methods. The advantages and disadvantages of
each optimization strategy are discussed.
| 1 | 0 | 0 | 0 | 0 | 0 |
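The first step described above, mapping a binary dataset to a feature-sample (bipartite) network, can be sketched directly; the node-naming convention is ours:

```python
def feature_sample_network(data):
    """Bipartite edge list linking sample node s<i> to feature node f<j>
    whenever sample i has binary feature j set. A sketch of the
    representation step only, not of the multi-objective optimization."""
    return [(f"s{i}", f"f{j}")
            for i, row in enumerate(data)
            for j, bit in enumerate(row) if bit]

data = [[1, 0, 1],
        [0, 1, 1]]
edges = feature_sample_network(data)
print(edges)  # [('s0', 'f0'), ('s0', 'f2'), ('s1', 'f1'), ('s1', 'f2')]
```

The paper's method then adds new feature vertices to this bipartite graph, with the additions chosen by the evolutionary multi-objective search.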
Analog control with two Artificial Axons | The artificial axon is a recently introduced synthetic assembly of supported
lipid bilayers and voltage gated ion channels, displaying the basic
electrophysiology of nerve cells. Here we demonstrate the use of two artificial
axons as control elements to achieve a simple task. Namely, we steer a remote
control car towards a light source, using the sensory input dependent firing
rate of the axons as the control signal for turning left or right. We present
the result in the form of the analysis of a movie of the car approaching the
light source. In general terms, with this work we pursue a constructivist
approach to exploring the nexus between machine language at the nerve cell
level and behavior.
| 0 | 0 | 0 | 0 | 1 | 0 |
Designing a cost-time-quality-efficient grinding process using MODM methods | In this paper a multi-objective mathematical model has been used to optimize
grinding parameters include workpiece speed, depth of cut and wheel speed which
highly affect the final surface quality. The mathematical model of the
optimization problem consists of three conflict objective functions subject to
wheel wear and production rate constraints. Exact methods can solve the NLP
model in few seconds, therefore using Meta-heuristic algorithms which provide
near optimal solutions in not suitable. Considering this, five Multi-Objective
Decision Making methods have been used to solve the multi-objective
mathematical model using GAMS software to achieve the optimal parameters of the
grinding process. The Multi-Objective Decision Making methods provide different
effective solutions where the decision maker can choose each solution in
different situations. Different criteria have been considered to evaluate the
performance of the five Multi-Objective Decision Making methods. Also,
Technique for Order of Preference by Similarity to Ideal Solution method has
been used to obtain the priority of each method and determine which
Multi-Objective Decision Making method performs better considering all criteria
simultaneously. The results indicated that the Weighted Sum Method and the Goal
Programming method are the best Multi-Objective Decision Making methods,
providing solutions that are competitive with each other. In addition, these methods obtained solutions which
have minimum grinding time, cost and surface roughness among other
Multi-Objective Decision Making methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
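The Weighted Sum Method singled out in the results scalarizes the three objectives into one. The sketch below uses invented stand-in objective functions; the paper's actual NLP model is solved in GAMS, not reproduced here:

```python
import numpy as np

# Stand-in objectives over a grinding-like decision (v = workpiece speed,
# d = depth of cut). These functions are invented for illustration and
# are not the paper's model.
def prod_time(v, d): return 10.0 / (v * d)    # time falls as v, d grow
def cost(v, d):      return 0.5 * v + 2.0 * d # wear cost rises with v, d
def roughness(v, d): return 0.1 * v * d       # roughness rises with v, d

w = np.array([0.4, 0.3, 0.3])                 # decision-maker weights

# Weighted Sum Method: scalarize and search a feasible grid.
V, D = np.meshgrid(np.linspace(1.0, 10.0, 200), np.linspace(0.1, 1.0, 200))
score = w[0] * prod_time(V, D) + w[1] * cost(V, D) + w[2] * roughness(V, D)
i, j = np.unravel_index(np.argmin(score), score.shape)
best_v, best_d = V[i, j], D[i, j]
```

Varying the weight vector traces out different compromise solutions, which is how the method hands the decision maker a menu of efficient operating points.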
Treewidth distance on phylogenetic trees | In this article we study the treewidth of the \emph{display graph}, an
auxiliary graph structure obtained from the fusion of phylogenetic (i.e.,
evolutionary) trees at their leaves. Earlier work has shown that the treewidth
of the display graph is bounded if the trees are in some formal sense
topologically similar. Here we further expand upon this relationship. We
analyse a number of reduction rules which are commonly used in the
phylogenetics literature to obtain fixed parameter tractable algorithms. In
some cases (the \emph{subtree} reduction) the reduction rules behave similarly
with respect to treewidth, while others (the \emph{cluster} reduction) behave
very differently, and the behaviour of the \emph{chain reduction} is
particularly intriguing because of its link with graph separators and forbidden
minors. We also show that the gap between treewidth and Tree Bisection and
Reconnect (TBR) distance can be infinitely large, and that, unlike for example
planar graphs, the treewidth of the display graph can be as much as linear in
its number of vertices. On a slightly different note we show that if a display
graph is formed from the fusion of a phylogenetic network and a tree, rather
than from two trees, the treewidth of the display graph is bounded whenever the
tree can be topologically embedded ("displayed") within the network. This opens
the door to the formulation of the display problem in Monadic Second Order
Logic (MSOL). A number of other auxiliary results are given. We conclude with a
discussion and list a number of open problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
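The display graph named above, the disjoint union of two trees with equal leaves identified, can be built in a few lines; the edge-list format and labels are our own convention:

```python
def display_graph(tree1_edges, tree2_edges):
    """Adjacency dict of the display graph: the union of two trees whose
    internal node labels are disjoint and whose leaf labels are shared,
    so equal leaves are automatically identified (our own sketch)."""
    g = {}
    for u, v in tree1_edges + tree2_edges:
        g.setdefault(u, set()).add(v)
        g.setdefault(v, set()).add(u)
    return g

# Two small trees on the shared leaf set {a, b, c}.
t1 = [("r1", "x1"), ("x1", "a"), ("x1", "b"), ("r1", "c")]
t2 = [("r2", "y1"), ("y1", "a"), ("y1", "c"), ("r2", "b")]
g = display_graph(t1, t2)
```

Each shared leaf ends up with neighbours in both trees, which is the fusion at the leaves whose treewidth the paper studies.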