title (string, lengths 7–239) | abstract (string, lengths 7–2.76k) | cs (int64, 0–1) | phy (int64, 0–1) | math (int64, 0–1) | stat (int64, 0–1) | quantitative biology (int64, 0–1) | quantitative finance (int64, 0–1) |
---|---|---|---|---|---|---|---|
A visual search engine for Bangladeshi laws | Browsing and finding relevant information for Bangladeshi laws is a challenge
faced by all law students and researchers in Bangladesh, and by citizens who
want to learn about any legal procedure. Some law archives in Bangladesh are
digitized, but lack proper tools to organize the data meaningfully. We present
a text visualization tool that utilizes machine learning techniques to make the
searching of laws quicker and easier. Using Doc2Vec to layout law article
nodes, link mining techniques to visualize relevant citation networks, and
named entity recognition to quickly find relevant sections in long law
articles, our tool provides a faster and better search experience to the users.
Qualitative feedback from law researchers, students, and government officials
shows promise for visually intuitive search tools in the context of
governmental, legal, and constitutional data in developing countries, where
digitized data does not necessarily pave the way towards an easy access to
information.
| 1 | 0 | 0 | 1 | 0 | 0 |
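The row above describes a Doc2Vec-based layout of law-article nodes. As a hedged illustration (not the authors' pipeline), the sketch below embeds a few placeholder articles with gensim's Doc2Vec (gensim 4 API assumed) and projects the vectors to 2-D with scikit-learn's PCA for node placement; the corpus, tags, and parameter values are all hypothetical.

```python
# Hypothetical sketch of a Doc2Vec-based layout for law articles (not the
# authors' code): embed each article, then project to 2-D for node placement.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument  # gensim >= 4 assumed
from sklearn.decomposition import PCA

articles = {  # placeholder corpus, not real statutes
    "act_1950_s3": "procedure for registration of land transfer deeds",
    "act_1950_s4": "fees payable on registration of deeds",
    "penal_s299": "definition of culpable homicide",
}

corpus = [TaggedDocument(words=text.split(), tags=[doc_id])
          for doc_id, text in articles.items()]
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

# 2-D coordinates for the visual layout of article nodes.
vectors = [model.dv[doc_id] for doc_id in articles]
coords = PCA(n_components=2).fit_transform(vectors)
for doc_id, (x, y) in zip(articles, coords):
    print(f"{doc_id}: ({x:.2f}, {y:.2f})")
```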
Minors of two-connected graphs of large path-width | Let $P$ be a graph with a vertex $v$ such that $P\backslash v$ is a forest,
and let $Q$ be an outerplanar graph. We prove that there exists a number
$p=p(P,Q)$ such that every 2-connected graph of path-width at least $p$ has a
minor isomorphic to $P$ or $Q$. This result answers a question of Seymour and
implies a conjecture of Marshall and Wood. The proof is based on a new property
of tree-decompositions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Computable Operations on Compact Subsets of Metric Spaces with Applications to Fréchet Distance and Shape Optimization | We extend the Theory of Computation on real numbers, continuous real
functions, and bounded closed Euclidean subsets, to compact metric spaces
$(X,d)$: thereby generically including computational and optimization problems
over higher types, such as the compact 'hyper' spaces of (i) nonempty closed
subsets of $X$ w.r.t. Hausdorff metric, and of (ii) equicontinuous functions on
$X$. The thus obtained Cartesian closure is shown to exhibit the same
structural properties as in the Euclidean case, particularly regarding function
pre/image. This allows us to assert the computability of (iii) Fréchet
Distances between curves and between loops, as well as of (iv)
constrained/Shape Optimization.
| 1 | 0 | 1 | 0 | 0 | 0 |
Siamese Capsule Networks | Capsule Networks have shown encouraging results on \textit{de facto} benchmark
computer vision datasets such as MNIST, CIFAR and smallNORB. However, they have
yet to be tested on tasks where (1) the entities detected inherently have more
complex internal representations, (2) there are very few instances per class
to learn from, and (3) point-wise classification is not suitable. Hence,
this paper carries out experiments on face verification in both controlled and
uncontrolled settings that together address these points. In doing so we
introduce \textit{Siamese Capsule Networks}, a new variant that can be used for
pairwise learning tasks. The model is trained using contrastive loss with
$\ell_2$-normalized capsule encoded pose features. We find that \textit{Siamese
Capsule Networks} perform well against strong baselines on both pairwise
learning datasets, yielding best results in the few-shot learning setting where
image pairs in the test set contain unseen subjects.
| 0 | 0 | 0 | 1 | 0 | 0 |
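The abstract above trains with a contrastive loss on $\ell_2$-normalized capsule pose features. A minimal PyTorch sketch of that loss, with illustrative margin, batch size, and feature dimension:

```python
# Minimal sketch (PyTorch) of the contrastive loss on l2-normalized pose
# features described above; margin and shapes are illustrative.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, same, margin=1.0):
    """z1, z2: (batch, dim) capsule pose features; same: 1 if same subject."""
    z1 = F.normalize(z1, p=2, dim=1)
    z2 = F.normalize(z2, p=2, dim=1)
    d = (z1 - z2).norm(dim=1)                     # Euclidean distance
    pos = same * d.pow(2)                         # pull matching pairs together
    neg = (1 - same) * F.relu(margin - d).pow(2)  # push non-matching pairs apart
    return (pos + neg).mean()

z1, z2 = torch.randn(8, 16), torch.randn(8, 16)
same = torch.randint(0, 2, (8,)).float()
print(contrastive_loss(z1, z2, same))
```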
Isolated and Ensemble Audio Preprocessing Methods for Detecting Adversarial Examples against Automatic Speech Recognition | An adversarial attack is an exploitative process in which minute alterations
are made to natural inputs, causing the inputs to be misclassified by neural
models. In the field of speech recognition, this has become an issue of
increasing significance. Although adversarial attacks were originally
introduced in computer vision, they have since infiltrated the realm of speech
recognition. In 2017, a genetic attack was shown to be quite potent against the
Speech Commands Model. Limited-vocabulary speech classifiers, such as the
Speech Commands Model, are used in a variety of applications, particularly in
telephony; as such, adversarial examples produced by this attack pose a major
security threat. This paper explores various methods of detecting these
adversarial examples with combinations of audio preprocessing. One particular
combined defense incorporating compressions, speech coding, filtering, and
audio panning was shown to be quite effective against the attack on the Speech
Commands Model, detecting audio adversarial examples with 93.5% precision and
91.2% recall.
| 1 | 0 | 0 | 0 | 0 | 0 |
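One way to realize the preprocessing-based detection the abstract describes is to flag inputs whose predicted label flips after a transform such as filtering (standing in here for compression or speech coding). The sketch below uses a hypothetical `classify` placeholder rather than the actual Speech Commands model:

```python
# Hedged sketch of preprocessing-based adversarial detection: flag an input
# as adversarial when the model's label flips after audio preprocessing.
# `classify` is a hypothetical stand-in for the Speech Commands model.
import numpy as np
from scipy.signal import butter, lfilter

def lowpass(audio, sr=16000, cutoff=4000, order=5):
    b, a = butter(order, cutoff / (sr / 2), btype="low")
    return lfilter(b, a, audio)

def classify(audio):
    # Placeholder classifier; a real detector would call the speech model.
    return int(np.mean(audio) > 0)

def is_adversarial(audio, transforms=(lowpass,)):
    base = classify(audio)
    # An ensemble votes: any disagreement after preprocessing raises a flag.
    return any(classify(t(audio)) != base for t in transforms)

audio = np.random.randn(16000)  # one second of dummy audio
print(is_adversarial(audio))
```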
Revisiting the cavity-method threshold for random 3-SAT | A detailed Monte Carlo study of the satisfiability threshold for random 3-SAT
has been undertaken. In combination with a monotonicity assumption we find that
the threshold for random 3-SAT satisfies $\alpha_3 \leq 4.262$. If the
assumption is correct, this means that the actual threshold value for $k=3$ is
lower than that given by the cavity method. In contrast the latter has recently
been shown to give the correct value for large $k$. Our results thus indicate
that there are distinct behaviors for $k$ above and below some critical $k_c$,
and the cavity method may provide a correct mean-field picture for the range
above $k_c$.
| 0 | 1 | 0 | 0 | 0 | 0 |
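For context, a toy Monte Carlo estimate of the satisfiability probability at clause density $\alpha$ can be written as below; it brute-forces tiny instances, whereas the study above relies on large instances and a dedicated solver.

```python
# Toy Monte Carlo estimate of P(satisfiable) for random 3-SAT at clause
# density alpha; brute force works only for small n, unlike the study above.
import itertools, random

def random_3sat(n, alpha, rng):
    m = int(alpha * n)  # number of clauses at density alpha
    return [tuple(rng.choice([v + 1, -(v + 1)])
                  for v in rng.sample(range(n), 3)) for _ in range(m)]

def satisfiable(clauses, n):
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

rng = random.Random(0)
n, trials = 10, 100
for alpha in (3.8, 4.26, 4.7):
    sat = sum(satisfiable(random_3sat(n, alpha, rng), n) for _ in range(trials))
    print(f"alpha={alpha}: P(sat) ~ {sat / trials:.2f}")
```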
An improved Krylov eigenvalue strategy using the FEAST algorithm with inexact system solves | The FEAST eigenvalue algorithm is a subspace iteration algorithm that uses
contour integration in the complex plane to obtain the eigenvectors of a matrix
for the eigenvalues that are located in any user-defined search interval. By
computing small numbers of eigenvalues in specific regions of the complex
plane, FEAST is able to naturally parallelize the solution of eigenvalue
problems by solving for multiple eigenpairs simultaneously. The traditional
FEAST algorithm is implemented by directly solving collections of shifted
linear systems of equations; in this paper, we describe a variation of the
FEAST algorithm that uses iterative Krylov subspace algorithms for solving the
shifted linear systems inexactly. We show that this iterative FEAST algorithm
(which we call IFEAST) is mathematically equivalent to a block Krylov subspace
method for solving eigenvalue problems. By using Krylov subspaces indirectly
through solving shifted linear systems, rather than directly for projecting the
eigenvalue problem, IFEAST is able to solve eigenvalue problems using very
high-dimensional Krylov subspaces, without ever having to store a basis for
those subspaces. IFEAST thus combines the flexibility and power of Krylov
methods, requiring only matrix-vector multiplication for solving eigenvalue
problems, with the natural parallelism of the traditional FEAST algorithm. We
discuss the relationship between IFEAST and more traditional Krylov methods,
and provide numerical examples illustrating its behavior.
| 1 | 0 | 0 | 0 | 0 | 0 |
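A dense-matrix sketch of the core FEAST step the abstract builds on: approximate the spectral projector by contour quadrature of the resolvent, then extract eigenpairs by Rayleigh-Ritz. This is a minimal illustration only; the traditional and IFEAST variants differ in how the shifted systems are solved.

```python
# Minimal dense sketch of one FEAST subspace iteration (not the IFEAST code):
# approximate the spectral projector by contour quadrature, then Rayleigh-Ritz.
import numpy as np

def feast_step(A, Y, emin, emax, nquad=8):
    n, m = Y.shape
    c, r = (emin + emax) / 2, (emax - emin) / 2
    Q = np.zeros((n, m))
    for k in range(nquad):                          # midpoint rule, upper arc
        theta = np.pi * (k + 0.5) / nquad
        z = c + r * np.exp(1j * theta)
        S = np.linalg.solve(z * np.eye(n) - A, Y)   # shifted linear systems
        Q += np.real(r * np.exp(1j * theta) * S) / nquad
    Q, _ = np.linalg.qr(Q)                          # orthonormal subspace basis
    evals, V = np.linalg.eigh(Q.T @ A @ Q)          # Rayleigh-Ritz extraction
    return evals, Q @ V

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)); A = (A + A.T) / 2
evals, _ = feast_step(A, rng.standard_normal((50, 8)), -1.0, 1.0)
print(evals)  # Ritz values; those inside [-1, 1] converge under iteration
```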
Planar magnetic structures in coronal mass ejection-driven sheath regions | Planar magnetic structures (PMSs) are periods in the solar wind during which
interplanetary magnetic field vectors are nearly parallel to a single plane.
One of the specific regions where PMSs have been reported is coronal mass
ejection (CME)-driven sheaths. We use here an automated method to identify PMSs
in 95 CME sheath regions observed in-situ by the Wind and ACE spacecraft
between 1997 and 2015. The occurrence and location of the PMSs are related to
various shock, sheath and CME properties. We find that PMSs are ubiquitous in
CME sheaths; 85% of the studied sheath regions had PMSs, with a mean duration
of 6.0 hours. In about one-third of the cases the magnetic field vectors
followed a single PMS plane that covered a significant part (at least 67%) of
the sheath region. Our analysis gives strong support for two suggested PMS
formation mechanisms: the amplification and alignment of solar wind
discontinuities near the CME-driven shock and the draping of the magnetic field
lines around the CME ejecta. For example, we found that the shock and PMS plane
normals generally coincided for the events where the PMSs occurred near the
shock (68% of the PMS plane normals near the shock were separated by less than
20° from the shock normal), while deviations were clearly larger when PMSs
occurred close to the ejecta leading edge. In addition, PMSs near the shock
were generally associated with lower upstream plasma beta than the cases where
PMSs occurred near the leading edge of the CME. We also demonstrate that the
planar parts of the sheath contain a higher amount of strongly southward
magnetic field than the non-planar parts, suggesting that planar sheaths are
more likely to drive magnetospheric activity.
| 0 | 1 | 0 | 0 | 0 | 0 |
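A standard way to quantify the planarity the abstract describes is minimum-variance analysis of the field vectors; the sketch below (not the authors' automated method) estimates the plane normal from the smallest-eigenvalue direction of the magnetic variance matrix.

```python
# Hedged sketch of minimum-variance analysis (MVA) for PMS detection: the
# eigenvector of the smallest eigenvalue of the magnetic variance matrix
# estimates the PMS plane normal; lambda_min << lambda_mid signals planarity.
import numpy as np

def mva(B):
    """B: (N, 3) array of magnetic field vectors."""
    M = np.cov(B.T)                    # 3x3 magnetic variance matrix
    evals, evecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    normal = evecs[:, 0]               # minimum-variance direction
    planarity = evals[0] / evals[1]    # small ratio -> nearly planar field
    return normal, planarity

rng = np.random.default_rng(1)
# Synthetic field: vectors confined near the plane with normal (0, 0, 1).
B = rng.standard_normal((500, 3)) * np.array([1.0, 1.0, 0.05])
normal, planarity = mva(B)
print(normal, planarity)  # normal ~ (0, 0, +-1), planarity << 1
```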
Transportation analysis of denoising autoencoders: a novel method for analyzing deep neural networks | The feature map obtained from the denoising autoencoder (DAE) is investigated
by determining transportation dynamics of the DAE, which is a cornerstone for
deep learning. Despite the rapid development in its application, deep neural
networks remain analytically unexplained, because the feature maps are nested
and parameters are not faithful. In this paper, we address the problem of the
nested composition of parameters by regarding the feature map as a
transport map. Even when a feature map has different dimensions between input
and output, we can regard it as a transportation map by considering that both
the input and output spaces are embedded in a common high-dimensional space. In
addition, the trajectory is a geometric object and thus, is independent of
parameterization. In this manner, transportation can be regarded as a universal
character of deep neural networks. By determining and analyzing the
transportation dynamics, we can understand the behavior of a deep neural
network. In this paper, we investigate a fundamental case of deep neural
networks: the DAE. We derive the transport map of the DAE, and reveal that the
infinitely deep DAE transports mass to decrease a certain quantity, such as
entropy, of the data distribution. These results, though analytically simple,
shed light on the correspondence between deep neural networks and the
Wasserstein gradient flows.
| 1 | 0 | 0 | 1 | 0 | 0 |
Weighted Contrastive Divergence | Learning algorithms for energy based Boltzmann architectures that rely on
gradient descent are in general computationally prohibitive, typically due to
the exponential number of terms involved in computing the partition function.
Thus, one has to resort to approximation schemes for the evaluation of
the gradient. This is the case of Restricted Boltzmann Machines (RBM) and its
learning algorithm Contrastive Divergence (CD). It is well-known that CD has a
number of shortcomings, and its approximation to the gradient has several
drawbacks. Overcoming these defects has been the basis of much research and new
algorithms have been devised, such as persistent CD. In this manuscript we
propose a new algorithm that we call Weighted CD (WCD), built from small
modifications of the negative phase in standard CD. However small these
modifications may be, experimental work reported in this paper suggests that WCD
provides a significant improvement over standard CD and persistent CD at a
small additional computational cost.
| 0 | 0 | 0 | 1 | 0 | 0 |
Small-Scale Challenges to the $Λ$CDM Paradigm | The dark energy plus cold dark matter ($\Lambda$CDM) cosmological model has
been a demonstrably successful framework for predicting and explaining the
large-scale structure of the Universe and its evolution with time. Yet on length
scales smaller than $\sim 1$ Mpc and mass scales smaller than $\sim 10^{11}
M_{\odot}$, the theory faces a number of challenges. For example, the observed
cores of many dark-matter dominated galaxies are both less dense and less cuspy
than naively predicted in $\Lambda$CDM. The number of small galaxies and dwarf
satellites in the Local Group is also far below the predicted count of low-mass
dark matter halos and subhalos within similar volumes. These issues underlie
the most well-documented problems with $\Lambda$CDM: Cusp/Core, Missing
Satellites, and Too-Big-to-Fail. The key question is whether a better
understanding of baryon physics, dark matter physics, or both will be required
to meet these challenges. Other anomalies, including the observed planar and
orbital configurations of Local Group satellites and the tight baryonic/dark
matter scaling relations obeyed by the galaxy population, have been less
thoroughly explored in the context of $\Lambda$CDM theory. Future surveys to
discover faint, distant dwarf galaxies and to precisely measure their masses
and density structure hold promising avenues for testing possible solutions to
the small-scale challenges going forward. Observational programs to constrain
or discover and characterize the number of truly dark low-mass halos are among
the most important, and achievable, goals in this field over the next decade.
These efforts will either further verify the $\Lambda$CDM paradigm or demand a
substantial revision in our understanding of the nature of dark matter.
| 0 | 1 | 0 | 0 | 0 | 0 |
Alternating minimization, scaling algorithms, and the null-cone problem from invariant theory | Alternating minimization heuristics seek to solve a (difficult) global
optimization task through iteratively solving a sequence of (much easier) local
optimization tasks on different parts (or blocks) of the input parameters.
While popular and widely applicable, very few examples of this heuristic are
rigorously shown to converge to optimality, and even fewer to do so
efficiently.
In this paper we present a general framework which is amenable to rigorous
analysis, and demonstrate its applicability. Its main feature is that the local
optimization domains are each a group of invertible matrices, together
naturally acting on tensors, and the optimization problem is minimizing the
norm of an input tensor under this joint action. The solution of this
optimization problem captures a basic problem in Invariant Theory, called the
null-cone problem.
This algebraic framework turns out to encompass natural computational
problems in combinatorial optimization, algebra, analysis, quantum information
theory, and geometric complexity theory. It includes and extends to high
dimensions the recent advances on (2-dimensional) operator scaling.
Our main result is a fully polynomial time approximation scheme for this
general problem, which may be viewed as a multi-dimensional scaling algorithm.
This directly leads to progress on some of the problems in the areas above, and
a unified view of others. We explain how faster convergence of an algorithm for
the same problem will allow resolving central open problems.
Our main techniques come from Invariant Theory, and include its rich
non-commutative duality theory, and new bounds on the bitsizes of coefficients
of invariant polynomials. They enrich the algorithmic toolbox of this very
computational field of mathematics, and are directly related to some challenges
in geometric complexity theory (GCT).
| 1 | 0 | 0 | 0 | 0 | 0 |
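The simplest member of this alternating-scaling family is classical matrix (Sinkhorn) scaling, a commutative special case of the operator scaling the abstract extends; a minimal sketch:

```python
# Illustrative instance of alternating minimization: Sinkhorn matrix scaling,
# the simplest (commutative) relative of the operator/tensor scaling above.
import numpy as np

def sinkhorn(A, iters=200):
    """Scale a positive matrix toward doubly stochastic by alternating
    row and column normalizations (each step is an easy local problem)."""
    A = A.copy()
    for _ in range(iters):
        A /= A.sum(axis=1, keepdims=True)   # fix row sums
        A /= A.sum(axis=0, keepdims=True)   # fix column sums
    return A

rng = np.random.default_rng(0)
S = sinkhorn(rng.random((5, 5)) + 0.1)
print(S.sum(axis=0), S.sum(axis=1))  # both ~ all-ones vectors
```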
Positivity of denominator vectors of cluster algebras | In this paper, we prove that positivity of denominator vectors holds for any
skew-symmetric cluster algebra.
| 0 | 0 | 1 | 0 | 0 | 0 |
Primordial Black Holes and Slow-Roll Violation | For primordial black holes (PBH) to be the dark matter in single-field
inflation, the slow-roll approximation must be violated by at least ${\cal
O}(1)$ in order to enhance the curvature power spectrum within the required
number of efolds between CMB scales and PBH mass scales. Power spectrum
predictions which rely on the inflaton remaining on the slow-roll attractor can
fail dramatically, leading to qualitatively incorrect conclusions in models like
an inflection potential, and misestimate the mass scale in a running mass model.
We show that an optimized temporal evaluation of the Hubble slow-roll
parameters to second order remains a good description for a wide range of PBH
formation models where up to a $10^7$ amplification of power occurs in $10$
efolds or more.
| 0 | 1 | 0 | 0 | 0 | 0 |
Using Optimal Ratio Mask as Training Target for Supervised Speech Separation | Supervised speech separation uses supervised learning algorithms to learn a
mapping from an input noisy signal to an output target. With the fast
development of deep learning, supervised separation has become the most
important direction in the speech separation area in recent years. For a
supervised algorithm, the training target has a significant impact on the
performance. The ideal ratio mask is a commonly used training target, which can
improve the speech intelligibility and quality of the separated speech.
However, it does not take into account the correlation between noise and clean
speech. In this paper, we use the optimal ratio mask as the training target of
the deep neural network (DNN) for speech separation. The experiments are
carried out under various noise environments and signal to noise ratio (SNR)
conditions. The results show that the optimal ratio mask outperforms other
training targets in general.
| 1 | 0 | 0 | 0 | 0 | 0 |
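Per time-frequency bin, the optimal ratio mask is the least-squares real mask $m$ minimizing $|mY-S|^2$ for the mixture $Y=S+N$, giving $m=(|S|^2+\mathrm{Re}(SN^*))/(|S|^2+|N|^2+2\,\mathrm{Re}(SN^*))$; it reduces to the ideal ratio mask when speech and noise are uncorrelated. A sketch with randomly generated stand-in STFTs:

```python
# Sketch of the optimal ratio mask (ORM) versus the ideal ratio mask (IRM),
# computed from complex STFTs S (clean) and Nn (noise) of a mixture Y = S + Nn.
# The ORM is the least-squares real mask m minimizing |m*Y - S|^2 per bin.
import numpy as np

rng = np.random.default_rng(0)
shape = (257, 100)                       # (freq bins, frames), illustrative
S = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
Nn = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

cross = (S * np.conj(Nn)).real           # speech-noise correlation term
orm = (np.abs(S)**2 + cross) / (np.abs(S)**2 + np.abs(Nn)**2 + 2 * cross)
irm = np.sqrt(np.abs(S)**2 / (np.abs(S)**2 + np.abs(Nn)**2))

print(orm.mean(), irm.mean())            # ORM deviates from IRM when cross != 0
```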
Volumetric parametrization from a level set boundary representation with PHT Splines | A challenge in isogeometric analysis is constructing analysis-suitable
volumetric meshes which can accurately represent the geometry of a given
physical domain. In this paper, we propose a method to derive a spline-based
representation of a domain of interest from voxel-based data. We show an
efficient way to obtain a boundary representation of the domain by a level-set
function. Then, we use the geometric information from the boundary (the normal
vectors and curvature) to construct a matching $C^1$ representation with
hierarchical cubic splines. The approximation is done by a single template and
linear transformations (scaling, translations and rotations) without the need
for solving an optimization problem. We illustrate our method with several
examples in two and three dimensions, and show good performance on some
standard benchmark test problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the Compressive Power of Deep Rectifier Networks for High Resolution Representation of Class Boundaries | This paper provides a theoretical justification of the superior
classification performance of deep rectifier networks over shallow rectifier
networks from the geometrical perspective of piecewise linear (PWL) classifier
boundaries. We show that, for a given threshold on the approximation error, the
required number of boundary facets to approximate a general smooth boundary
grows exponentially with the dimension of the data, and thus the number of
boundary facets, referred to as boundary resolution, of a PWL classifier is an
important quality measure that can be used to estimate a lower bound on the
classification errors. However, learning naively an exponentially large number
of boundary facets requires the determination of an exponentially large number
of parameters and also requires an exponentially large number of training
patterns. To overcome this issue of "curse of dimensionality", compressive
representations of high resolution classifier boundaries are required. To show
the superior compressive power of deep rectifier networks over shallow
rectifier networks, we prove that the maximum boundary resolution of a single
hidden layer rectifier network classifier grows exponentially with the number
of units when this number is smaller than the dimension of the patterns. When
the number of units is larger than the dimension of the patterns, the growth
rate is reduced to a polynomial order. Consequently, the capacity of generating
a high resolution boundary will increase if the same large number of units are
arranged in multiple layers instead of a single hidden layer. Taking high
dimensional spherical boundaries as examples, we show how deep rectifier
networks can utilize geometric symmetries to approximate a boundary with the
same accuracy but with significantly fewer parameters than single
hidden layer nets.
| 1 | 0 | 0 | 1 | 0 | 0 |
Absolute spectroscopy near 7.8 μm with a comb-locked extended-cavity quantum-cascade-laser | We report the first experimental demonstration of frequency-locking of an
extended-cavity quantum-cascade-laser (EC-QCL) to a near-infrared frequency
comb. The locking scheme is applied to carry out absolute spectroscopy of N2O
lines near 7.87 μm with an accuracy of ~60 kHz. Thanks to a single mode
operation over more than 100 cm^{-1}, the comb-locked EC-QCL shows great
potential for the accurate retrieval of line center frequencies in a spectral
region that is currently outside the reach of broadly tunable cw sources,
either based on difference frequency generation or optical parametric
oscillation. The approach described here can be straightforwardly extended up
to 12 μm, which is the current wavelength limit for commercial cw EC-QCLs.
| 0 | 1 | 0 | 0 | 0 | 0 |
Some remarks on upper bounds for Weierstrass primary factors and their application in spectral theory | We study upper bounds on Weierstrass primary factors and discuss their
application in spectral theory. One of the main aims of this note is to draw
attention to works of Blumenthal and Denjoy from 1910, but we also provide some
new results and some numerical computations of our own.
| 0 | 0 | 1 | 0 | 0 | 0 |
Topology of polyhedral products over simplicial multiwedges | We prove that certain conditions on multigraded Betti numbers of a simplicial
complex $K$ imply existence of a higher Massey product in cohomology of a
moment-angle-complex $\mathcal Z_K$, which contains a unique element (a
strictly defined product). Using the simplicial multiwedge construction, we
find a family $\mathcal{F}$ of polyhedral products being smooth closed
manifolds such that for any $l,r\geq 2$ there exists an $l$-connected manifold
$M\in\mathcal F$ with a nontrivial strictly defined $r$-fold Massey product in
$H^{*}(M)$. As an application to homological algebra, we determine a wide class
of triangulated spheres $K$ such that a nontrivial higher Massey product of any
order may exist in Koszul homology of their Stanley--Reisner rings. As an
application to rational homotopy theory, we establish a combinatorial criterion
for a simple graph $\Gamma$ to provide a (rationally) formal generalized
moment-angle manifold $\mathcal Z_{P}^{J}=(D^{2j_{i}},S^{2j_{i}-1})^{\partial
P^*}$, $J=(j_{1},\ldots,j_m)$ over a graph-associahedron $P=P_{\Gamma}$ and
compute all the diffeomorphism types of formal moment-angle manifolds over
graph-associahedra.
| 0 | 0 | 1 | 0 | 0 | 0 |
Improved Representation Learning for Predicting Commonsense Ontologies | Recent work in learning ontologies (hierarchical and partially-ordered
structures) has leveraged the intrinsic geometry of spaces of learned
representations to make predictions that automatically obey complex structural
constraints. We explore two extensions of one such model, the order-embedding
model for hierarchical relation learning, with an aim towards improved
performance on text data for commonsense knowledge representation. Our first
model jointly learns ordering relations and non-hierarchical knowledge in the
form of raw text. Our second extension exploits the partial order structure of
the training data to find long-distance triplet constraints among embeddings
which are poorly enforced by the pairwise training procedure. We find that both
incorporating free text and augmenting training constraints improve over the
original order-embedding model and other strong baselines.
| 1 | 0 | 0 | 1 | 0 | 0 |
Mesoporous Silica as a Carrier for Amorphous Solid Dispersion | In the past decade, the discovery of active pharmaceutical substances with
high therapeutic value but poor aqueous solubility has increased, thus making
it challenging to formulate these compounds as oral dosage forms. The
bioavailability of these drugs can be increased by formulating these drugs as
an amorphous drug delivery system. Use of porous media like mesoporous silica
has been investigated as a potential means to increase the solubility of poorly
soluble drugs and to stabilize the amorphous drug delivery system. These
materials have nanosized capillaries and a large surface area, which enable
them to accommodate high drug loading and promote controlled and
fast release. Therefore, mesoporous silica has been used as a carrier in the
solid dispersion to form an amorphous solid dispersion (ASD). Mesoporous silica
is also being used as an adsorbent in a conventional solid dispersion, which
has many useful aspects. This review focuses on the use of mesoporous silica in
ASD as potential means to increase the dissolution rate and to provide or
increase the stability of the ASD. First, an overview of mesoporous silica and
its classification is given. Subsequently, methods of drug incorporation,
the stability of the dispersion, and related topics are discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Probability Monad as the Colimit of Finite Powers | We define and study a probability monad on the category of complete metric
spaces and short maps. It assigns to each space the space of Radon probability
measures on it with finite first moment, equipped with the
Kantorovich-Wasserstein distance. This monad is analogous to the Giry monad on
the category of Polish spaces, and it extends a construction due to van Breugel
for compact and for 1-bounded complete metric spaces.
We prove that this Kantorovich monad arises from a colimit construction on
finite powers, which formalizes the intuition that probability measures are
limits of finite samples. The proof relies on a criterion for when an ordinary
left Kan extension of lax monoidal functors is a monoidal Kan extension. The
colimit characterization allows the development of integration theory and the
treatment of measures on spaces of measures, without measure theory.
We also show that the category of algebras of the Kantorovich monad is
equivalent to the category of closed convex subsets of Banach spaces with short
affine maps as morphisms.
| 1 | 0 | 0 | 0 | 0 | 0 |
Robust Optimal Design of Energy Efficient Series Elastic Actuators: Application to a Powered Prosthetic Ankle | Design of robotic systems that safely and efficiently operate in uncertain
operational conditions, such as rehabilitation and physical assistance robots,
remains an important challenge in the field. Current methods for the design of
energy efficient series elastic actuators use an optimization formulation that
typically assumes known operational conditions. This approach could lead to
actuators that cannot perform in uncertain environments because elongation,
speed, or torque requirements may be beyond actuator specifications when the
operation deviates from its nominal conditions. Addressing this gap, we propose
a convex optimization formulation to design the stiffness of series elastic
actuators to minimize energy consumption and satisfy actuator constraints
despite uncertainty due to manufacturing of the spring, unmodeled dynamics,
efficiency of the transmission, and the kinematics and kinetics of the load. In
our formulation, we express energy consumption as a scalar convex-quadratic
function of compliance. In the unconstrained case, this quadratic equation
provides an analytical solution to the optimal value of stiffness that
minimizes energy consumption for arbitrary periodic reference trajectories. As
actuator constraints, we consider peak motor torque, peak motor velocity,
limitations due to the speed-torque relationship of DC motors, and peak
elongation of the spring. As a simulation case study, we apply our formulation
to the robust design of a series elastic actuator for a powered prosthetic
ankle. Our simulation results indicate that a small trade-off between energy
efficiency and robustness is justified to design actuators that can operate
with uncertainty.
| 1 | 0 | 0 | 0 | 0 | 0 |
Subwavelength phononic bandgap opening in bubbly media | The aim of this paper is to show both analytically and numerically the
existence of a subwavelength phononic bandgap in bubble phononic crystals. The
key is an original formula for the quasi-periodic Minnaert resonance
frequencies of an arbitrarily shaped bubble. The main findings in this paper
are illustrated with a variety of numerical experiments.
| 0 | 0 | 1 | 0 | 0 | 0 |
Hybrid integration of solid-state quantum emitters on a silicon photonic chip | Scalable quantum photonic systems require efficient single photon sources
coupled to integrated photonic devices. Solid-state quantum emitters can
generate single photons with high efficiency, while silicon photonic circuits
can manipulate them in an integrated device structure. Combining these two
material platforms could, therefore, significantly increase the complexity of
integrated quantum photonic devices. Here, we demonstrate hybrid integration of
solid-state quantum emitters on a silicon photonic device. We develop a
pick-and-place technique that can position epitaxially grown InAs/InP quantum
dots emitting at telecom wavelengths on a silicon photonic chip
deterministically with nanoscale precision. We employ an adiabatic tapering
approach to transfer the emission from the quantum dots to the waveguide with
high efficiency. We also incorporate an on-chip silicon-photonic beamsplitter
to perform a Hanbury-Brown and Twiss measurement. Our approach could enable
integration of pre-characterized III-V quantum photonic devices into
large-scale photonic structures to enable complex devices composed of many
emitters and photons.
| 0 | 1 | 0 | 0 | 0 | 0 |
Perceive Your Users in Depth: Learning Universal User Representations from Multiple E-commerce Tasks | Tasks such as search and recommendation have become increasingly important
for E-commerce to deal with the information overload problem. To meet the
diverse needs of different users, personalization plays an important role. In
many large portals such as Taobao and Amazon, there are many different types
of search and recommendation tasks operating simultaneously for
personalization. However, most current techniques address each task
separately. This is suboptimal, as no information about users is shared
across different tasks. In this work, we propose to learn universal user
representations across multiple tasks for more effective personalization. In
particular, user behavior sequences (e.g., click, bookmark or purchase of
products) are modeled by an LSTM and an attention mechanism, integrating all
the corresponding content, behavior and temporal information. User
representations are shared and learned in an end-to-end setting across
multiple tasks. Benefiting from better information utilization across
multiple tasks, the user representations are more effective in reflecting
user interests and are more general, so they can be transferred to new tasks.
We refer to this work as the Deep User Perception Network (DUPN) and conduct
an extensive set of offline and online experiments. Across all five tested
tasks, DUPN consistently achieves better results by giving more effective
user representations. Moreover, we deploy DUPN in large-scale operational
tasks in Taobao. Detailed implementations, e.g., incremental model updating,
are also provided to address practical issues in real-world applications.
| 0 | 0 | 0 | 1 | 0 | 0 |
Phase-diagram and dynamics of Rydberg-dressed fermions in two-dimensions | We investigate the ground-state properties and the collective modes of a
two-dimensional two-component Rydberg-dressed Fermi liquid in the
dipole-blockade regime. We find instability of the homogeneous system toward
phase separated and density ordered phases, using the Hartree-Fock and
random-phase approximations, respectively. The spectral weight of collective
density oscillations in the homogenous phase also signals the emergence of
density-wave instability. We examine the effect of the exchange hole on the
density-wave instability and on the collective mode dispersion using the
Hubbard local-field factor.
| 0 | 1 | 0 | 0 | 0 | 0 |
Ambient noise correlation-based imaging with moving sensors | Waves can be used to probe and image an unknown medium. Passive imaging uses
ambient noise sources to illuminate the medium. This paper considers passive
imaging with moving sensors. The motivation is to generate large synthetic
apertures, which should result in enhanced resolution. However, Doppler effects
and lack of reciprocity significantly affect the imaging process. This paper
discusses the consequences in terms of resolution and it shows how to design
appropriate imaging functions depending on the sensor trajectory and velocity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Anderson localization in the Non-Hermitian Aubry-André-Harper model with physical gain and loss | We investigate the Anderson localization in non-Hermitian
Aubry-André-Harper (AAH) models with imaginary potentials added to lattice
sites to represent the physical gain and loss during the interaction
between the system and the environment. By checking the mean inverse participation
ratio (MIPR) of the system, we find that different configurations of physical
gain and loss have very different impacts on the localization phase transition
in the system. In the case with balanced physical gain and loss added in an
alternate way to the lattice sites, the critical region (in the case with
p-wave superconducting pairing) and the critical value (both in the situations
with and without p-wave pairing) for the Anderson localization phase transition
will be significantly reduced, which implies an enhancement of the localization
process. However, if the system is divided into two parts with one of them
coupled to physical gain and the other coupled to the corresponding physical
loss, the transition process will be impacted only in a very mild way. Besides,
we also discuss the situations with imbalanced physical gain and loss and find
that the existence of random imaginary potentials in the system will also
affect the localization process while constant imaginary potentials will not.
| 0 | 1 | 0 | 0 | 0 | 0 |
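A sketch of the MIPR diagnostic for the alternating gain/loss configuration without p-wave pairing (parameter values illustrative, not the paper's):

```python
# Sketch (no p-wave pairing): mean inverse participation ratio (MIPR) of a
# non-Hermitian AAH chain with staggered gain/loss +-i*gamma on lattice sites.
import numpy as np

def mipr(L=233, lam=1.5, gamma=0.1, t=1.0, beta=(np.sqrt(5) - 1) / 2):
    n = np.arange(L)
    onsite = lam * np.cos(2 * np.pi * beta * n) + 1j * gamma * (-1)**n
    H = np.diag(onsite) + np.diag(-t * np.ones(L - 1), 1) \
                        + np.diag(-t * np.ones(L - 1), -1)
    _, psi = np.linalg.eig(H)              # non-Hermitian -> general eig
    prob = np.abs(psi)**2
    prob /= prob.sum(axis=0)               # normalize each eigenstate
    ipr = (prob**2).sum(axis=0)            # IPR per eigenstate
    return ipr.mean()                      # ~1/L extended, O(1) localized

for lam in (0.5, 2.0, 3.5):                # Hermitian transition sits at lam=2t
    print(lam, mipr(lam=lam))
```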
Towards de Sitter from 10D | Using a 10D lift of non-perturbative volume stabilization in type IIB string
theory we study the limitations for obtaining de Sitter vacua. Based on this we
find that the simplest KKLT vacua with a single Kahler modulus stabilized by a
gaugino condensate cannot be uplifted to de Sitter. Rather, the uplift flattens
out due to stronger backreaction on the volume modulus than has previously been
anticipated, resulting in vacua which are meta-stable and SUSY breaking, but
that are always AdS. However, we also show that setups such as racetrack
stabilization can avoid this issue. In these models it is possible to obtain
supersymmetric AdS vacua with a cosmological constant that can be tuned to zero
while retaining finite moduli stabilization. In this regime, it seems that de
Sitter uplifts are possible with negligible backreaction on the internal
volume. We exhibit this behavior also from the 10D perspective.
| 0 | 1 | 0 | 0 | 0 | 0 |
Incorporating genuine prior information about between-study heterogeneity in random effects pairwise and network meta-analyses | Background: Pairwise and network meta-analyses using fixed effect and random
effects models are commonly applied to synthesise evidence from randomised
controlled trials. The models differ in their assumptions and the
interpretation of the results. The model choice depends on the objective of the
analysis and knowledge of the included studies. Fixed effect models are often
used because there are too few studies with which to estimate the between-study
standard deviation from the data alone. Objectives: The aim is to propose a
framework for eliciting an informative prior distribution for the between-study
standard deviation in a Bayesian random effects meta-analysis model to
genuinely represent heterogeneity when data are sparse. Methods: We developed
an elicitation method using external information such as empirical evidence and
experts' beliefs on the 'range' of treatment effects in order to infer the
prior distribution for the between-study standard deviation. We also developed
the method to be implemented in R. Results: The three-stage elicitation
approach allows uncertainty to be represented by a genuine prior distribution
to avoid making misleading inferences. It is flexible to what judgments an
expert can provide, and is applicable to all types of outcome measure for which
a treatment effect can be constructed on an additive scale. Conclusions: The
choice between using a fixed effect or random effects meta-analysis model
depends on the inferences required and not on the number of available studies.
Our elicitation framework captures external evidence about heterogeneity and
overcomes the often implausible assumption that studies are estimating the same
treatment effect, thereby improving the quality of inferences in decision
making.
| 0 | 0 | 0 | 1 | 0 | 0 |
Wasserstein Distributional Robustness and Regularization in Statistical Learning | A central question in statistical learning is to design algorithms that not
only perform well on training data, but also generalize to new and unseen data.
In this paper, we tackle this question by formulating a distributionally robust
stochastic optimization (DRSO) problem, which seeks a solution that minimizes
the worst-case expected loss over a family of distributions that are close to
the empirical distribution in Wasserstein distances. We establish a connection
between such Wasserstein DRSO and regularization. More precisely, we identify a
broad class of loss functions, for which the Wasserstein DRSO is asymptotically
equivalent to a regularization problem with a gradient-norm penalty. This
relation provides new interpretations for problems involving regularization,
including a great number of statistical learning problems and discrete choice
models (e.g. multinomial logit). The connection suggests a principled way to
regularize high-dimensional, non-convex problems. This is demonstrated through
the training of Wasserstein generative adversarial networks in deep learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
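The gradient-norm equivalence suggests a simple training recipe: add a penalty on the input-gradient norm of the loss, weighted by the Wasserstein radius. A hedged PyTorch sketch (the paper's precise penalty may differ in norm and scaling):

```python
# Sketch of the gradient-norm regularization suggested by the Wasserstein-DRSO
# equivalence: penalize the input-gradient norm of the loss (rho = radius).
import torch

model = torch.nn.Linear(10, 1)
x = torch.randn(32, 10, requires_grad=True)
y = torch.randn(32, 1)

loss = torch.nn.functional.mse_loss(model(x), y)
grad_x, = torch.autograd.grad(loss, x, create_graph=True)
penalty = grad_x.flatten(1).norm(dim=1).mean()   # average input-gradient norm

rho = 0.1                                        # Wasserstein radius (weight)
robust_loss = loss + rho * penalty
robust_loss.backward()                           # gradients for model weights
print(float(loss), float(penalty))
```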
InfiniteBoost: building infinite ensembles with gradient descent | In machine learning, ensemble methods have demonstrated high accuracy on a
variety of problems in different areas. Two notable ensemble methods widely
used in practice are gradient boosting and random forests. In this paper we
present InfiniteBoost - a novel algorithm, which combines important properties
of these two approaches. The algorithm constructs the ensemble of trees for
which two properties hold: trees of the ensemble incorporate the mistakes made
by others; at the same time, the ensemble can contain an infinite number of
trees without the over-fitting effect. The proposed algorithm is evaluated on
the regression, classification, and ranking tasks using large scale, publicly
available datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
On-demand microwave generator of shaped single photons | We demonstrate the full functionality of a circuit that generates single
microwave photons on demand, with a wave packet that can be modulated with a
near-arbitrary shape. We achieve such a high tunability by coupling a
superconducting qubit near the end of a semi-infinite transmission line. A dc
superconducting quantum interference device shunts the line to ground and is
employed to modify the spatial dependence of the electromagnetic mode structure
in the transmission line. This control allows us to couple and decouple the
qubit from the line, shaping its emission rate on fast time scales. Our
decoupling scheme is applicable to all types of superconducting qubits and
other solid-state systems and can be generalized to multiple qubits as well as
to resonators.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fast and Stable Pascal Matrix Algorithms | In this paper, we derive a family of fast and stable algorithms for
multiplying and inverting $n \times n$ Pascal matrices that run in $O(n \log^2
n)$ time and are closely related to De Casteljau's algorithm for Bézier curve
evaluation. These algorithms use a recursive factorization of the triangular
Pascal matrices and improve upon the cripplingly unstable $O(n \log n)$ fast
Fourier transform-based algorithms which involve a Toeplitz matrix
factorization. We conduct numerical experiments which establish the speed and
stability of our algorithm, as well as the poor performance of the Toeplitz
factorization algorithm. As an example, we show how our formulation relates to
Bézier curve evaluation.
| 1 | 0 | 0 | 0 | 0 | 0 |
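For context, the lower-triangular Pascal matrix has entries $\binom{i}{j}$ and its exact inverse only flips signs in a checkerboard pattern; the sketch below verifies this directly with dense arithmetic, whereas the paper's recursive factorization reaches $O(n \log^2 n)$.

```python
# Direct construction of the lower-triangular Pascal matrix and its exact
# inverse (signs flipped in a checkerboard); the paper's recursive
# factorization achieves the same products/inverses in O(n log^2 n).
import numpy as np
from math import comb  # comb(i, j) returns 0 when j > i

def pascal_lower(n):
    return np.array([[comb(i, j) for j in range(n)] for i in range(n)],
                    dtype=float)

def pascal_lower_inv(n):
    return np.array([[(-1)**(i + j) * comb(i, j) for j in range(n)]
                     for i in range(n)], dtype=float)

n = 8
L = pascal_lower(n)
print(np.allclose(L @ pascal_lower_inv(n), np.eye(n)))  # True
```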
Variational Continual Learning | This paper develops variational continual learning (VCL), a simple but
general framework for continual learning that fuses online variational
inference (VI) and recent advances in Monte Carlo VI for neural networks. The
framework can successfully train both deep discriminative models and deep
generative models in complex continual learning settings where existing tasks
evolve over time and entirely new tasks emerge. Experimental results show that
VCL outperforms state-of-the-art continual learning methods on a variety of
tasks, avoiding catastrophic forgetting in a fully automatic way.
| 1 | 0 | 0 | 1 | 0 | 0 |
Dynamic adaptive procedures that control the false discovery rate | In the multiple testing problem with independent tests, the classical linear
step-up procedure controls the false discovery rate (FDR) at level
$\pi_0\alpha$, where $\pi_0$ is the proportion of true null hypotheses and
$\alpha$ is the target FDR level. Adaptive procedures can improve power by
incorporating estimates of $\pi_0$, which typically rely on a tuning parameter.
Fixed adaptive procedures set their tuning parameters before seeing the data
and can be shown to control the FDR in finite samples. We develop theoretical
results for dynamic adaptive procedures whose tuning parameters are determined
by the data. We show that, if the tuning parameter is chosen according to a
left-to-right stopping time rule, the corresponding dynamic adaptive procedure
controls the FDR in finite samples. Examples include the recently proposed
right-boundary procedure and the widely used lowest-slope procedure, among
others. Simulation results show that the right-boundary procedure is more
powerful than other dynamic adaptive procedures under independence and mild
dependence conditions.
| 0 | 0 | 1 | 1 | 0 | 0 |
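For concreteness, a sketch of the classical linear step-up procedure together with a Storey-type adaptive variant whose tuning parameter $\lambda$ feeds the $\pi_0$ estimate; the dynamic procedures in the abstract choose such tuning parameters from the data:

```python
# Sketch: the linear step-up (BH) procedure and a Storey-type adaptive variant
# with tuning parameter lam used to estimate pi0 (dynamic procedures choose
# lam from the data via a stopping-time rule).
import numpy as np

def step_up(p, alpha):
    """Reject the k smallest p-values, k = max{i : p_(i) <= i*alpha/m}."""
    m = len(p)
    order = np.argsort(p)
    ok = np.nonzero(np.sort(p) <= (np.arange(1, m + 1) / m) * alpha)[0]
    k = ok[-1] + 1 if ok.size else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

def adaptive_step_up(p, alpha, lam=0.5):
    m = len(p)
    pi0 = (1 + np.sum(p > lam)) / (m * (1 - lam))   # Storey's estimator
    return step_up(p, min(alpha / pi0, 1.0))

rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(0, 0.01, 20), rng.uniform(0, 1, 180)])
print(step_up(p, 0.1).sum(), adaptive_step_up(p, 0.1).sum())
```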
Compression-Based Regularization with an Application to Multi-Task Learning | This paper investigates, from information theoretic grounds, a learning
problem based on the principle that any regularity in a given dataset can be
exploited to extract compact features from data, i.e., using fewer bits than
needed to fully describe the data itself, in order to build meaningful
representations of a relevant content (multiple labels). We begin by
introducing the noisy lossy source coding paradigm with the log-loss fidelity
criterion which provides the fundamental tradeoffs between the
\emph{cross-entropy loss} (average risk) and the information rate of the
features (model complexity). Our approach allows an information theoretic
formulation of the \emph{multi-task learning} (MTL) problem which is a
supervised learning framework in which the prediction models for several
related tasks are learned jointly from common representations to achieve better
generalization performance. Then, we present an iterative algorithm for
computing the optimal tradeoffs and its global convergence is proven provided
that some conditions hold. An important property of this algorithm is that it
provides a natural safeguard against overfitting, because it minimizes the
average risk taking into account a penalization induced by the model
complexity. Remarkably, empirical results illustrate that there exists an
optimal information rate minimizing the \emph{excess risk} which depends on the
nature and the amount of available training data. An application to
hierarchical text categorization is also investigated, extending previous
works.
| 1 | 0 | 0 | 1 | 0 | 0 |
Isomorphism between Differential and Moment Invariants under Affine Transform | Invariants are one of the central topics in science, technology and
engineering. The differential invariant is essential in understanding or
describing some important phenomena or procedures in mathematics, physics,
chemistry, biology or computer science etc. The derivation of differential
invariants is usually difficult or complicated. This paper reports a discovery
that under the affine transform, differential invariants have structures
similar to moment invariants up to a scalar function of the transform
parameters. If moment invariants are known, relative differential invariants
can be obtained by the substitution of moments by derivatives with the same
order. Whereas moment invariants can be calculated by multiple integrals, this
method provides a simple way to derive differential invariants without the need
to resolve any equation system. Since the definition of moments on different
manifolds or in different dimension of spaces is well established, differential
invariants on or in them will also be well defined. Considering that moments
have a strong background in mathematics and physics, this technique offers a
new perspective on the inner structure of invariants. Projective differential
invariants can also be found in this way with a screening process.
| 1 | 0 | 0 | 0 | 0 | 0 |
Throughput Analysis for Wavelet OFDM in Broadband Power Line Communications | Windowed orthogonal frequency-division multiplexing (OFDM) and wavelet OFDM
have been proposed as medium access techniques for broadband communications
over the power line network by the standard IEEE 1901. Windowed OFDM has been
extensively researched and employed in different fields of communication, while
wavelet OFDM, which has been recently recommended for the first time in a
standard, has received less attention. This work aims to show that wavelet
OFDM, which basically is an Extended Lapped Transform-based multicarrier
modulation (ELT-MCM), is a viable and attractive alternative for data
transmission in hostile scenarios, such as in-home PLC. To this end, we obtain
theoretical expressions for ELT-MCM of: 1) the useful signal power, 2) the
inter-symbol interference (ISI) power, 3) the inter-carrier interference (ICI)
power, and 4) the noise power at the receiver side. The system capacity and the
achievable throughput are derived from these. This study includes several
computer simulations that show that ELT-MCM is an efficient alternative to
improve data rates in PLC networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
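Given the four derived power terms, the achievable throughput follows by treating interference as Gaussian noise and summing per-subcarrier capacities; the sketch below uses placeholder power values, not the paper's expressions:

```python
# Sketch: achievable throughput from the four power terms derived in the
# paper (useful signal, ISI, ICI, noise); the values below are placeholders.
import numpy as np

def throughput(p_sig, p_isi, p_ici, p_noise, subcarrier_bw):
    """Bits/s over all subcarriers, treating interference as Gaussian noise."""
    sinr = p_sig / (p_isi + p_ici + p_noise)
    return subcarrier_bw * np.sum(np.log2(1.0 + sinr))

K = 512                                   # number of subcarriers (illustrative)
rng = np.random.default_rng(0)
p_sig = rng.uniform(0.5, 1.0, K)          # per-subcarrier useful power
p_isi = rng.uniform(0.0, 0.05, K)
p_ici = rng.uniform(0.0, 0.05, K)
p_noise = np.full(K, 0.01)
print(throughput(p_sig, p_isi, p_ici, p_noise, subcarrier_bw=24.4e3))
```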
Simple And Efficient Architecture Search for Convolutional Neural Networks | Neural networks have recently had a lot of success for many tasks. However,
neural network architectures that perform well are still typically designed
manually by experts in a cumbersome trial-and-error process. We propose a new
method to automatically search for well-performing CNN architectures based on a
simple hill climbing procedure whose operators apply network morphisms,
followed by short optimization runs by cosine annealing. Surprisingly, this
simple method yields competitive results, despite only requiring resources in
the same order of magnitude as training a single network. E.g., on CIFAR-10,
our method designs and trains networks with an error rate below 6% in only 12
hours on a single GPU; training for one day reduces this error further, to
almost 5%.
| 1 | 0 | 0 | 1 | 0 | 0 |
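The short optimization runs follow the cosine-annealing schedule $\eta(t)=\eta_{\min}+\tfrac12(\eta_{\max}-\eta_{\min})(1+\cos(\pi t/T))$; a minimal sketch with illustrative values:

```python
# Minimal sketch of the cosine-annealing learning-rate schedule used for the
# short optimization runs after each network morphism.
import math

def cosine_lr(step, total_steps, lr_max=0.05, lr_min=0.0):
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1 + math.cos(math.pi * step / total_steps))

T = 10
print([round(cosine_lr(t, T), 4) for t in range(T + 1)])  # lr_max -> lr_min
```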
Quantum estimation of detection efficiency with no-knowledge quantum feedback | We investigate how no-knowledge measurement-based feedback control can be
utilized to improve the estimation precision of the detection efficiency. For
the feedback operators that concern us, no-knowledge measurement is the optimal
way to estimate the detection efficiency. We show that higher precision can be
achieved for lower or higher detection efficiencies. It is found that
no-knowledge feedback can be used to cancel decoherence. No-knowledge feedback
with a high detection efficiency performs well in estimating the frequency and
detection efficiency parameters simultaneously, and simultaneous estimation is
better than independent estimation with the same probes.
| 1 | 0 | 0 | 0 | 0 | 0 |
Intermittent Granular Dynamics at a Seismogenic Plate Boundary | Earthquakes at seismogenic plate boundaries are a response to the
differential motions of tectonic blocks embedded within a geometrically complex
network of branching and coalescing faults. Elastic strain is accumulated at a
slow strain rate of the order of $10^{-15}$ s$^{-1}$, and released
intermittently at intervals $>100$ years, in the form of rapid (seconds to
minutes) coseismic ruptures. The development of macroscopic models of
quasi-static planar tectonic dynamics at these plate boundaries has remained
challenging due to uncertainty with regard to the spatial and kinematic
complexity of fault system behaviors. In particular, the characteristic length
scale of kinematically distinct tectonic structures is poorly constrained. Here
we analyze fluctuations in GPS recordings of interseismic velocities from the
southern California plate boundary, identifying heavy-tailed scaling behavior.
This suggests that the plate boundary can be understood as a densely packed
granular medium near the jamming transition, with a characteristic length scale
of $91 \pm 20$ km. In this picture fault and block systems may rapidly
rearrange the distribution of forces within them, driving a mixture of
transient and intermittent fault slip behaviors over tectonic time scales.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optimal continuous-time ALM for insurers: a martingale approach | We study a continuous-time asset-allocation problem for a firm in the
insurance industry that backs up the liabilities raised by the insurance
contracts with the underwriting profits and the income resulting from investing
in the financial market. Using the martingale approach and convex duality
techniques we characterize strategies that maximize expected utility from
consumption and final wealth under CRRA preferences. We present numerical
results for some distributions of claims/liabilities with policy limits.
| 0 | 0 | 0 | 0 | 0 | 1 |
Combating Fake News: A Survey on Identification and Mitigation Techniques | The proliferation of fake news on social media has opened up new directions
of research for timely identification and containment of fake news, and
mitigation of its widespread impact on public opinion. While much of the
earlier research was focused on identification of fake news based on its
contents or by exploiting users' engagements with the news on social media,
there has been a rising interest in proactive intervention strategies to
counter the spread of misinformation and its impact on society. In this survey,
we describe the modern-day problem of fake news and, in particular, highlight
the technical challenges associated with it. We discuss existing methods and
techniques applicable to both identification and mitigation, with a focus on
the significant advances in each method and their advantages and limitations.
In addition, research has often been limited by the quality of existing
datasets and their specific application contexts. To alleviate this problem, we
comprehensively compile and summarize characteristic features of available
datasets. Furthermore, we outline new directions of research to facilitate
future development of effective and interdisciplinary solutions.
| 1 | 0 | 0 | 1 | 0 | 0 |
A causal approach to analysis of censored medical costs in the presence of time-varying treatment | There has recently been a growing interest in the development of statistical
methods to compare medical costs between treatment groups. When cumulative cost
is the outcome of interest, right-censoring poses the challenge of informative
missingness due to heterogeneity in the rates of cost accumulation across
subjects. Existing approaches seeking to address the challenge of informative
cost trajectories typically rely on inverse probability weighting and target a
net "intent-to-treat" effect. However, no approaches capable of handling
time-dependent treatment and confounding in this setting have been developed to
date. A method to estimate the joint causal effect of a treatment regime on
cost would be of value to inform public policy when comparing interventions. In
this paper, we develop a nested g-computation approach to cost analysis in
order to accommodate time-dependent treatment and repeated outcome measures. We
demonstrate that our procedure is reasonably robust to departures from its
distributional assumptions and can provide unique insights into fundamental
differences in average cost across time-dependent treatment regimes.
| 0 | 0 | 0 | 1 | 0 | 0 |
Inferring the parameters of a Markov process from snapshots of the steady state | We seek to infer the parameters of an ergodic Markov process from samples
taken independently from the steady state. Our focus is on non-equilibrium
processes, where the steady state is not described by the Boltzmann measure,
but is generally unknown and hard to compute, which prevents the application of
established equilibrium inference methods. We propose a quantity we call
propagator likelihood, which takes on the role of the likelihood in equilibrium
processes. This propagator likelihood is based on fictitious transitions
between those configurations of the system which occur in the samples. The
propagator likelihood can be derived by minimising the relative entropy between
the empirical distribution and a distribution generated by propagating the
empirical distribution forward in time. Maximising the propagator likelihood
leads to an efficient reconstruction of the parameters of the underlying model
in different systems, both with discrete configurations and with continuous
configurations. We apply the method to non-equilibrium models from statistical
physics and theoretical biology, including the asymmetric simple exclusion
process (ASEP), the kinetic Ising model, and replicator dynamics.
| 0 | 1 | 0 | 1 | 0 | 0 |
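A toy two-state illustration of the propagator-likelihood idea (not the paper's estimator): score candidate dynamics by the likelihood of fictitious one-step transitions out of the empirical steady-state distribution and maximize over a parameter grid.

```python
# Toy illustration of the propagator-likelihood idea for a two-state Markov
# chain: score candidate dynamics by the likelihood of fictitious one-step
# transitions from the empirical steady-state distribution.
import numpy as np

rng = np.random.default_rng(0)
true_a, b = 0.2, 0.1                       # flip rates 0->1 and 1->0
p1 = true_a / (true_a + b)                 # steady-state P(state=1)
samples = rng.random(5000) < p1            # i.i.d. steady-state snapshots
emp = np.array([1 - samples.mean(), samples.mean()])

def propagator_likelihood(a, b=0.1):
    T = np.array([[1 - a, a], [b, 1 - b]])     # row-stochastic propagator
    propagated = emp @ T                       # push empirical law one step
    return np.sum(emp * np.log(propagated))    # cross-entropy score

grid = np.linspace(0.01, 0.9, 200)
best = grid[np.argmax([propagator_likelihood(a) for a in grid])]
print(best)  # ~0.2: the propagator leaving emp invariant scores highest
```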
What Does a TextCNN Learn? | TextCNN, the convolutional neural network for text, is a useful deep learning
algorithm for sentence classification tasks such as sentiment analysis and
question classification. However, neural networks have long been known as black
boxes because interpreting them is a challenging task. Researchers have
developed several tools to understand a CNN for image classification by deep
visualization, but research about deep TextCNNs is still insufficient. In this
paper, we are trying to understand what a TextCNN learns on two classical NLP
datasets. Our work focuses on functions of different convolutional kernels and
correlations between convolutional kernels.
| 0 | 0 | 0 | 1 | 0 | 0 |
The Wisdom of Polarized Crowds | As political polarization in the United States continues to rise, the
question of whether polarized individuals can fruitfully cooperate becomes
pressing. Although diversity of individual perspectives typically leads to
superior team performance on complex tasks, strong political perspectives have
been associated with conflict, misinformation and a reluctance to engage with
people and perspectives beyond one's echo chamber. It is unclear whether
self-selected teams of politically diverse individuals will create higher or
lower quality outcomes. In this paper, we explore the effect of team political
composition on performance through analysis of millions of edits to Wikipedia's
Political, Social Issues, and Science articles. We measure editors' political
alignments by their contributions to conservative versus liberal articles. A
survey of editors validates that those who primarily edit liberal articles
identify more strongly with the Democratic party and those who edit
conservative ones with the Republican party. Our analysis then reveals that
polarized teams---those consisting of a balanced set of politically diverse
editors---create articles of higher quality than politically homogeneous teams.
The effect appears most strongly in Wikipedia's Political articles, but is also
observed in Social Issues and even Science articles. Analysis of article "talk
pages" reveals that politically polarized teams engage in longer, more
constructive, competitive, and substantively focused but linguistically diverse
debates than political moderates. More intense use of Wikipedia policies by
politically diverse teams suggests institutional design principles to help
unleash the power of politically polarized teams.
| 1 | 0 | 0 | 1 | 0 | 0 |
Probabilistic Database Summarization for Interactive Data Exploration | We present a probabilistic approach to generate a small, query-able summary
of a dataset for interactive data exploration. Departing from traditional
summarization techniques, we use the Principle of Maximum Entropy to generate a
probabilistic representation of the data that can be used to give approximate
query answers. We develop the theoretical framework and formulation of our
probabilistic representation and show how to use it to answer queries. We then
present solving techniques and give three critical optimizations to improve
preprocessing time and query accuracy. Lastly, we experimentally evaluate our
work using a 5 GB dataset of flights within the United States and a 210 GB
dataset from an astronomy particle simulation. While our current work only
supports linear queries, we show that our technique can successfully answer
queries faster than sampling while introducing, on average, no more error than
sampling and can better distinguish between rare and nonexistent values.
| 1 | 0 | 0 | 0 | 0 | 0 |
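A toy sketch of the maximum-entropy idea behind such a summary: fit an exponential-family distribution over a small discrete domain so that it matches a few aggregate statistics, then answer a linear (count) query from the summary instead of the data. The domain, features, and query below are made-up stand-ins for the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.choice(4, size=10000, p=[0.5, 0.2, 0.2, 0.1])   # the "dataset"
domain = np.arange(4)

# Summary statistics the model must match (indicator features here; a real
# system would use coarser aggregates to keep the summary small).
feats = np.stack([domain == 0, domain < 2]).astype(float)   # (2, 4)
targets = np.array([(data == 0).mean(), (data < 2).mean()])

theta = np.zeros(2)
for _ in range(2000):                     # gradient ascent on log-likelihood
    p = np.exp(theta @ feats)
    p /= p.sum()                          # current max-entropy distribution
    theta += 0.5 * (targets - feats @ p)  # moment-matching gradient

# Approximate answer to the query "fraction of rows with value in {1, 2}".
q = ((domain == 1) | (domain == 2)).astype(float)
print("summary answer:", q @ p, " true answer:", np.isin(data, [1, 2]).mean())
```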
GP-GAN: Towards Realistic High-Resolution Image Blending | Recent advances in generative adversarial networks (GANs) have shown
promising potential in conditional image generation. However, how to generate
high-resolution images remains an open problem. In this paper, we aim at
generating high-resolution well-blended images given composited copy-and-paste
ones, i.e. realistic high-resolution image blending. To achieve this goal, we
propose Gaussian-Poisson GAN (GP-GAN), a framework that combines the strengths
of classical gradient-based approaches and GANs; to the best of our knowledge,
this is the first work to explore the capability of GANs in the
high-resolution image blending task. Particularly, we propose the
Gaussian-Poisson Equation to
formulate the high-resolution image blending problem, which is a joint
optimisation constrained by the gradient and colour information. The gradient
information is obtained with gradient filters; to generate the colour
information,
we propose Blending GAN to learn the mapping between the composited image and
the well-blended one. Compared to the alternative methods, our approach can
deliver high-resolution, realistic images with less bleeding and fewer
unpleasant artefacts. Experiments confirm that our approach achieves
state-of-the-art performance on the Transient Attributes dataset. A user study
on Amazon Mechanical Turk finds that the majority of workers are in favour of
the proposed approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
Relational Autoencoder for Feature Extraction | Feature extraction becomes increasingly important as data grows high
dimensional. Autoencoder as a neural network based feature extraction method
achieves great success in generating abstract features of high dimensional
data. However, it fails to consider the relationships between data samples,
which may affect the results obtained with the original and the learned
features. In this paper,
we propose a Relation Autoencoder model considering both data features and
their relationships. We also extend it to work with other major autoencoder
models including Sparse Autoencoder, Denoising Autoencoder and Variational
Autoencoder. The proposed relational autoencoder models are evaluated on a set
of benchmark datasets and the experimental results show that considering data
relationships can generate more robust features which achieve lower
reconstruction loss and, in turn, a lower error rate in further classification
compared to the other variants of autoencoders.
| 0 | 0 | 0 | 1 | 0 | 0 |
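A hedged PyTorch sketch of the relational idea: alongside the usual reconstruction loss, penalise the mismatch between pairwise sample relationships (here Gram matrices) in the input and in the reconstruction. The exact objective and the trade-off weight are assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(20, 8), nn.ReLU())
dec = nn.Linear(8, 20)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.randn(64, 20)                      # a batch of samples
for _ in range(200):
    x_hat = dec(enc(x))
    recon = ((x - x_hat) ** 2).mean()        # standard reconstruction loss
    # Relationship term: pairwise inner products between samples should be
    # preserved by the reconstruction (Gram-matrix matching).
    rel = ((x @ x.t() - x_hat @ x_hat.t()) ** 2).mean()
    loss = recon + 0.1 * rel                 # 0.1 is an illustrative weight
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final loss:", float(loss))
```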
Sentiment and Sarcasm Classification with Multitask Learning | Sentiment classification and sarcasm detection are both important NLP tasks.
We show that these two tasks are correlated, and present a multi-task
learning framework, based on a deep neural network, that models this
correlation to improve the performance of both tasks.
| 1 | 0 | 0 | 0 | 0 | 0 |
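A minimal sketch of such a multi-task setup: a shared sentence encoder with one classification head per task, trained on the sum of both losses so that the tasks' correlation is exploited through the shared parameters. The encoder choice, dimensions, and equal loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn

shared = nn.GRU(input_size=100, hidden_size=64, batch_first=True)
sentiment_head = nn.Linear(64, 3)    # e.g. negative / neutral / positive
sarcasm_head = nn.Linear(64, 2)      # sarcastic / not sarcastic

def forward(embedded):               # embedded: (batch, seq_len, 100)
    _, h = shared(embedded)          # h: (1, batch, 64), final hidden state
    h = h.squeeze(0)
    return sentiment_head(h), sarcasm_head(h)

ce = nn.CrossEntropyLoss()
x = torch.randn(4, 12, 100)          # pre-embedded toy batch
y_sent = torch.randint(0, 3, (4,))
y_sarc = torch.randint(0, 2, (4,))
sent_logits, sarc_logits = forward(x)
# Summing the two losses trains the shared encoder on both signals, which is
# how the correlation between the tasks is exploited.
loss = ce(sent_logits, y_sent) + ce(sarc_logits, y_sarc)
print(float(loss))
```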
Context in Neural Machine Translation: A Review of Models and Evaluations | This review paper discusses how context has been used in neural machine
translation (NMT) in the past two years (2017-2018). Starting with a brief
retrospect on the rapid evolution of NMT models, the paper then reviews studies
that evaluate NMT output from various perspectives, with emphasis on those
analyzing limitations of the translation of contextual phenomena. In a
subsequent version, the paper will then present the main methods that were
proposed to leverage context for improving translation quality, and
will distinguish methods that aim to improve the translation of specific phenomena
from those that consider a wider unstructured context.
| 1 | 0 | 0 | 0 | 0 | 0 |
Manipulation of type-I and type-II Dirac points in PdTe2 superconductor by external pressure | A pair of type-II Dirac cones in PdTe$_2$ was recently predicted by theories
and confirmed in experiments, making PdTe$_2$ the first material that possesses
both superconductivity and type-II Dirac fermions. In this work, we study the
evolution of Dirac cones in PdTe$_2$ under hydrostatic pressure using
first-principles calculations. Our results show that the pair of type-II Dirac
points disappears at 6.1 GPa. Interestingly, a new pair of type-I Dirac points
from the same two bands emerges at 4.7 GPa. Due to the distinctive band
structures compared with those of PtSe$_2$ and PtTe$_2$, the two types of Dirac
points can coexist in PdTe$_2$ under proper pressure (4.7-6.1 GPa). The
emergence of type-I Dirac cones and the disappearance of type-II Dirac ones are
attributed to the increase/decrease of the energy of the states at $\Gamma$ and
$A$ points, which have anti-bonding/bonding character of the interlayer Te-Te
atoms. On the other hand, we find that the superconductivity of PdTe$_2$
slightly decreases with pressure. The pressure-induced different types of Dirac
cones combined with superconductivity may open a promising way to investigate
the complex interactions between Dirac fermions and superconducting
quasi-particles.
| 0 | 1 | 0 | 0 | 0 | 0 |
A fast Metropolis-Hastings method for generating random correlation matrices | We propose a novel Metropolis-Hastings algorithm to sample uniformly from the
space of correlation matrices. Existing methods in the literature are based on
elaborated representations of a correlation matrix, or on complex
parametrizations of it. By contrast, our method is intuitive and simple, based on
the classical Cholesky factorization of a positive definite matrix and Markov
chain Monte Carlo theory. We perform a detailed convergence analysis of the
resulting Markov chain, and show how it benefits from fast convergence, both
theoretically and empirically. Furthermore, in numerical experiments our
algorithm is shown to be significantly faster than the current alternative
approaches, thanks to its simple yet principled approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
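For contrast with the abstract above, a simple baseline Metropolis-Hastings sampler for the uniform distribution over correlation matrices; this is not the paper's Cholesky-based construction. The state is the matrix itself, the proposal perturbs one off-diagonal entry, and, since the target is uniform on the set of valid correlation matrices, a move is accepted exactly when the proposal stays positive definite.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
R = np.eye(d)                                   # start at the identity

def is_pd(M):
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

trace = []
for step in range(20000):
    i, j = rng.choice(d, size=2, replace=False)
    prop = R.copy()
    prop[i, j] = prop[j, i] = R[i, j] + 0.1 * rng.standard_normal()
    # Uniform target density: the MH ratio is 1 whenever the proposal is a
    # valid correlation matrix, so accept iff it is positive definite.
    if is_pd(prop):
        R = prop
    if step % 10 == 0:
        trace.append(R[0, 1])

print("mean of R[0,1] over the chain:", np.mean(trace))  # ~0 by symmetry
```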
Compact Groups analysis using weak gravitational lensing | We present a weak lensing analysis of a sample of SDSS Compact Groups (CGs).
Using the measured radial density contrast profile, we derive the average
masses under the assumption of spherical symmetry, obtaining a velocity
dispersion for the Singular Isothermal Spherical model, $\sigma_V = 270 \pm 40
\rm ~km~s^{-1}$, and for the NFW model, $R_{200}=0.53\pm0.10\,h_{70}^{-1}\,\rm
Mpc$. We test three different definitions of CGs centres to identify which best
traces the true dark matter halo centre, concluding that a luminosity weighted
centre is the most suitable choice. We also study the lensing signal dependence
on CGs physical radius, group surface brightness, and morphological mixing. We
find that groups with more concentrated galaxy members show steeper mass
profiles and larger velocity dispersions. We argue that both a possible lower
fraction of interlopers and a truly steeper profile could be playing a role in
this effect. Straightforward velocity dispersion estimates from member
spectroscopy yield $\sigma_V \approx 230 \rm ~km~s^{-1}$, in agreement with our
lensing results.
| 0 | 1 | 0 | 0 | 0 | 0 |
An Applied Knowledge Framework to Study Complex Systems | The complexity of knowledge production on complex systems is well-known, but
there is still no knowledge framework that would both account for a certain
structure of knowledge production at an epistemological level and be directly
applicable to the study and management of complex systems. We set a basis for
such a framework, by first analyzing in detail a case study of the construction
of a geographical theory of complex territorial systems, through mixed methods,
namely qualitative interview analysis and quantitative citation network
analysis. We thereby inductively build a framework that considers
knowledge enterprises as perspectives, with co-evolving components within
complementary knowledge domains. We finally discuss potential applications and
developments.
| 1 | 1 | 0 | 0 | 0 | 0 |
Classifying subcategories in quotients of exact categories | We classify certain subcategories in quotients of exact categories. In
particular, we classify the triangulated and thick subcategories of an
algebraic triangulated category, i.e. the stable category of a Frobenius
category.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dealing with Rational Second Order Ordinary Differential Equations where both Darboux and Lie Find It Difficult: The $S$-function Method | Here we present a new approach to search for first order invariants (first
integrals) of rational second order ordinary differential equations. This
method is an alternative to the Darbouxian and symmetry approaches. Our
procedure can succeed in many cases where these two approaches fail. We also
present here a Maple implementation of the theoretical results and methods,
introduced here, in a computational package -- {\it InSyDE}. Apart from
implementing the algorithms presented, the package is designed to provide a
set of tools that allow the user to analyse the intermediate steps of the
process.
| 1 | 0 | 0 | 0 | 0 | 0 |
Urban Vibrancy and Safety in Philadelphia | Statistical analyses of urban environments have been recently improved
through publicly available high resolution data and mapping technologies that
have been adopted across industries. These technologies allow us to create
metrics to empirically investigate urban design principles of the past
half-century. Philadelphia is an interesting case study for this work, with its
rapid urban development and population increase in the last decade. We outline
a data analysis pipeline for exploring the association between safety and local
neighborhood features such as population, economic health and the built
environment. As a particular example of our analysis pipeline, we focus on
quantitative measures of the built environment that serve as proxies for
vibrancy: the amount of human activity in a local area. Historically, vibrancy
has been very challenging to measure empirically. Measures based on land use
zoning are not an adequate description of local vibrancy and so we construct a
database and set of measures of business activity in each neighborhood. We
employ several matching analyses to explore the relationship between
neighborhood vibrancy and safety, such as comparing high crime versus low crime
locations within the same neighborhood. As additional sources of urban data
become available, our analysis pipeline can serve as the template for further
investigations into the relationships between safety, economic factors and the
built environment at the local neighborhood level.
| 0 | 0 | 0 | 1 | 0 | 0 |
Exponential quadrature rules without order reduction | In this paper a technique is suggested to integrate linear initial boundary
value problems with exponential quadrature rules in such a way that the order
in time is as high as possible. A thorough error analysis is given for both the
classical approach of integrating the problem first in space and then in time
and of doing it in the reverse order in a suitable manner. Time-dependent
boundary conditions are considered with both approaches and full discretization
formulas are given to implement the methods once the quadrature nodes have been
chosen for the time integration and a particular (although very general) scheme
is selected for the space discretization. Numerical experiments are shown which
corroborate that, for example, with the suggested technique, order $2s$ is
obtained when choosing the $s$ nodes of the Gaussian quadrature rule.
| 0 | 0 | 1 | 0 | 0 | 0 |
New Derivatives for the Functions with the Fractal Tartan Support | In this manuscript, we generalize F-calculus to apply it to fractal Tartan
spaces. The generalized standard F-calculus is used to obtain the integral and
derivative of the functions on the fractal Tartan with different dimensions.
The generalized fractional derivatives have local properties that make them
more useful for modelling physical problems. Illustrative examples are used to
present the details.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning Navigation Behaviors End to End | A longstanding goal of behavior-based robotics is to solve high-level
navigation tasks using end to end navigation behaviors that directly map
sensors to actions. Navigation behaviors, such as reaching a goal or following
a path without collisions, can be learned from exploration and interaction with
the environment, but are constrained by the type and quality of a robot's
sensors, dynamics, and actuators. Traditional motion planning handles varied
robot geometry and dynamics, but typically assumes high-quality observations.
Modern vision-based navigation typically considers imperfect or partial
observations, but simplifies the robot action space. With both approaches, the
transition from simulation to reality can be difficult. Here, we learn two end
to end navigation behaviors that avoid moving obstacles: point to point and
path following. These policies receive noisy lidar observations and output
robot linear and angular velocities. We train these policies in small, static
environments with Shaped-DDPG, an adaptation of the Deep Deterministic Policy
Gradient (DDPG) reinforcement learning method which optimizes reward and
network architecture. Over 500 meters of on-robot experiments show that these
policies generalize to new environments and moving obstacles, are robust to
sensor, actuator, and localization noise, and can serve as robust building
blocks for larger navigation tasks. The path following and point to point
policies are 83% and 56% more successful than the baseline, respectively.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bounded Information Rate Variational Autoencoders | This paper introduces a new member of the family of Variational Autoencoders
(VAE) that constrains the rate of information transferred by the latent layer.
The latent layer is interpreted as a communication channel, the information
rate of which is bounded by imposing a pre-set signal-to-noise ratio. The new
constraint subsumes the mutual information between the input and latent
variables, combining naturally with the likelihood objective of the observed
data as used in a conventional VAE. The resulting Bounded-Information-Rate
Variational Autoencoder (BIR-VAE) provides a meaningful latent representation
with an information resolution that can be specified directly in bits by the
system designer. The rate constraint can be used to prevent overtraining, and
the method naturally facilitates quantisation of the latent variables at the
set rate. Our experiments confirm that the BIR-VAE has a meaningful latent
representation and that its performance is at least as good as state-of-the-art
competing algorithms, but with lower computational complexity.
| 0 | 0 | 0 | 1 | 0 | 0 |
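A hedged sketch of the bounded-information-rate idea: the latent layer acts as an additive white Gaussian noise channel with fixed noise power, and the encoder output is rescaled to a pre-set signal power, capping the rate per latent dimension at 0.5*log2(1 + SNR) bits. The encoder, decoder, and rescaling details below are illustrative assumptions, not the paper's exact recipe.

```python
import math
import torch
import torch.nn as nn

snr = 15.0                                   # designer-chosen signal-to-noise
print("max bits per latent dim:", 0.5 * math.log2(1.0 + snr))

enc, dec = nn.Linear(784, 16), nn.Linear(16, 784)

x = torch.rand(32, 784)                      # a batch of inputs
mu = enc(x)
# Rescale the encoder output so the batch signal power equals snr; the
# channel noise power is fixed at 1, which pins down the information rate.
mu = mu * torch.sqrt(snr / mu.pow(2).mean().clamp_min(1e-8))
z = mu + torch.randn_like(mu)                # unit-variance channel noise
recon = ((dec(z) - x) ** 2).mean()           # likelihood term, as in a VAE
print(float(recon))
```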
Wasan geometry with the division by 0 | Results in Wasan geometry of tangent circles can still be considered in a
singular case by the division by 0.
| 0 | 0 | 1 | 0 | 0 | 0 |
New insights into non-central beta distributions | The beta family owes its privileged status within unit interval distributions
to several relevant features, such as ease of interpretation and versatility
in modeling different types of data. However, its flexibility at the unit
interval endpoints is too limited to properly model the portions of data with
values close to zero and one. Such a drawback can be
overcome by resorting to the class of the non-central beta distributions.
Indeed, the latter allows the density to take on arbitrary positive and finite
limits at the endpoints, which have a very simple form. That said, new insights into this class
are provided in this paper. In particular, new representations and moments
expressions are derived. Moreover, its potential with respect to alternative
models is highlighted through applications to real data.
| 0 | 0 | 1 | 1 | 0 | 0 |
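For reference, the textbook Poisson-mixture form of the doubly non-central beta density with shape parameters a, b and non-centrality parameters lambda_1, lambda_2; the paper's new representations go beyond this standard form, which is quoted here only as background.

```latex
f(x) = \sum_{j=0}^{\infty}\sum_{k=0}^{\infty}
  e^{-\lambda_1/2}\,\frac{(\lambda_1/2)^j}{j!}\;
  e^{-\lambda_2/2}\,\frac{(\lambda_2/2)^k}{k!}\;
  \frac{x^{a+j-1}(1-x)^{b+k-1}}{B(a+j,\,b+k)},
  \qquad 0 < x < 1.
```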
Mean Curvature Flows of Closed Hypersurfaces in Warped Product Manifolds | We investigate the mean curvature flows in a class of warped product
manifolds with closed hypersurfaces fibering over $\mathbb{R}$. In particular,
we prove that under natural conditions on the warping function and Ricci
curvature bound for the ambient space, there exists a large class of closed
initial hypersurfaces $S_0$, given as geodesic graphs over the totally
geodesic hypersurface $\Sigma$, such that the mean curvature flow starting
from $S_0$ exists for all time and converges to $\Sigma$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Optimising finite-difference methods for PDEs through parameterised time-tiling in Devito | Finite-difference methods are widely used in solving partial differential
equations. In a large problem set, approximations can take days or weeks to
evaluate, yet the bulk of computation may occur within a single loop nest. The
modelling process for researchers is not straightforward either, requiring
models with differential equations to be translated into stencil kernels, then
optimised separately. One tool that seeks to speed up and eliminate mistakes
from this tedious procedure is Devito, used to efficiently employ
finite-difference methods.
In this work, we implement time-tiling, a loop nest optimisation, in Devito
yielding a decrease in runtime of up to 45%, and at least 20% across stencils
from the acoustic wave equation family, widely used in Devito's target domain
of seismic imaging. We present an estimator for arithmetic intensity under
time-tiling and a model to predict runtime improvements in stencil
computations. We also consider generalisation of time-tiling to imperfect loop
nests, a less widely studied problem.
| 1 | 0 | 0 | 0 | 0 | 0 |
An Ontology to support automated negotiation | In this work we propose an ontology to support automated negotiation in
multiagent systems. The ontology can be connected with some domain-specific
ontologies to facilitate the negotiation in different domains, such as
Intelligent Transportation Systems (ITS), e-commerce, etc. The specific
negotiation rules for each type of negotiation strategy can also be defined as
part of the ontology, reducing the amount of knowledge hardcoded in the agents
and ensuring interoperability. The expressiveness of the ontology was
demonstrated in a multiagent architecture for an automatic traffic light
setting application in ITS.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks | We present a technique for efficiently synthesizing images of atmospheric
clouds using a combination of Monte Carlo integration and neural networks. The
intricacies of Lorenz-Mie scattering and the high albedo of cloud-forming
aerosols make rendering of clouds---e.g. the characteristic silverlining and
the "whiteness" of the inner body---challenging for methods based solely on
Monte Carlo integration or diffusion theory. We approach the problem
differently. Instead of simulating all light transport during rendering, we
pre-learn the spatial and directional distribution of radiant flux from tens of
cloud exemplars. To render a new scene, we sample visible points of the cloud
and, for each, extract a hierarchical 3D descriptor of the cloud geometry with
respect to the shading location and the light source. The descriptor is input
to a deep neural network that predicts the radiance function for each shading
configuration. We make the key observation that progressively feeding the
hierarchical descriptor into the network enhances the network's ability to
learn faster and predict with high accuracy while using few coefficients. We
also employ a block design with residual connections to further improve
performance. A GPU implementation of our method synthesizes images of clouds
that are nearly indistinguishable from the reference solution within seconds,
enabling interactive use. Our method thus represents a viable solution for applications
such as cloud design and, thanks to its temporal stability, also for
high-quality production of animated content.
| 1 | 0 | 0 | 1 | 0 | 0 |
On the Upward/Downward Closures of Petri Nets | We study the size and the complexity of computing finite state automata (FSA)
representing and approximating the downward and the upward closure of Petri net
languages with coverability as the acceptance condition. We show how to
construct an FSA recognizing the upward closure of a Petri net language in
doubly-exponential time, and therefore the size is at most doubly exponential.
For downward closures, we prove that the size of the minimal automata can be
non-primitive recursive. In the case of BPP nets, a well-known subclass of
Petri nets, we show that an FSA accepting the downward/upward closure can be
constructed in exponential time. Furthermore, we consider the problem of
checking whether a simple regular language is included in the downward/upward
closure of a Petri net/BPP net language. We show that this problem is
EXPSPACE-complete (resp. NP-complete) in the case of Petri nets (resp. BPP
nets). Finally, we show that it is decidable whether a Petri net language is
upward/downward closed. To this end, we prove that one can decide whether a
given regular language is a subset of a Petri net coverability language.
| 1 | 0 | 0 | 0 | 0 | 0 |
Semantic Similarity from Natural Language and Ontology Analysis | Artificial Intelligence federates numerous scientific fields in the aim of
developing machines able to assist human operators performing complex
treatments -- most of which demand high cognitive skills (e.g. learning or
decision processes). Central to this quest is to give machines the ability to
estimate the likeness or similarity between things in the way human beings
estimate the similarity between stimuli.
In this context, this book focuses on semantic measures: approaches designed
for comparing semantic entities such as units of language, e.g. words,
sentences, or concepts and instances defined into knowledge bases. The aim of
these measures is to assess the similarity or relatedness of such semantic
entities by taking into account their semantics, i.e. their meaning --
intuitively, the words tea and coffee, which both refer to stimulating
beverages, will be estimated to be more semantically similar than the words
toffee (a confection) and coffee, even though the latter pair has a higher
syntactic similarity. The two state-of-the-art approaches for estimating and
quantifying semantic similarities/relatedness of semantic entities are
presented in detail: the first one relies on corpora analysis and is based on
Natural Language Processing techniques and semantic models while the second is
based on more or less formal, computer-readable and workable forms of knowledge
such as semantic networks, thesaurus or ontologies. (...) Beyond a simple
inventory and categorization of existing measures, the aim of this monograph is
to guide novices as well as researchers in these domains towards a better
understanding of semantic similarity estimation and more generally semantic
measures.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Hardness of Synthesizing Elementary Net Systems from Highly Restricted Inputs | Elementary net systems (ENS) are the most fundamental class of Petri nets.
Their synthesis problem has important applications in the design of digital
hardware and commercial processes. Given a labeled transition system (TS) $A$,
feasibility is the NP-complete decision problem whether $A$ can be equivalently
synthesized into an ENS. It is well known that $A$ is feasible if and only if
it has the event state separation property (ESSP) and the state separation
property (SSP). Recently, these properties have also been studied individually
for their practical implications. A fast ESSP algorithm, for instance, would
allow applications to at least validate the language equivalence of $A$ and a
synthesized ENS. Being able to efficiently decide SSP, on the other hand, could
serve as a quick-fail preprocessing mechanism for synthesis. Although a few
tractable subclasses have been found, this paper destroys much of the hope that
many practically meaningful input restrictions make feasibility or at least one
of ESSP and SSP efficient. We show that all three problems remain NP-complete
even if the input is restricted to linear TSs where every event occurs at most
three times or if the input is restricted to TSs where each event occurs at
most twice and each state has at most two successor and two predecessor states.
| 1 | 0 | 0 | 0 | 0 | 0 |
Exceptional Lattice Green's Functions | The three exceptional lattices, $E_6$, $E_7$, and $E_8$, have attracted much
attention due to their anomalously dense and symmetric structures which are of
critical importance in modern theoretical physics. Here, we study the
electronic band structure of a single spinless quantum particle hopping between
their nearest-neighbor lattice points in the tight-binding limit. Using Markov
chain Monte Carlo methods, we numerically sample their lattice Green's
functions, densities of states, and random walk return probabilities. We find
and tabulate a plethora of Van Hove singularities in the densities of states,
including degenerate ones in $E_6$ and $E_7$. Finally, we use brute force
enumeration to count the number of distinct closed walks of length up to eight,
which gives the first eight moments of the densities of states.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optimal design with EGM approach in conjugate natural convection with surface radiation in a two-dimensional enclosure | Analysis of conjugate natural convection with surface radiation in a
two-dimensional enclosure is carried out in order to search the optimal
location of the heat source with entropy generation minimization (EGM) approach
and conventional heat transfer parameters. Air, treated as an incompressible
fluid and a transparent medium, is considered as the fluid filling the
enclosure, in a steady and laminar regime. The internal surfaces of the
enclosure are also gray,
opaque and diffuse. The governing equations with stream function and vorticity
formulation are solved using finite difference approach. Results include the
effect of Rayleigh number and emissivity on the dimensionless average rate of
entropy generation and its optimum location. The optimum location search with
conventional heat transfer parameters including maximum temperature and Nusselt
numbers is also examined.
| 0 | 1 | 0 | 0 | 0 | 0 |
Convex cocompactness in pseudo-Riemannian hyperbolic spaces | Anosov representations of word hyperbolic groups into higher-rank semisimple
Lie groups are representations with finite kernel and discrete image that have
strong analogies with convex cocompact representations into rank-one Lie
groups. However, the most naive analogy fails: generically, Anosov
representations do not act properly and cocompactly on a convex set in the
associated Riemannian symmetric space. We study representations into projective
indefinite orthogonal groups PO(p,q) by considering their action on the
associated pseudo-Riemannian hyperbolic space H^{p,q-1} in place of the
Riemannian symmetric space. Following work of Barbot and Mérigot in anti-de
Sitter geometry, we find an intimate connection between Anosov representations
and the natural notion of convex cocompactness in this setting.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Closer Look at Memorization in Deep Networks | We examine the role of memorization in deep learning, drawing connections to
capacity, generalization, and adversarial robustness. While deep networks are
capable of memorizing noise data, our results suggest that they tend to
prioritize learning simple patterns first. In our experiments, we expose
qualitative differences in gradient-based optimization of deep neural networks
(DNNs) on noise vs. real data. We also demonstrate that for appropriately tuned
explicit regularization (e.g., dropout) we can degrade DNN training performance
on noise datasets without compromising generalization on real data. Our
analysis suggests that the notions of effective capacity which are dataset
independent are unlikely to explain the generalization performance of deep
networks when trained with gradient based methods because training data itself
plays an important role in determining the degree of memorization.
| 1 | 0 | 0 | 1 | 0 | 0 |
Hierarchical 3D fully convolutional networks for multi-organ segmentation | Recent advances in 3D fully convolutional networks (FCN) have made it
feasible to produce dense voxel-wise predictions of full volumetric images. In
this work, we show that a multi-class 3D FCN trained on manually labeled CT
scans of seven abdominal structures (artery, vein, liver, spleen, stomach,
gallbladder, and pancreas) can achieve competitive segmentation results, while
avoiding the need for handcrafting features or training organ-specific models.
To this end, we propose a two-stage, coarse-to-fine approach that trains an FCN
model to roughly delineate the organs of interest in the first stage (seeing
$\sim$40% of the voxels within a simple, automatically generated binary mask of
the patient's body). We then use these predictions of the first-stage FCN to
define a candidate region that will be used to train a second FCN. This step
reduces the number of voxels the FCN has to classify to $\sim$10% while
maintaining a high recall of $>$99%. This second-stage FCN can now focus on
more detailed segmentation of the organs. We respectively utilize training and
validation sets consisting of 281 and 50 clinical CT images. Our hierarchical
approach provides an improved Dice score of 7.5 percentage points per organ on
average in our validation set. We furthermore test our models on a completely
unseen data collection acquired at a different hospital that includes 150 CT
scans with three anatomical labels (liver, spleen, and pancreas). In such
challenging organs as the pancreas, our hierarchical approach improves the mean
Dice score from 68.5 to 82.2%, achieving the highest reported average score on
this dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
An evolutionary game model for behavioral gambit of loyalists: Global awareness and risk-aversion | We study the phase diagram of a minority game where three classes of agents
are present. Two types of agents play a risk-loving game that we model by the
standard Snowdrift Game. The behaviour of the third type of agents is coded by
{\em indifference} w.r.t. the game altogether: their dynamics is designed to
account for risk-aversion as an innovative behavioral gambit. From this point
of view, the choice of this solitary strategy is enhanced when innovation
starts, while it is depressed when it becomes the majority option. This implies
that the payoff matrix of the game becomes dependent on the global awareness of
the agents, measured by the relative size of the population of indifferent
players. The resulting dynamics is non-trivial with different kinds of phase
transition depending on a few model parameters. The phase diagram is studied on
regular as well as complex networks.
| 0 | 0 | 0 | 0 | 1 | 0 |
The Minimum Euclidean-Norm Point on a Convex Polytope: Wolfe's Combinatorial Algorithm is Exponential | The complexity of Philip Wolfe's method for the minimum Euclidean-norm point
problem over a convex polytope has remained unknown since he proposed the
method in 1974. The method is important because it is used as a subroutine for
one of the most practical algorithms for submodular function minimization. We
present the first example that Wolfe's method takes exponential time.
Additionally, we improve previous results to show that linear programming
reduces in strongly-polynomial time to the minimum norm point problem over a
simplex.
| 1 | 0 | 1 | 0 | 0 | 0 |
Constraints on a possible evolution of mass density power-law index in strong gravitational lensing from cosmological data | In this work, by using strong gravitational lensing (SGL) observations along
with Type Ia Supernovae (Union2.1) and gamma ray burst data (GRBs), we propose
a new method to study a possible redshift evolution of $\gamma(z)$, the mass
density power-law index of strong gravitational lensing systems. In this
analysis, we assume the validity of the cosmic distance duality relation and a
flat universe. In order to explore the $\gamma(z)$ behavior, three different
parametrizations are considered, namely: (P1) $\gamma(z_l)=\gamma_0+\gamma_1
z_l$, (P2) $\gamma(z_l)=\gamma_0+\gamma_1 z_l/(1+z_l)$ and (P3)
$\gamma(z_l)=\gamma_0+\gamma_1 \ln(1+z_l)$, where $z_l$ corresponds to lens
redshift. If $\gamma_0=2$ and $\gamma_1=0$, the singular isothermal sphere model
is recovered. Our method is performed on SGL sub-samples defined by different
lens redshifts and velocity dispersions. For the former case, the results are
in full agreement with each other, while a 1$\sigma$ tension between the
sub-samples with low ($\leq 250$ km/s) and high ($>250$ km/s) velocity
dispersions was obtained on the ($\gamma_0$-$\gamma_1$) plane. By considering
the complete SGL sample, we obtain $\gamma_0 \approx 2$ and $ \gamma_1 \approx
0$ within 1$\sigma$ c.l. for all $\gamma(z)$ parametrizations. However, we find
the following best fit values of $\gamma_1$: $-0.085$, $-0.16$ and $-0.12$ for
P1, P2 and P3 parametrizations, respectively, suggesting a mild evolution for
$\gamma(z)$. By repeating the analysis with Type Ia Supernovae from JLA
compilation, GRBs and SGL systems this mild evolution is reinforced.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Strong Cell-based Hydrogen Peroxide Generation Triggered by Cold Atmospheric Plasma | Hydrogen peroxide (H2O2) is an important signaling molecule in cancer cells.
However, significant secretion of H2O2 by cancer cells has rarely been
observed. Cold atmospheric plasma (CAP) is a near room temperature ionized gas
composed of neutral particles, charged particles, reactive species, and
electrons. Here, we first demonstrated that breast cancer cells and pancreatic
adenocarcinoma cells generated micromolar level H2O2 during just 1 min of
direct CAP treatment on these cells. The cell-based H2O2 generation is affected
by the medium volume, the cell confluence, as well as the discharge voltage.
The application of CAP in cancer treatment has
been intensively investigated over the past decade. Several cellular responses
to the CAP treatment have been observed including the consumption of the
CAP-originated reactive species, the rise of intracellular reactive oxygen
species, the damage on DNA and mitochondria, as well as the activation of
apoptotic events. The cell-based H2O2 generation is a new, previously unknown
cellular response to CAP, which provides a new perspective for understanding
the interaction between CAP and cells.
| 0 | 1 | 0 | 0 | 0 | 0 |
Theory of circadian metabolism | Many organisms repartition their proteome in a circadian fashion in response
to the daily nutrient changes in their environment. A striking example is
provided by cyanobacteria, which perform photosynthesis during the day to fix
carbon. These organisms not only face the challenge of rewiring their proteome
every 12 hours, but also the necessity of storing the fixed carbon in the form
of glycogen to fuel processes during the night. In this manuscript, we extend
the framework developed by Hwa and coworkers (Scott et al., Science 330, 1099
(2010)) for quantifying the relationship between growth and proteome composition
to circadian metabolism. We then apply this framework to investigate the
circadian metabolism of the cyanobacterium Cyanothece, which not only fixes
carbon during the day, but also nitrogen during the night, storing it in the
polymer cyanophycin. Our analysis reveals that the need to store carbon and
nitrogen tends to generate an extreme growth strategy, in which the cells
predominantly grow during the day, as observed experimentally. This strategy
maximizes the growth rate over 24 hours, and can be quantitatively understood
by the bacterial growth laws. Our analysis also shows that the slow relaxation
of the proteome, arising from the slow growth rate, puts a severe constraint on
implementing this optimal strategy. Yet, the capacity to estimate the time of
the day, enabled by the circadian clock, makes it possible to anticipate the
daily changes in the environment and mount a response ahead of time. This
significantly enhances the growth rate by counteracting the detrimental effects
of the slow proteome relaxation.
| 0 | 0 | 0 | 0 | 1 | 0 |
Rare-earth/transition-metal magnetic interactions in pristine and (Ni,Fe)-doped YCo5 and GdCo5 | We present an investigation into the intrinsic magnetic properties of the
compounds YCo5 and GdCo5, members of the RETM5 class of permanent magnets (RE =
rare earth, TM = transition metal). Focusing on Y and Gd provides direct
insight into both the TM magnetization and RE-TM interactions without the
complication of strong crystal field effects. We synthesize single crystals of
YCo5 and GdCo5 using the optical floating zone technique and measure the
magnetization from liquid helium temperatures up to 800 K. These measurements
are interpreted through calculations based on a Green's function formulation of
density-functional theory, treating the thermal disorder of the local magnetic
moments within the coherent potential approximation. The rise in the
magnetization of GdCo5 with temperature is shown to arise from a faster
disordering of the Gd magnetic moments compared to the antiferromagnetically
aligned Co sublattice. We use the calculations to analyze the different Curie
temperatures of the compounds and also compare the molecular (Weiss) fields at
the RE site with previously published neutron scattering experiments. To gain
further insight into the RE-TM interactions, we perform substitutional doping
on the TM site, studying the compounds RECo4.5Ni0.5, RECo4Ni, and RECo4.5Fe0.5.
Both our calculations and experiments on powdered samples find an
increased/decreased magnetization with Fe/Ni doping, respectively. The
calculations further reveal a pronounced dependence on the location of the
dopant atoms of both the Curie temperatures and the Weiss field at the RE site.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optimum weight chamber examples of moduli spaces of stable parabolic bundles in genus 0 | We present an explicit construction of the moduli spaces of rank 2 stable
parabolic bundles of parabolic degree 0 over the Riemann sphere, corresponding
to "optimum" open weight chambers of parabolic weights in the weight polytope.
The complexity of the moduli spaces for the different weight chambers is understood in
terms of the complexity of the actions of the corresponding groups of bundle
automorphisms on stable parabolic structures. For the given choices of
parabolic weights, the moduli space $\mathscr{N}$ consists entirely of isomorphism classes of
strictly stable parabolic bundles whose underlying Birkhoff-Grothendieck
splitting coefficients are constant and minimal, is constructed as a quotient
of a set of stable parabolic structures by a group of bundle automorphisms, and
is a smooth, compact complex manifold biholomorphic to
$\left(\mathbb{C}\mathbb{P}^{1}\right)^{n-3}$ for even degree, and
$\mathbb{C}\mathbb{P}^{n-3}$ for odd degree. As an application of the
construction of such explicit models, we provide an explicit characterization
of the nilpotent cone locus on $T^{*}\mathscr{N}$ for Hitchin's integrable
system.
| 0 | 0 | 1 | 0 | 0 | 0 |
Individualized Risk Prognosis for Critical Care Patients: A Multi-task Gaussian Process Model | We report the development and validation of a data-driven real-time risk
score that provides timely assessments for the clinical acuity of ward patients
based on their temporal lab tests and vital signs, which allows for timely
intensive care unit (ICU) admissions. Unlike the existing risk scoring
technologies, the proposed score is individualized; it uses the electronic
health record (EHR) data to cluster the patients based on their static
covariates into subcohorts of similar patients, and then learns a separate
temporal, non-stationary multi-task Gaussian Process (GP) model that captures
the physiology of every subcohort. Experiments conducted on data from a
heterogeneous cohort of 6,094 patients admitted to the Ronald Reagan UCLA
medical center show that our risk score significantly outperforms the
state-of-the-art risk scoring technologies, such as the Rothman index and MEWS,
in terms of timeliness, true positive rate (TPR), and positive predictive value
(PPV). In particular, the proposed score increases the AUC by 20% and 38%
compared to the Rothman index and MEWS, respectively, and can predict ICU admissions
8 hours before clinicians at a PPV of 35% and a TPR of 50%. Moreover, we show
that the proposed risk score allows for better decisions on when to discharge
clinically stable patients from the ward, thereby improving the efficiency of
hospital resource utilization.
| 1 | 0 | 0 | 0 | 0 | 0 |
X-View: Graph-Based Semantic Multi-View Localization | Global registration of multi-view robot data is a challenging task.
Appearance-based global localization approaches often fail under drastic
view-point changes, as representations have limited view-point invariance. This
work is based on the idea that human-made environments contain rich semantics
which can be used to disambiguate global localization. Here, we present X-View,
a Multi-View Semantic Global Localization system. X-View leverages semantic
graph descriptor matching for global localization, enabling localization under
drastically different view-points. While the approach is general in terms of
the semantic input data, we present and evaluate an implementation on visual
data. We demonstrate the system in experiments on the publicly available
SYNTHIA dataset, on a realistic urban dataset recorded with a simulator, and on
real-world StreetView data. Our findings show that X-View is able to globally
localize aerial-to-ground, and ground-to-ground robot data of drastically
different view-points. Our approach achieves an accuracy of up to 85% on
global localizations in the multi-view case, while the benchmarked baseline
appearance-based methods reach up to 75%.
| 1 | 0 | 0 | 0 | 0 | 0 |
Local Hardy spaces with variable exponents associated to non-negative self-adjoint operators satisfying Gaussian estimates | In this paper we introduce variable exponent local Hardy spaces associated
with a non-negative self-adjoint operator L. We define them by using an area
square integral involving the heat semigroup associated to L. A molecular
characterization is established and as an aplication of the molecular
characterization we prove that our local Hardy space coincides with the
(global) variable exponent Hardy space associated to L, provided that 0 does
not belong to the spectrum of L. Also, we show that it coincides with the
global variable exponent Hardy space associated to L+I.
| 0 | 0 | 1 | 0 | 0 | 0 |
Provable Inductive Robust PCA via Iterative Hard Thresholding | The robust PCA problem, wherein, given an input data matrix that is the
superposition of a low-rank matrix and a sparse matrix, we aim to separate out
the low-rank and sparse components, is a well-studied problem in machine
learning. One natural question that arises is that, as in the inductive
setting, if features are provided as input as well, can we hope to do better?
Answering this in the affirmative, the main goal of this paper is to study the
robust PCA problem while incorporating feature information. In contrast to
previous works in which recovery guarantees are based on the convex relaxation
of the problem, we propose a simple iterative algorithm based on
hard-thresholding of appropriate residuals. Under weaker assumptions than
previous works, we prove the global convergence of our iterative procedure;
moreover, it admits a much faster convergence rate and lower computational
complexity per iteration. In practice, through systematic synthetic and real
data simulations, we confirm our theoretical findings regarding improvements
obtained by using feature information.
| 1 | 0 | 0 | 1 | 0 | 0 |
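A minimal numpy sketch of robust PCA by iterative hard thresholding in the plain (non-inductive) setting, to make the alternation concrete: project the residual onto rank-r matrices, then hard-threshold the other residual for the sparse part. The fixed threshold and iteration count are simplifying assumptions; the paper thresholds appropriate residuals and additionally incorporates feature information.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 2
L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
S_true = np.zeros((n, n))
mask = rng.random((n, n)) < 0.05                # ~5% sparse corruptions
S_true[mask] = 10.0 * rng.standard_normal(mask.sum())
M = L_true + S_true                             # observed matrix

def rank_r_projection(A, r):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def hard_threshold(A, tau):
    return A * (np.abs(A) > tau)

S = np.zeros_like(M)
for _ in range(30):
    L = rank_r_projection(M - S, r)             # low-rank step
    S = hard_threshold(M - L, tau=3.0)          # sparse step (fixed tau)
rel_err = np.linalg.norm(L - L_true) / np.linalg.norm(L_true)
print(f"relative low-rank error: {rel_err:.3f}")
```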
The Topological Period-Index Problem over 8-Complexes | We study the Postnikov tower of the classifying space of a compact Lie group
P(n,mn), which gives obstructions to lifting a topological Brauer class of
period $n$ to a PU_{mn}-torsor, where the base space is a CW complex of
dimension 8. Combined with the study of a twisted version of the Atiyah-Hirzebruch
spectral sequence, this solves the topological period-index problem for CW
complexes of dimension 8.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Rise of Jihadist Propaganda on Social Networks | Using a dataset of over 1.9 million messages posted on Twitter by about
25,000 ISIS members, we explore how ISIS makes use of social media to spread
its propaganda and to recruit militants from the Arab world and across the
globe. By distinguishing between violence-driven, theological, and sectarian
content, we trace the connection between online rhetoric and key events on the
ground. To the best of our knowledge, ours is one of the first studies to focus
on Arabic content, while most literature focuses on English content. Our
findings yield new important insights about how social media is used by radical
militant groups to target the Arab-speaking world, and reveal important
patterns in their propaganda efforts.
| 1 | 1 | 0 | 0 | 0 | 0 |
Simulated performance of the production target for the Muon g-2 Experiment | The Muon g-2 Experiment plans to use the Fermilab Recycler Ring for forming
the proton bunches that hit its production target. The proposed scheme uses one
RF system, 80 kV of 2.5 MHz RF. In order to avoid bunch rotations in a
mismatched bucket, the 2.5 MHz is ramped adiabatically from 3 to 80 kV in 90
ms. In this study, the interaction of the primary proton beam with the
production target for the Muon g-2 Experiment is numerically examined.
| 0 | 1 | 0 | 0 | 0 | 0 |
Virtual-to-Real: Learning to Control in Visual Semantic Segmentation | Collecting training data from the physical world is usually time-consuming
and even dangerous for fragile robots, and thus, recent advances in robot
learning advocate the use of simulators as the training platform.
Unfortunately, the reality gap between synthetic and real visual data prohibits
direct migration of the models trained in virtual worlds to the real world.
This paper proposes a modular architecture for tackling the virtual-to-real
problem. The proposed architecture separates the learning model into a
perception module and a control policy module, and uses semantic image
segmentation as the meta representation for relating these two modules. The
perception module translates the perceived RGB image to semantic image
segmentation. The control policy module is implemented as a deep reinforcement
learning agent, which performs actions based on the translated image
segmentation. Our architecture is evaluated in an obstacle avoidance task and a
target following task. Experimental results show that our architecture
significantly outperforms all of the baseline methods in both virtual and real
environments, and demonstrates a faster learning curve than the baselines. We also
present a detailed analysis for a variety of variant configurations, and
validate the transferability of our modular architecture.
| 1 | 0 | 0 | 0 | 0 | 0 |
On Fienup Methods for Regularized Phase Retrieval | Alternating minimization, or Fienup methods, have a long history in phase
retrieval. We provide new insights related to the empirical and theoretical
analysis of these algorithms when used with Fourier measurements and combined
with convex priors. In particular, we show that Fienup methods can be viewed as
performing alternating minimization on a regularized nonconvex least-squares
problem with respect to amplitude measurements. We then prove that under mild
additional structural assumptions on the prior (semi-algebraicity), the
sequence of signal estimates has a smooth convergent behaviour towards a
critical point of the nonconvex regularized least-squares objective. Finally,
we propose an extension to Fienup techniques, based on a projected gradient
descent interpretation and acceleration using inertial terms. We demonstrate
experimentally that this modification combined with an $\ell_1$ prior
constitutes a competitive approach for sparse phase retrieval.
| 1 | 0 | 1 | 0 | 0 | 0 |
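A sketch of a Fienup-style alternation with an l1 prior, following the interpretation above: alternate (i) enforcing the measured Fourier amplitudes and (ii) a proximal (soft-thresholding) step for the sparsity prior. The signal, threshold, and iteration budget are illustrative assumptions; recovery is only up to the usual shift/flip/sign ambiguities of Fourier phase retrieval.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = rng.standard_normal(5)
b = np.abs(np.fft.fft(x_true))                 # Fourier amplitude measurements

def soft(v, tau):                              # proximal operator of tau*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = 0.1 * rng.standard_normal(n)               # random initialisation
for _ in range(500):
    X = np.fft.fft(x)
    X = b * np.exp(1j * np.angle(X))           # (i) impose measured amplitudes
    x = np.real(np.fft.ifft(X))
    x = soft(x, 0.01)                          # (ii) l1 prior via its prox
# Success means recovering the support up to shift/flip/sign ambiguities.
print("recovered support:", np.flatnonzero(np.abs(x) > 0.1))
print("true support:     ", np.flatnonzero(np.abs(x_true) > 0.1))
```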
Coupling the reduced-order model and the generative model for an importance sampling estimator | In this work, we develop an importance sampling estimator by coupling the
reduced-order model and the generative model in a problem setting of
uncertainty quantification. The target is to estimate the probability that the
quantity of interest (QoI) in a complex system is beyond a given threshold. To
avoid the prohibitive cost of sampling a large scale system, the reduced-order
model is usually considered for a trade-off between efficiency and accuracy.
However, the Monte Carlo estimator given by the reduced-order model is biased
due to the error from dimension reduction. To correct the bias, we still need
to sample the fine model. An effective technique for variance reduction is
importance sampling, where we employ the generative model to
estimate the distribution of the data from the reduced-order model and use it
for the change of measure in the importance sampling estimator. To compensate
for the approximation errors of the reduced-order model, more data that induce a
slightly smaller QoI than the threshold need to be included into the training
set. Although the amount of these data can be controlled by a posterior error
estimate, redundant data, which may outnumber the effective data, will be kept
due to the epistemic uncertainty. To deal with this issue, we introduce a
weighted empirical distribution to process the data from the reduced-order
model. The generative model is then trained by minimizing the cross entropy
between it and the weighted empirical distribution. We also introduce a penalty
term into the objective function to mitigate overfitting for more
robustness. Numerical results are presented to demonstrate the effectiveness of
the proposed methodology.
| 1 | 0 | 0 | 1 | 0 | 0 |
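A bare-bones numpy sketch of the estimator's structure for a rare event P(g(x) > tau): draw from a proposal concentrated near the event (standing in for the generative model trained on reduced-order data) and reweight by the density ratio p/q. The Gaussian densities and the identity QoI are toy stand-ins for the paper's models.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
tau = 3.5                                       # failure threshold
g = lambda x: x                                 # toy quantity of interest

# Nominal density p = N(0, 1) and shifted proposal q = N(tau, 1).
pdf = lambda x, m: np.exp(-0.5 * (x - m) ** 2) / math.sqrt(2 * math.pi)

x = rng.normal(loc=tau, scale=1.0, size=100_000)  # sample from the proposal
w = pdf(x, 0.0) / pdf(x, tau)                     # importance weights p/q
est = np.mean((g(x) > tau) * w)

exact = 0.5 * math.erfc(tau / math.sqrt(2.0))     # P(N(0,1) > tau)
print(f"IS estimate: {est:.3e}   exact: {exact:.3e}")
```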
Anomalous dynamical phase in quantum spin chains with long-range interactions | The existence or absence of non-analytic cusps in the Loschmidt-echo return
rate is traditionally employed to distinguish between a regular dynamical phase
(regular cusps) and a trivial phase (no cusps) in quantum spin chains after a
global quench. However, numerical evidence in a recent study [J. C. Halimeh and
V. Zauner-Stauber, arXiv:1610.02019] suggests that instead of the trivial phase
a distinct anomalous dynamical phase characterized by a novel type of
non-analytic cusps occurs in the one-dimensional transverse-field Ising model
when interactions are sufficiently long-range. Using an analytic semiclassical
approach and exact diagonalization, we show that this anomalous phase also
arises in the fully-connected case of infinite-range interactions, and we
discuss its defining signature. Our results show that the transition from the
regular to the anomalous dynamical phase coincides with Z2-symmetry breaking in
the infinite-time limit, thereby showing a connection between two different
concepts of dynamical criticality. Our work further expands the dynamical phase
diagram of long-range interacting quantum spin chains, and can be tested
experimentally in ion-trap setups and ultracold atoms in optical cavities,
where interactions are inherently long-range.
| 0 | 1 | 0 | 0 | 0 | 0 |
Cold keV dark matter from decays and scatterings | We explore ways of creating cold keV-scale dark matter by means of decays and
scatterings. The main observation is that certain thermal freeze-in processes
can lead to a cold dark matter distribution in regions with small available
phase space. In this way the free-streaming length of keV particles can be
suppressed without decoupling them too much from the Standard Model. In all
cases, dark matter needs to be produced together with a heavy particle that
carries away most of the initial momentum. For decays, this simply requires an
off-diagonal DM coupling to two heavy particles; for scatterings, the coupling
of soft DM to two heavy particles needs to be diagonal, in particular in spin
space. Decays can thus lead to cold light DM of any spin, while scatterings
only work for bosons with specific couplings. We explore a number of simple
models and also comment on the connection to the tentative 3.5 keV line.
| 0 | 1 | 0 | 0 | 0 | 0 |
Solving high-dimensional partial differential equations using deep learning | Developing algorithms for solving high-dimensional partial differential
equations (PDEs) has been an exceedingly difficult task for a long time, due to
the notoriously difficult problem known as the "curse of dimensionality". This
paper introduces a deep learning-based approach that can handle general
high-dimensional parabolic PDEs. To this end, the PDEs are reformulated using
backward stochastic differential equations and the gradient of the unknown
solution is approximated by neural networks, very much in the spirit of deep
reinforcement learning with the gradient acting as the policy function.
Numerical results on examples including the nonlinear Black-Scholes equation,
the Hamilton-Jacobi-Bellman equation, and the Allen-Cahn equation suggest that
the proposed algorithm is quite effective in high dimensions, in terms of both
accuracy and cost. This opens up new possibilities in economics, finance,
operational research, and physics, by considering all participating agents,
assets, resources, or particles together at the same time, instead of making ad
hoc assumptions on their inter-relationships.
| 1 | 0 | 1 | 0 | 0 | 0 |
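For background, the standard backward-SDE reformulation that the approach builds on, stated for a generic semilinear parabolic PDE (the textbook form, not a derivation specific to the paper):

```latex
% Semilinear parabolic PDE with terminal condition u(T,x) = g(x):
%   \partial_t u + \tfrac{1}{2}\mathrm{Tr}\big(\sigma\sigma^{\top}\mathrm{Hess}_x u\big)
%     + \mu\cdot\nabla_x u + f\big(t, x, u, \sigma^{\top}\nabla_x u\big) = 0.
% Along the forward diffusion dX_t = \mu\,dt + \sigma\,dW_t, set
\begin{align*}
  Y_t &= u(t, X_t), \qquad Z_t = \sigma^{\top}\nabla_x u(t, X_t);\\
  \mathrm{d}Y_t &= -f(t, X_t, Y_t, Z_t)\,\mathrm{d}t + Z_t^{\top}\,\mathrm{d}W_t,
  \qquad Y_T = g(X_T).
\end{align*}
% A neural network per time step approximates x -> Z_t, playing the role of
% the policy function in the reinforcement-learning analogy.
```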