(Column schema: ID is int64, ranging from 1 to 21k; TITLE is a string of 7 to 239 characters; ABSTRACT is a string of 7 to 2.76k characters; the six subject labels are int64 flags taking the values 0 or 1.)

ID | TITLE | ABSTRACT | Computer Science | Physics | Mathematics | Statistics | Quantitative Biology | Quantitative Finance
---|---|---|---|---|---|---|---|---
16,001 | Conformal growth rates and spectral geometry on distributional limits of graphs | For a unimodular random graph $(G,\rho)$, we consider deformations of its
intrinsic path metric by a (random) weighting of its vertices. This leads to
the notion of the {\em conformal growth exponent of $(G,\rho)$}, which is the
best asymptotic degree of volume growth of balls that can be achieved by such a
reweighting. Under moment conditions on the degree of the root, we show that
the conformal growth exponent of a unimodular random graph bounds the almost
sure spectral dimension.
In two dimensions, one obtains more precise information. If $(G,\rho)$ has a
property we call {\em quadratic conformal growth}, then the following holds: If
the degree of the root is uniformly bounded almost surely, then $G$ is almost
surely recurrent. Since limits of finite $H$-minor-free graphs have gauged
quadratic conformal growth, such limits are almost surely recurrent; this
affirms a conjecture of Benjamini and Schramm (2001). For the special case of
planar graphs, this gives a proof of the Benjamini-Schramm Recurrence Theorem
that does not proceed via the analysis of circle packings.
Gurel-Gurevich and Nachmias (2013) resolved a central open problem by showing
that the uniform infinite planar triangulation (UIPT) and quadrangulation
(UIPQ) are almost surely recurrent. They proved that this holds for any
distributional limit of planar graphs in which the degree of the root has
exponential tails (which is known to hold for UIPT and UIPQ). We use the
quadratic conformal growth property to give a new proof of this result that
holds for distributional limits of finite $H$-minor-free graphs. Moreover, our
arguments yield quantitative bounds on the heat kernel in terms of the degree
distribution at the root. This also yields a new approach to subdiffusivity of
the random walk on UIPT/UIPQ, using only the volume growth profile of balls in
the intrinsic metric.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,002 | Self-Supervised Vision-Based Detection of the Active Speaker as a Prerequisite for Socially-Aware Language Acquisition | This paper presents a self-supervised method for detecting the active speaker
in a multi-person spoken interaction scenario. We argue that this capability is
a fundamental prerequisite for any artificial cognitive system attempting to
acquire language in social settings. Our methods are able to detect an
arbitrary number of possibly overlapping active speakers based exclusively on
visual information about their faces. They do not rely on external
annotations, and are thus consistent with the constraints of cognitive
development; instead, they use information from the auditory modality to
support learning in the visual domain. The methods have been extensively
evaluated on a large multi-person face-to-face interaction dataset. The
results reach an accuracy of 80% in a multi-speaker setting. We believe this
system represents an essential component
of any artificial cognitive system or robotic platform engaging in social
interaction.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,003 | Monocular Imaging-based Autonomous Tracking for Low-cost Quad-rotor Design - TraQuad | TraQuad is an autonomous tracking quadcopter capable of tracking any moving
(or static) object like cars, humans, other drones or any other object
on-the-go. This article describes the applications and advantages of TraQuad
and the reduction in cost (to about $250) that has been achieved so far using
the available hardware and software capabilities and our custom algorithms
where needed. This description is backed by data and by research analyses
drawn from existing information or conducted on our own when necessary. We
also describe the development of a completely autonomous (even GPS is
optional) low-cost drone that can serve as a major platform for further
developments in automation, transportation, reconnaissance and more. Finally,
we describe our ROS Gazebo simulator and the STATUS algorithms that form the
core of our general-purpose object-tracking drone.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,004 | Persistence Diagrams with Linear Machine Learning Models | Persistence diagrams have been widely recognized as a compact descriptor for
characterizing multiscale topological features in data. When many datasets are
available, statistical features embedded in those persistence diagrams can be
extracted by applying machine learning. In particular, the ability to
explicitly analyze, in the original data space, the inverse of those
statistical features of persistence diagrams is significantly important for
practical applications. In this paper, we propose a unified method for such
inverse analysis by combining linear machine learning models with persistence
images. The method is applied to point clouds and cubical sets, demonstrating
statistical inverse analysis and its advantages.
| 1 | 0 | 1 | 0 | 0 | 0 |
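A minimal sketch of the pipeline the 16,004 abstract describes (vectorize diagrams as persistence images, then fit a linear model); the grid size, kernel width, persistence weighting, and toy diagrams below are illustrative assumptions rather than the paper's settings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def persistence_image(diagram, grid=20, sigma=0.1, span=(0.0, 1.0)):
    """Rasterize a persistence diagram into a flat vector: each (birth, death)
    point is mapped to (birth, persistence), smoothed with a Gaussian bump,
    and weighted by its persistence."""
    xs = np.linspace(span[0], span[1], grid)
    img = np.zeros((grid, grid))
    for b, d in diagram:
        p = d - b                                       # persistence = weight
        img += p * np.outer(np.exp(-(xs - p) ** 2 / (2 * sigma ** 2)),
                            np.exp(-(xs - b) ** 2 / (2 * sigma ** 2)))
    return img.ravel()

# Toy two-class data: class 1 tends to have longer-lived features.
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(50):
        births = rng.uniform(0.0, 0.5, size=10)
        deaths = births + rng.uniform(0.05, 0.2 + 0.3 * label, size=10)
        X.append(persistence_image(np.column_stack([births, deaths])))
        y.append(label)
clf = LogisticRegression(max_iter=1000).fit(np.array(X), y)
```

Because the model is linear, each learned coefficient maps back to a pixel in the (birth, persistence) plane, which is what makes the inverse analysis in the original data space explicit.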
16,005 | Characterizing the number of coloured $m$-ary partitions modulo $m$, with and without gaps | In a pair of recent papers, Andrews, Fraenkel and Sellers provide a complete
characterization for the number of $m$-ary partitions modulo $m$, with and
without gaps. In this paper we extend these results to the case of coloured
$m$-ary partitions, with and without gaps. Our method of proof is different,
giving explicit expansions for the generating functions modulo $m$.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,006 | Towards Approximate Mobile Computing | Mobile computing is one of the main drivers of innovation, yet the future
growth of mobile computing capabilities remains critically threatened by
hardware constraints, such as the already extremely dense transistor packing
and limited battery capacity. The breakdown of Dennard scaling and stagnating
energy storage improvements further amplify these threats. However, the
computational burden we put on our mobile devices is not always justified. In a
myriad of situations the result of a computation is further manipulated,
interpreted, and finally acted upon. This allows for the computation to be
relaxed, so that the result is calculated with "good enough", not perfect
accuracy. For example, results of a Web search may be perfectly acceptable even
if the order of the last few listed items is shuffled, as an end user decides
which of the available links to follow. Similarly, the quality of a
voice-over-IP call may be acceptable, despite being imperfect, as long as the
two involved parties can clearly understand each other. This novel way of
thinking about computation is termed Approximate Computing (AC) and promises to
reduce resource usage, while ensuring that satisfactory performance is
delivered to end-users. AC has already been experimented with at various
levels of desktop computer architecture, from the hardware level, where
incorrect adders have been designed to sacrifice result correctness for
reduced energy consumption, to compiler-level optimisations that omit certain
lines of code to speed up video encoding. AC is yet to be attempted on mobile
devices, and in
this article we examine the potential benefits of mobile AC and present an
overview of AC techniques applicable in the mobile domain.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,007 | Room-temperature high detectivity mid-infrared photodetectors based on black arsenic phosphorus | The mid-infrared (MIR) spectral range, pertaining to important applications
such as molecular 'fingerprint' imaging, remote sensing, free space
telecommunication and optical radar, is of particular scientific interest and
technological importance. However, state-of-the-art materials for MIR detection
are limited by intrinsic noise and inconvenient fabrication processes,
resulting in high cost photodetectors requiring cryogenic operation. We report
black arsenic-phosphorus-based long wavelength infrared photodetectors with
room temperature operation up to 8.2 um, entering the second MIR atmospheric
transmission window. Combined with a van der Waals heterojunction, room
temperature specific detectivity higher than 4.9*10^9 Jones was obtained in the
3-5 um range. The photodetector works in a zero-bias photovoltaic mode,
enabling fast photoresponse and low dark noise. Our van der Waals
heterojunction photodetectors not only exemplify black arsenic-phosphorus as a
promising candidate for MIR opto-electronic applications, but also pave the
way for a general strategy to suppress 1/f noise in photonic devices.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,008 | Topology determines force distributions in one-dimensional random spring networks | Networks of elastic fibers are ubiquitous in biological systems and often
provide mechanical stability to cells and tissues. Fiber reinforced materials
are also common in technology. An important characteristic of such materials is
their resistance to failure under load. Rupture occurs when fibers break under
excessive force and when that failure propagates. It is therefore crucial to
understand force distributions, which are typically highly inhomogeneous
within such networks and not well understood. Here we construct a
simple one-dimensional model system with periodic boundary conditions by
randomly placing linear springs on a circle. We consider ensembles of such
networks that consist of $N$ nodes and have an average degree of connectivity
$z$, but vary in topology. Using a graph-theoretical approach that accounts for
the full topology of each network in the ensemble, we show that, surprisingly,
the force distributions can be fully characterized in terms of the parameters
$(N,z)$. Despite the universal properties of such $(N,z)$-ensembles, our
analysis further reveals that a classical mean-field approach fails to capture
force distributions correctly. We demonstrate that network topology is a
crucial determinant of force distributions in elastic spring networks.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,009 | On a local invariant of elliptic curves with a p-isogeny | An elliptic curve $E$ defined over a $p$-adic field $K$ with a $p$-isogeny
$\phi:E\rightarrow E^\prime$ comes equipped with an invariant $\alpha_{\phi/K}$
that measures the valuation of the leading term of the formal group
homomorphism $\Phi:\hat E \rightarrow \hat E^\prime$. We prove that if
$K/\mathbb{Q}_p$ is unramified and $E$ has additive, potentially supersingular
reduction, then $\alpha_{\phi/K}$ is determined by the number of distinct
geometric components on the special fibers of the minimal proper regular models
of $E$ and $E^\prime$.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,010 | IIFA: Modular Inter-app Intent Information Flow Analysis of Android Applications | Android apps cooperate through message passing via intents. However, when
apps do not have identical sets of privileges, inter-app communication (IAC)
can accidentally or maliciously be misused, e.g., to leak sensitive
information contrary to users' expectations. Recent research has considered
static program analysis to detect dangerous data leaks due to inter-component
communication (ICC) or IAC, but such analyses suffer from shortcomings with
respect to precision, soundness, and scalability. To solve these issues, we
propose a novel approach
for static ICC/IAC analysis. We perform a fixed-point iteration of ICC/IAC
summary information to precisely resolve intent communication with more than
two apps involved. We integrate these results with information flows generated
by a baseline (i.e., not considering intents) information flow analysis, and
resolve whether sensitive data flows (transitively) through components/apps in
order to be ultimately leaked. Our main contribution is the first fully
automatic sound and precise ICC/IAC information flow analysis that is scalable
for realistic apps due to modularity, avoiding combinatorial explosion: Our
approach determines communicating apps using short summaries rather than
inlining intent calls, which often requires simultaneously analyzing all tuples
of apps. We evaluated our tool IIFA in terms of scalability, precision, and
recall. Using benchmarks we establish that precision and recall of our
algorithm are considerably better than prominent state-of-the-art analyses for
IAC. But foremost, applied to the 90 most popular applications from the Google
Playstore, IIFA demonstrated its scalability to a large corpus of real-world
apps. IIFA reports 62 problematic ICC-/IAC-related information flows via two or
more apps/components.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,011 | Glass+Skin: An Empirical Evaluation of the Added Value of Finger Identification to Basic Single-Touch Interaction on Touch Screens | The usability of small devices such as smartphones or interactive watches is
often hampered by the limited size of command vocabularies. This paper is an
attempt at better understanding how finger identification may help users invoke
commands on touch screens, even without recourse to multi-touch input. We
describe how finger identification can increase the size of input vocabularies
under the constraint of limited real estate, and we discuss some visual cues to
communicate this novel modality to novice users. We report a controlled
experiment that evaluated, over a large range of input-vocabulary sizes, the
efficiency of single-touch command selections with vs. without finger
identification. We analyzed the data not only in terms of traditional time and
error metrics, but also in terms of a throughput measure based on Shannon's
theory, which we show offers a synthetic and parsimonious account of users'
performance. The results show that the larger the input vocabulary needed by
the designer, the more promising the identification of individual fingers.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,012 | The Morphospace of Consciousness | We construct a complexity-based morphospace to study systems-level properties
of conscious & intelligent systems. The axes of this space label 3 complexity
types: autonomous, cognitive & social. Given recent proposals to synthesize
consciousness, a generic complexity-based conceptualization provides a useful
framework for identifying defining features of conscious & synthetic systems.
Based on current clinical scales of consciousness that measure cognitive
awareness and wakefulness, we take a perspective on how contemporary
artificially intelligent machines & synthetically engineered life forms measure
on these scales. It turns out that awareness & wakefulness can be associated
with computational & autonomous complexity, respectively. Subsequently,
building on insights from cognitive robotics, we examine the function that
consciousness serves, & argue for the role of consciousness as an evolutionary
game-theoretic strategy. This makes the case for a third type of complexity
for describing consciousness: social complexity. Identifying these complexity
types allows for a representation of both biological & synthetic systems in a
common morphospace. A consequence of this classification is a taxonomy of possible
conscious machines. We identify four types of consciousness, based on
embodiment: (i) biological consciousness, (ii) synthetic consciousness, (iii)
group consciousness (resulting from group interactions), & (iv) simulated
consciousness (embodied by virtual agents within a simulated reality). This
taxonomy helps in the investigation of comparative signatures of consciousness
across domains, in order to highlight design principles necessary to engineer
conscious machines. This is particularly relevant in the light of recent
developments at the crossroads of cognitive neuroscience, biomedical
engineering, artificial intelligence & biomimetics.
| 1 | 1 | 0 | 0 | 0 | 0 |
16,013 | Transverse Magnetic Susceptibility of a Frustrated Spin-$\frac{1}{2}$ $J_{1}$--$J_{2}$--$J_{1}^{\perp}$ Heisenberg Antiferromagnet on a Bilayer Honeycomb Lattice | We use the coupled cluster method (CCM) to study a frustrated
spin-$\frac{1}{2}$ $J_{1}$--$J_{2}$--$J_{1}^{\perp}$ Heisenberg antiferromagnet
on a bilayer honeycomb lattice with $AA$ stacking. Both nearest-neighbor (NN)
and frustrating next-nearest-neighbor antiferromagnetic (AFM) exchange
interactions are present in each layer, with respective exchange coupling
constants $J_{1}>0$ and $J_{2} \equiv \kappa J_{1} > 0$. The two layers are
coupled with NN AFM exchanges with coupling strength $J_{1}^{\perp}\equiv
\delta J_{1}>0$. We calculate to high orders of approximation within the CCM
the zero-field transverse magnetic susceptibility $\chi$ in the Néel phase.
We thus obtain an accurate estimate of the full boundary of the Néel phase in
the $\kappa\delta$ plane for the zero-temperature quantum phase diagram. We
demonstrate explicitly that the phase boundary derived from $\chi$ is fully
consistent with that obtained from the vanishing of the Néel magnetic order
parameter. We thus conclude that at all points along the Néel phase boundary
quasiclassical magnetic order gives way to a nonclassical paramagnetic phase
with a nonzero energy gap. The Néel phase boundary exhibits a marked
reentrant behavior, which we discuss in detail.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,014 | Minimum polyhedron with $n$ vertices | We study a polyhedron with $n$ vertices of fixed volume having minimum
surface area. Completing the proof of Toth, we show that all faces of a minimum
polyhedron are triangles, and further prove that a minimum polyhedron does not
allow deformation of a single vertex. We also present possible minimum shapes
for $n\le 12$, some of which are quite unexpected, in particular for $n=8$.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,015 | Theory of interacting fermions in shaken square optical lattice | We develop a theory of weakly interacting fermionic atoms in shaken optical
lattices based on the orbital mixing in the presence of time-periodic
modulations. Specifically, we focus on fermionic atoms in circularly shaken
square lattice with near resonance frequencies, i.e., tuned close to the energy
separation between $s$-band and the $p$-bands. First, we derive a
time-independent four-band effective Hamiltonian in the non-interacting limit.
Diagonalization of the effective Hamiltonian yields a quasi-energy spectrum
consistent with the full numerical Floquet solution that includes all higher
bands. In particular, we find that the hybridized $s$-band develops multiple
minima and therefore non-trivial Fermi surfaces at different fillings. We then
obtain the effective interactions for atoms in the hybridized $s$-band
analytically and show that they acquire momentum dependence on the Fermi
surface even though the bare interaction is contact-like. We apply the theory
to find the phase diagram of fermions with weak attractive interactions and
demonstrate that the pairing symmetry is $s+d$-wave. Our theory is valid for a
range of shaking frequencies near resonance, and it can be generalized to other
phases of interacting fermions in shaken lattices.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,016 | Simple to Complex Cross-modal Learning to Rank | The heterogeneity-gap between different modalities brings a significant
challenge to multimedia information retrieval. Some studies formalize the
cross-modal retrieval tasks as a ranking problem and learn a shared multi-modal
embedding space to measure the cross-modality similarity. However, previous
methods often establish the shared embedding space based on linear mapping
functions which might not be sophisticated enough to reveal more complicated
inter-modal correspondences. Additionally, current studies assume that the
rankings are of equal importance, and thus all rankings are used
simultaneously, or a small number of rankings are selected randomly to train
the embedding space at each iteration. Such strategies, however, always suffer
from outliers as well as reduced generalization capability due to their lack
of insight into the procedure of human cognition. In this paper, we
incorporate self-paced learning theory with diversity into cross-modal
learning to rank and learn an optimal multi-modal embedding space based on
non-linear mapping functions. This strategy enhances the model's robustness to
outliers and achieves better generalization via training the model gradually
from easy rankings by diverse queries to more complex ones. An efficient
alternative algorithm is exploited to solve the proposed challenging problem
with fast convergence in practice. Extensive experimental results on several
benchmark datasets indicate that the proposed method achieves significant
improvements over state-of-the-art methods in the literature.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,017 | Iteration-complexity analysis of a generalized alternating direction method of multipliers | This paper analyzes the iteration-complexity of a generalized alternating
direction method of multipliers (G-ADMM) for solving linearly constrained
convex problems. This ADMM variant, which was first proposed by Bertsekas and
Eckstein, introduces a relaxation parameter $\alpha \in (0,2)$ into the second
ADMM subproblem. Our approach is to show that the G-ADMM is an instance of a
hybrid proximal extragradient framework with some special properties and, as a
by-product, we obtain ergodic iteration-complexity bounds for the G-ADMM with
$\alpha\in (0,2]$, improving and complementing related results in the
literature. Additionally, we also present pointwise iteration-complexity
bounds for the G-ADMM.
| 0 | 0 | 1 | 0 | 0 | 0 |
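For concreteness, the relaxation the 16,017 abstract refers to is usually written as follows (scaled dual form, for $\min f(x)+g(z)$ subject to $Ax+Bz=c$ with penalty $\rho>0$); this is one standard convention and may differ cosmetically from the paper's formulation:

$$
\begin{aligned}
x^{k+1} &= \arg\min_x \Big\{ f(x) + \tfrac{\rho}{2}\|Ax + Bz^k - c + u^k\|^2 \Big\},\\
z^{k+1} &= \arg\min_z \Big\{ g(z) + \tfrac{\rho}{2}\|\alpha Ax^{k+1} - (1-\alpha)(Bz^k - c) + Bz - c + u^k\|^2 \Big\},\\
u^{k+1} &= u^k + \alpha Ax^{k+1} - (1-\alpha)(Bz^k - c) + Bz^{k+1} - c,
\end{aligned}
$$

with $\alpha = 1$ recovering the standard ADMM.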
16,018 | A Game Theoretic Macroscopic Model of Bypassing at Traffic Diverges with Applications to Mixed Autonomy Networks | Vehicle bypassing is known to negatively affect delays at traffic diverges.
However, due to the complexities of this phenomenon, accurate and yet simple
models of such lane change maneuvers are hard to develop. In this work, we
present a macroscopic model for predicting the number of vehicles that bypass
at a traffic diverge. We take into account the selfishness of vehicles in
selecting their lanes; every vehicle selects lanes such that its own cost is
minimized. We discuss how we model the costs experienced by the vehicles. Then,
taking into account the selfish behavior of the vehicles, we model the lane
choice of vehicles at a traffic diverge as a Wardrop equilibrium. We state and
prove the properties of Wardrop equilibrium in our model. We show that there
always exists an equilibrium for our model. Moreover, unlike most nonlinear
asymmetrical routing games, we prove that the equilibrium is unique under mild
assumptions. We discuss how our model can be easily calibrated by solving a
simple optimization problem. We validate the calibrated model through
simulation studies and demonstrate that it successfully predicts the
aggregate lane change maneuvers that are performed by vehicles for bypassing at
a traffic diverge. We further discuss how our model can be employed to obtain
the optimal lane choice behavior of the vehicles, where the social or total
cost of vehicles is minimized. Finally, we demonstrate how our model can be
utilized in scenarios where a central authority can dictate the lane choice and
trajectory of certain vehicles so as to increase the overall vehicle mobility
at a traffic diverge. Examples of such scenarios include the case when both
human driven and autonomous vehicles coexist in the network. We show how
certain decisions of the central authority can affect the total delays in such
scenarios via an example.
| 1 | 0 | 0 | 0 | 0 | 0 |
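The equilibrium notion used in the 16,018 abstract has a compact statement: writing $f_i$ for the flow of vehicles choosing lane $i$ and $c_i(f)$ for the cost they experience there, a feasible flow $f$ is a Wardrop equilibrium when

$$
f_i > 0 \;\Longrightarrow\; c_i(f) \le c_j(f) \quad \text{for every lane } j,
$$

i.e., all used lanes have equal and minimal cost, so no single (selfish) vehicle can lower its own cost by switching lanes.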
16,019 | Non-resonant secular dynamics of trans-Neptunian objects perturbed by a distant super-Earth | We use a secular model to describe the non-resonant dynamics of
trans-Neptunian objects in the presence of an external ten-earth-mass
perturber. The secular dynamics is analogous to an "eccentric Kozai mechanism"
but with both an inner component (the four giant planets) and an outer one (the
eccentric distant perturber). By means of Poincaré sections, the cases of
a non-inclined or inclined outer planet are successively studied, making the
connection with previous works. In the inclined case, the problem is reduced to
two degrees of freedom by assuming a non-precessing argument of perihelion for
the perturbing body.
The size of the perturbation is typically ruled by the semi-major axis of the
small body: we show that the classic integrable picture is still valid below
about 70 AU, but it is progressively destroyed when we get closer to the
external perturber. In particular, for a>150 AU, large-amplitude orbital flips
become possible, and for a>200 AU, the Kozai libration islands are totally
submerged by the chaotic sea. Numerous resonance relations are highlighted. The
most large and persistent ones are associated to apsidal alignments or
anti-alignments with the orbit of the distant perturber.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,020 | Stability Analysis for Switched Systems with Sequence-based Average Dwell Time | This note investigates the stability of both linear and nonlinear switched
systems with average dwell time. Two new analysis methods are proposed.
Different from existing approaches, the proposed methods take into account the
sequence in which the subsystems are switched. Depending on the predecessor or
successor subsystems to be considered, sequence-based average preceding dwell
time (SBAPDT) and sequence-based average subsequence dwell time (SBASDT)
approaches are proposed and discussed for both continuous and discrete time
systems. These proposed methods, when considering the switch sequence, have the
potential to further reduce the conservativeness of the existing approaches. A
comparative numerical example is also given to demonstrate the advantages of
the proposed approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
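For reference, the classical sequence-agnostic notion that the 16,020 note refines: a switching signal $\sigma$ has average dwell time $\tau_a$ if the number of switches $N_\sigma(t,T)$ on any interval $(t,T)$ satisfies

$$
N_\sigma(t,T) \le N_0 + \frac{T-t}{\tau_a},
$$

and, for subsystems with Lyapunov functions obeying $\dot V_i \le -\lambda_0 V_i$ and $V_i \le \mu V_j$ at switching instants, stability follows whenever $\tau_a > \ln\mu/\lambda_0$. The sequence-based variants replace the single pair $(N_0,\tau_a)$ with quantities that depend on which subsystem precedes or follows which, which is where the reduced conservativeness comes from.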
16,021 | Interpreting and using CPDAGs with background knowledge | We develop terminology and methods for working with maximally oriented
partially directed acyclic graphs (maximal PDAGs). Maximal PDAGs arise from
imposing restrictions on a Markov equivalence class of directed acyclic graphs,
or equivalently on its graphical representation as a completed partially
directed acyclic graph (CPDAG), for example when adding background knowledge
about certain edge orientations. Although maximal PDAGs often arise in
practice, causal methods have been mostly developed for CPDAGs. In this paper,
we extend such methodology to maximal PDAGs. In particular, we develop
methodology to read off possible ancestral relationships, we introduce a
graphical criterion for covariate adjustment to estimate total causal effects,
and we adapt the IDA and joint-IDA frameworks to estimate multi-sets of
possible causal effects. We also present a simulation study that illustrates
the gain in identifiability of total causal effects as the background knowledge
increases. All methods are implemented in the R package pcalg.
| 0 | 0 | 1 | 1 | 0 | 0 |
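The paper's methods are implemented in the R package pcalg; purely as a toy illustration of the kind of orientation propagation that turns a CPDAG plus background knowledge into a maximal PDAG, here is a sketch applying only Meek's rule 1 (the remaining rules are omitted, and the data structures are assumptions of this sketch):

```python
def adjacent(a, b, directed, undirected):
    return (a, b) in directed or (b, a) in directed or frozenset((a, b)) in undirected

def orient_with_background(directed, undirected, background):
    """directed: set of (a, b) meaning a -> b; undirected: set of frozenset({a, b}).
    Add background-knowledge orientations, then repeatedly apply Meek's rule 1:
    if a -> x, x - y, and a, y are non-adjacent, orient x -> y (otherwise a new
    v-structure a -> x <- y would appear)."""
    directed, undirected = set(directed), set(undirected)
    for (a, b) in background:
        undirected.discard(frozenset((a, b)))
        directed.add((a, b))
    changed = True
    while changed:
        changed = False
        for e in list(undirected):
            if e not in undirected:
                continue
            for (x, y) in (tuple(e), tuple(e)[::-1]):
                parents = {p for (p, q) in directed if q == x}
                if any(a != y and not adjacent(a, y, directed, undirected)
                       for a in parents):
                    undirected.discard(e)
                    directed.add((x, y))
                    changed = True
                    break
    return directed, undirected

# a - b - c with background knowledge a -> b forces b -> c by rule 1:
d, u = orient_with_background(set(), {frozenset(("a", "b")), frozenset(("b", "c"))},
                              {("a", "b")})
print(sorted(d), [tuple(e) for e in u])   # [('a', 'b'), ('b', 'c')] []
```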
16,022 | A cable-driven parallel manipulator with force sensing capabilities for high-accuracy tissue endomicroscopy | This paper introduces a new surgical end-effector probe, which makes it
possible to accurately apply a contact force on a tissue while at the same
time allowing for high-resolution and highly repeatable probe movement. These are achieved by
implementing a cable-driven parallel manipulator arrangement, which is deployed
at the distal-end of a robotic instrument. The combination of the offered
qualities can be advantageous in several ways, with possible applications
including: large area endomicroscopy and multi-spectral imaging, micro-surgery,
tissue palpation, safe energy-based and conventional tissue resection. To
demonstrate the concept and its adaptability, the probe is integrated with a
modified da Vinci robot instrument.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,023 | Overfitting Mechanism and Avoidance in Deep Neural Networks | Assisted by the availability of data and high performance computing, deep
learning techniques have achieved breakthroughs and surpassed human performance
empirically in difficult tasks, including object recognition, speech
recognition, and natural language processing. As they are being used in
critical applications, understanding underlying mechanisms for their successes
and limitations is imperative. In this paper, we show that overfitting, one of
the fundamental issues in deep neural networks, is due to continuous gradient
updating and the scale sensitivity of the cross-entropy loss. By separating samples
into correctly and incorrectly classified ones, we show that they behave very
differently, where the loss decreases in the correct ones and increases in the
incorrect ones. Furthermore, by analyzing dynamics during training, we propose
a consensus-based classification algorithm that enables us to avoid overfitting
and significantly improve the classification accuracy especially when the
number of training samples is limited. As each trained neural network depends
on extrinsic factors such as initial values as well as training data, requiring
consensus among multiple models reduces extrinsic factors substantially; for
statistically independent models, the reduction is exponential. Compared to
ensemble algorithms, the proposed algorithm avoids overgeneralization by not
classifying ambiguous inputs. Systematic experimental results demonstrate the
effectiveness of the proposed algorithm. For example, using only 1000 training
samples from the MNIST dataset, the proposed algorithm achieves 95% accuracy,
significantly higher than any of the individual models, with 90% of the test
samples classified.
| 1 | 0 | 0 | 1 | 0 | 0 |
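A minimal sketch of the consensus idea from the 16,023 abstract (predict only when independently trained models agree, abstain otherwise); the unanimity rule and the abstain marker are assumptions of this sketch:

```python
import numpy as np

def consensus_predict(models, X, abstain=-1):
    """Consensus-based classification: return a label only when all
    independently trained models agree; otherwise abstain (ambiguous input).
    models: list of fitted classifiers with a .predict method returning
    integer class labels."""
    preds = np.stack([m.predict(X) for m in models])   # (n_models, n_samples)
    agree = (preds == preds[0]).all(axis=0)            # unanimous agreement
    out = np.full(preds.shape[1], abstain)
    out[agree] = preds[0, agree]
    return out
```

Accuracy is then reported on the non-abstained samples; for statistically independent models, the probability that all agree on the same wrong label decays roughly exponentially in the number of models, which is the effect the abstract describes.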
16,024 | What deep learning can tell us about higher cognitive functions like mindreading? | Can deep learning (DL) guide our understanding of computations happening in
the biological brain? We will first briefly consider how DL has contributed to the
research on visual object recognition. In the main part we will assess whether
DL could also help us to clarify the computations underlying higher cognitive
functions such as Theory of Mind. In addition, we will compare the objectives
and learning signals of brains and machines, leading us to conclude that simply
scaling up the current DL algorithms will not lead to human level mindreading
skills. We then provide some insights about how to fairly compare human and DL
performance. In the end we find that DL can contribute to our understanding of
biological computations by providing an example of an end-to-end algorithm that
solves the same problems the biological agents face.
| 0 | 0 | 0 | 0 | 1 | 0 |
16,025 | Concentration of quantum states from quantum functional and transportation cost inequalities | Quantum functional inequalities (e.g. the logarithmic Sobolev- and Poincaré
inequalities) have found widespread application in the study of the behavior of
primitive quantum Markov semigroups. The classical counterparts of these
inequalities are related to each other via a so-called transportation cost
inequality of order 2 (TC2). The latter inequality relies on the notion of a
metric on the set of probability distributions called the Wasserstein distance
of order 2. (TC2) in turn implies a transportation cost inequality of order 1
(TC1). In this paper, we introduce quantum generalizations of the inequalities
(TC1) and (TC2), making use of appropriate quantum versions of the Wasserstein
distances, one recently defined by Carlen and Maas and the other defined by us.
We establish that these inequalities are related to each other, and to the
quantum modified logarithmic Sobolev- and Poincaré inequalities, as in the
classical case. We also show that these inequalities imply certain
concentration-type results for the invariant state of the underlying semigroup.
We consider the example of the depolarizing semigroup to derive concentration
inequalities for any finite dimensional full-rank quantum state. These
inequalities are then applied to derive upper bounds on the error probabilities
occurring in the setting of finite blocklength quantum parameter estimation.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,026 | Stochastic Training of Neural Networks via Successive Convex Approximations | This paper proposes a new family of algorithms for training neural networks
(NNs). These are based on recent developments in the field of non-convex
optimization, going under the general name of successive convex approximation
(SCA) techniques. The basic idea is to iteratively replace the original
(non-convex, high-dimensional) learning problem with a sequence of (strongly
convex) approximations, which are both accurate and simple to optimize.
In contrast to similar ideas (e.g., quasi-Newton algorithms), the
approximations can be constructed using only first-order information of the
neural network function, in a stochastic fashion, while exploiting the overall
structure of the learning problem for a faster convergence. We discuss several
use cases, based on different choices for the loss function (e.g., squared loss
and cross-entropy loss), and for the regularization of the NN's weights. We
experiment on several medium-sized benchmark problems, and on a large-scale
dataset involving simulated physical data. The results show how the algorithm
outperforms state-of-the-art techniques, providing faster convergence to a
better minimum. Additionally, we show how the algorithm can be easily
parallelized over multiple computational units without hindering its
performance. In particular, each computational unit can optimize a tailored
surrogate function defined on a randomly assigned subset of the input
variables, whose dimension can be selected depending entirely on the available
computational power.
| 1 | 0 | 0 | 1 | 0 | 0 |
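A minimal numpy sketch of one SCA scheme consistent with the description in the 16,026 abstract (a first-order strongly convex surrogate plus a diminishing averaging step); the surrogate choice, step-size schedule, and toy objective are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def sca_minimize(w0, grad, n_iter=200, tau=20.0, gamma0=0.9):
    """Successive convex approximation (SCA) sketch: at w_k, replace the
    non-convex objective f by the strongly convex first-order surrogate
        u(w) = f(w_k) + g_k^T (w - w_k) + (tau/2) ||w - w_k||^2,
    whose minimizer is w_hat = w_k - g_k / tau, then take an averaged step
        w_{k+1} = w_k + gamma_k (w_hat - w_k)  with diminishing gamma_k.
    grad(w) may be a stochastic (mini-batch) gradient; tau should dominate
    the local curvature so the surrogate stays an accurate upper model."""
    w = np.asarray(w0, dtype=float)
    for k in range(n_iter):
        w_hat = w - grad(w) / tau              # closed-form surrogate minimizer
        gamma = gamma0 / (1.0 + 0.1 * k)       # diminishing averaging step
        w = w + gamma * (w_hat - w)
    return w

# Toy usage on a non-convex scalar objective f(w) = w^4 - 3 w^2 + w:
w_star = sca_minimize(np.array([2.0]), grad=lambda w: 4 * w**3 - 6 * w + 1)
```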
16,027 | Best arm identification in multi-armed bandits with delayed feedback | We propose a generalization of the best arm identification problem in
stochastic multi-armed bandits (MAB) to the setting where every pull of an arm
is associated with delayed feedback. The delay in feedback increases the
effective sample complexity of standard algorithms, but can be offset if we
have access to partial feedback received before a pull is completed. We propose
a general framework to model the relationship between partial and delayed
feedback, and as a special case we introduce efficient algorithms for settings
where the partial feedback is a biased or unbiased estimator of the delayed
feedback. Additionally, we propose a novel extension of the algorithms to the
parallel MAB setting where an agent can control a batch of arms. Our
experiments in real-world settings, involving policy search and hyperparameter
optimization in computational sustainability domains for fast charging of
batteries and wildlife corridor construction, demonstrate that exploiting the
structure of partial feedback can lead to significant improvements over
baselines in both sequential and parallel MAB.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,028 | General $(α, β)$ metrics with relatively isotropic mean Landsberg curvature | In this paper, we study a new class of Finsler metrics,
$F=\alpha\phi(b^2,s)$, $s:=\beta/\alpha$, defined by a Riemannian metric
$\alpha$ and a 1-form $\beta$; such a metric is called a general
$(\alpha, \beta)$ metric. Under suitable assumptions on $\phi$, and assuming
that $\beta$ is closed and conformal, we find a necessary and sufficient
condition for a metric of relatively isotropic mean Landsberg curvature to be
Berwald.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,029 | Structured Optimal Transport | Optimal Transport has recently gained interest in machine learning for
applications ranging from domain adaptation and sentence similarity to deep
learning. Yet, its ability to capture frequently occurring structure beyond the
"ground metric" is limited. In this work, we develop a nonlinear generalization
of (discrete) optimal transport that is able to reflect much additional
structure. We demonstrate how to leverage the geometry of this new model for
fast algorithms, and explore connections and properties. Illustrative
experiments highlight the benefit of the induced structured couplings for tasks
in domain adaptation and natural language processing.
| 1 | 0 | 0 | 1 | 0 | 0 |
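For context, the unstructured entropic baseline that the structured model of 16,029 generalizes fits in a few lines; this is the classical Sinkhorn scheme, not the paper's structured coupling, and the toy histograms are assumptions:

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, n_iter=500):
    """Entropy-regularized (unstructured) discrete optimal transport via
    Sinkhorn iterations. mu, nu: source/target histograms; C: ground-metric
    cost matrix."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]    # optimal coupling

# Toy usage: transport between two uniform 4-point histograms on a line.
x = np.linspace(0, 1, 4)
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(np.full(4, 0.25), np.full(4, 0.25), C)
```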
16,030 | Very metal-poor stars observed by the RAVE survey | We present a novel analysis of the metal-poor star sample in the complete
Radial Velocity Experiment (RAVE) Data Release 5 catalog with the goal of
identifying and characterizing all very metal-poor stars observed by the
survey. Using a three-stage method, we first identified the candidate stars
using only their spectra as input information. We employed an algorithm called
t-SNE to construct a low-dimensional projection of the spectrum space and
isolate the region containing metal-poor stars. Following this step, we
measured the equivalent widths of the near-infrared CaII triplet lines with a
method based on flexible Gaussian processes to model the correlated noise
present in the spectra. In the last step, we constructed a calibration relation
that converts the measured equivalent widths and the color information coming
from the 2MASS and WISE surveys into metallicity and temperature estimates. We
identified 877 stars with at least a 50% probability of being very metal-poor
$(\rm [Fe/H] < -2\,\rm dex)$, out of which 43 are likely extremely metal-poor
$(\rm [Fe/H] < -3\,\rm dex )$. The comparison of the derived values to a small
subsample of stars with literature metallicity values shows that our method
works reliably and correctly estimates the uncertainties, which typically have
values $\sigma_{\rm [Fe/H]} \approx 0.2\,\mathrm{dex}$. In addition, when
compared to the metallicity results derived using the RAVE DR5 pipeline, it is
evident that we achieve better accuracy than the pipeline and therefore more
reliably evaluate the very metal-poor subsample. Based on the repeated
observations of the same stars, our method gives very consistent results. The
method used in this work can also easily be extended to other large-scale data
sets, including the data from the Gaia mission and the upcoming 4MOST
survey.
| 0 | 1 | 0 | 0 | 0 | 0 |
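The first stage of the three-stage method in 16,030 (isolating candidates in a low-dimensional projection of spectrum space) can be sketched with a stock t-SNE implementation; the placeholder array below stands in for RAVE spectra, which this sketch does not reproduce:

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder for continuum-normalized spectra: (n_stars, n_pixels).
spectra = np.random.default_rng(0).random((200, 100))

# Stage 1: 2-D projection of spectrum space; very metal-poor candidates are
# then selected as a region of this plane (stages 2 and 3, the CaII triplet
# equivalent widths and the photometric calibration, are not shown here).
embedding = TSNE(n_components=2, perplexity=30).fit_transform(spectra)
print(embedding.shape)   # (200, 2)
```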
16,031 | Estimating causal effects of time-dependent exposures on a binary endpoint in a high-dimensional setting | Recently, the intervention calculus when the DAG is absent (IDA) method was
developed to estimate lower bounds of causal effects from observational
high-dimensional data. Originally it was introduced to assess the effect of
baseline biomarkers which do not vary over time. However, in many clinical
settings, measurements of biomarkers are repeated at fixed time points during
treatment exposure and, therefore, this method needs to be extended. The
purpose of this paper is then to extend the first step of the IDA, the
Peter-Clark (PC) algorithm, to a time-dependent exposure in the context of a
binary outcome. We generalised the PC-algorithm to take into account the
chronological order of repeated measurements of the exposure and propose to
apply the IDA with our new version, the chronologically ordered PC-algorithm
(COPC-algorithm). A simulation study has been performed before applying the
method for estimating causal effects of time-dependent immunological biomarkers
on toxicity, death and progression in patients with metastatic melanoma. The
simulation study showed that the completed partially directed acyclic graphs
(CPDAGs) obtained using COPC-algorithm were structurally closer to the true
CPDAG than CPDAGs obtained using PC-algorithm. Also, causal effects were more
accurate when they were estimated based on CPDAGs obtained using
COPC-algorithm. Moreover, CPDAGs obtained by the COPC-algorithm allowed
removing non-chronological arrows, i.e., arrows from a variable measured at a
time t to a variable measured at an earlier time t' < t. Bidirected edges were
less present in CPDAGs obtained with the COPC-algorithm, supporting the fact
that there was less variability in the causal effects estimated from these
CPDAGs. The COPC-algorithm provides CPDAGs that keep the chronological
structure present in the data, thus allowing estimation of lower bounds of the
causal effects of time-dependent biomarkers.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,032 | A Note on Cyclotomic Integers | In this note, we present a new proof that the cyclotomic integers constitute
the full ring of integers in the cyclotomic field.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,033 | Stochastic homogenization for functionals with anisotropic rescaling and non-coercive Hamilton-Jacobi equations | We study the stochastic homogenization for a Cauchy problem for a first-order
Hamilton-Jacobi equation whose operator is not coercive w.r.t. the gradient
variable. We look at Hamiltonians like $H(x,\sigma(x)p,\omega)$ where
$\sigma(x)$ is a matrix associated to a Carnot group. The rescaling considered
is consistent with the underlying Carnot group structure, thus anisotropic. We
will prove that under suitable assumptions for the Hamiltonian, the solutions
of the $\varepsilon$-problem converge to a deterministic function which can be
characterized as the unique (viscosity) solution of a suitable deterministic
Hamilton-Jacobi problem.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,034 | Automatic Software Repair: a Bibliography | This article presents a survey on automatic software repair. Automatic
software repair consists of automatically finding a solution to software bugs
without human intervention. This article considers all kinds of repairs. First,
it discusses behavioral repair where test suites, contracts, models, and
crashing inputs are taken as oracle. Second, it discusses state repair, also
known as runtime repair or runtime recovery, with techniques such as checkpoint
and restart, reconfiguration, and invariant restoration. The uniqueness of this
article is that it spans the research communities that contribute to this body
of knowledge: software engineering, dependability, operating systems,
programming languages, and security. It provides a novel and structured
overview of the diversity of bug oracles and repair operators used in the
literature.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,035 | The design and the performance of stratospheric mission in the search for the Schumann resonances | The technical details of a stratospheric balloon mission aimed at measuring
the Schumann resonances are described. The gondola is designed specifically
for measuring the faint effects of ELF (extremely low frequency)
electromagnetic phenomena. The prototype met the design requirements. The ELF
measuring system worked properly for the entire mission; however, the level of
signal amplification, chosen on the basis of ground-level measurements, was
too high: movement of the gondola in the Earth's magnetic field induced a
signal in the antenna that saturated the measuring system. This effect will be
taken into account in the planning of future missions. A large telemetry
dataset was gathered during the experiment and is currently being processed.
The payload also included biological material as well as electronic equipment
that was tested under extreme conditions.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,036 | A Systematic Comparison of Deep Learning Architectures in an Autonomous Vehicle | Self-driving technology is advancing rapidly --- albeit with significant
challenges and limitations. This progress is largely due to recent developments
in deep learning algorithms. To date, however, there has been no systematic
comparison of how different deep learning architectures perform at such tasks,
or an attempt to determine a correlation between classification performance and
performance in an actual vehicle, a potentially critical factor in developing
self-driving systems. Here, we introduce the first controlled comparison of
multiple deep-learning architectures in an end-to-end autonomous driving task
across multiple testing conditions. We compared performance, under identical
driving conditions, across seven architectures including a fully-connected
network, a simple 2-layer CNN, AlexNet, VGG-16, Inception-V3, ResNet, and an
LSTM by assessing the number of laps each model was able to successfully
complete without crashing while traversing an indoor racetrack. We compared
performance across models when the conditions exactly matched those in training
as well as when the local environment and track were configured differently and
objects that were not included in the training dataset were placed on the track
in various positions. In addition, we considered performance using several
different data types for training and testing including single grayscale and
color frames, and multiple grayscale frames stacked together in sequence. With
the exception of a fully-connected network, all models performed reasonably
well (around or above 80\%) and most very well (~95\%) on at least one input
type but with considerable variation across models and inputs. Overall,
AlexNet, operating on single color frames as input, achieved the best level of
performance (100\% success rate in phase one and 55\% in phase two) while
VGG-16 performed well most consistently across image types.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,037 | On the Theory of Light Propagation in Crystalline Dielectrics | A synoptic view of the long-established theory of light propagation in
crystalline dielectrics is presented. We provide a new exact solution for the
microscopic local electromagnetic field, thus disclosing the role of the
divergence-free (transversal) and curl-free (longitudinal) parts of the
electromagnetic field inside a material as a function of the density of
polarizable atoms. Our results enable fast and efficient calculation of the
photonic bandstructure and also the (non-local) dielectric tensor, solely with
the crystalline symmetry and atom-individual polarizabilities as input.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,038 | Microfluidic control of nucleation and growth of calcite | The nucleation and growth of calcite is an important research topic in
science and industry. Both macroscopic and microscopic observations of calcite
growth have been reported. Now, with the development of microfluidic devices,
we can focus on the nucleation and growth of a single calcite crystal. By
changing the flow rate, the concentration of the fluid is controlled. We
introduce a new method to study calcite growth in situ and measure the growth
rate of calcite in a microfluidic channel.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,039 | Using low-frequency pulsar observations to study the 3-D structure of the Galactic magnetic field | The Galactic magnetic field (GMF) plays a role in many astrophysical
processes and is a significant foreground to cosmological signals, such as the
Epoch of Reionization (EoR), but is not yet well understood. Dispersion and
Faraday rotation measurements (DMs and RMs, respectively) towards a large
number of pulsars provide an efficient method to probe the three-dimensional
structure of the GMF. Low-frequency polarisation observations with large
fractional bandwidth can be used to measure precise DMs and RMs. This is
demonstrated by a catalogue of RMs (corrected for ionospheric Faraday rotation)
from the Low Frequency Array (LOFAR), with a growing complementary catalogue in
the southern hemisphere from the Murchison Widefield Array (MWA). These data
further our knowledge of the three-dimensional GMF, particularly towards the
Galactic halo. Recently constructed or upgraded pathfinder and precursor
telescopes, such as LOFAR and the MWA, have reinvigorated low-frequency science
and represent progress towards the construction of the Square Kilometre Array
(SKA), which will make significant advancements in studies of astrophysical
magnetic fields in the future. A key science driver for the SKA-Low is to study
the EoR, for which pulsar and polarisation data can provide valuable insights
in terms of Galactic foreground conditions.
| 0 | 1 | 0 | 0 | 0 | 0 |
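The relation that lets combined DMs and RMs (as in 16,039) probe the field in three dimensions is standard: with $\mathrm{DM} = \int n_e\,\mathrm{d}l$ and $\mathrm{RM} = 0.812 \int n_e B_\parallel\,\mathrm{d}l$ (with $n_e$ in cm$^{-3}$, $B_\parallel$ in $\mu$G and $l$ in pc), their ratio gives the electron-density-weighted mean line-of-sight field,

$$
\langle B_\parallel \rangle = 1.232\; \frac{\mathrm{RM}/(\mathrm{rad\,m^{-2}})}{\mathrm{DM}/(\mathrm{pc\,cm^{-3}})}\;\mu\mathrm{G},
$$

so each pulsar with a precise DM and an ionosphere-corrected RM contributes one such sample along its line of sight.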
16,040 | Simple mechanical cues could explain adipose tissue morphology | The mechanisms by which organs acquire their functional structure and
maintain it (homeostasis) over time are still largely unknown. In this paper,
we investigate this question for adipose tissue. Adipose tissue can represent
20 to 50% of the body weight. Its investigation is key to overcoming a large
array of metabolic disorders that heavily afflict populations worldwide.
Adipose tissue consists of lobular clusters of adipocytes surrounded by an
organized collagen fiber network. By supplying substrates needed for
adipogenesis, vasculature was believed to induce the regrouping of adipocytes
near capillary extremities. This paper shows that the emergence of these
structures could be explained by simple mechanical interactions between the
adipocytes and the collagen fibers. Our assumption is that the fiber network
resists the pressure induced by the growing adipocytes and forces them to
regroup into clusters. Reciprocally, cell clusters force the fibers to merge
into a well-organized network. We validate this hypothesis by means of a
two-dimensional Individual Based Model (IBM) of interacting adipocytes and
extra-cellular-matrix fiber elements. The model produces structures that
compare quantitatively well to the experimental observations. Our model seems
to indicate that cell clusters could spontaneously emerge as a result of simple
mechanical interactions between cells and fibers and surprisingly, vasculature
is not directly needed for these structures to emerge.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,041 | Comments on the National Toxicology Program Report on Cancer, Rats and Cell Phone Radiation | With the National Toxicology Program issuing its final report on cancer, rats
and cell phone radiation, one can draw the following conclusions from their
data. There is a roughly linear relationship between gliomas (brain cancers)
and schwannomas (cancers of the nerve sheaths around the heart) with increased
absorption of 900 MHz radiofrequency radiation for male rats. The rate of these
cancers in female rats is about one third the rate in male rats; the rate of
gliomas in female humans is about two thirds the rate in male humans. Both of
these observations can be explained by a decrease in sensitivity to chemical
carcinogenesis in both female rats and female humans. The increase in male rat
life spans with increased radiofrequency absorption is due to a reduction in
kidney failure from a decrease in food intake. No such similar increase in the
life span of humans who use cell phones is expected.
| 0 | 0 | 0 | 0 | 1 | 0 |
16,042 | Estimates for the coefficients of differential dimension polynomials | We answer the following long-standing question of Kolchin: given a system of
algebraic-differential equations $\Sigma(x_1,\dots,x_n)=0$ in $m$ derivatives
over a differential field of characteristic zero, is there a computable bound,
that only depends on the order of the system (and on the fixed data $m$ and
$n$), for the typical differential dimension of any prime component of
$\Sigma$? We give a positive answer in a strong form; that is, we compute a
(lower and upper) bound for all the coefficients of the Kolchin polynomial of
every such prime component. We then show that, if we look at those components
of a specified differential type, we can compute a significantly better bound
for the typical differential dimension. This latter improvement comes from new
combinatorial results on characteristic sets, in combination with the classical
theorems of Macaulay and Gotzmann on the growth of Hilbert-Samuel functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,043 | Giant perpendicular exchange bias with antiferromagnetic MnN | We investigated an out-of-plane exchange bias system that is based on the
antiferromagnet MnN. Polycrystalline, highly textured film stacks of Ta / MnN /
CoFeB / MgO / Ta were grown on SiO$_x$ by (reactive) magnetron sputtering and
studied by x-ray diffraction and Kerr magnetometry. Nontrivial modifications of
the exchange bias and the perpendicular magnetic anisotropy were observed both
as functions of film thicknesses as well as field cooling temperatures. In
optimized film stacks, a giant perpendicular exchange bias of 3600 Oe and a
coercive field of 350 Oe were observed at room temperature. The effective
interfacial exchange energy is estimated to be $J_\mathrm{eff} = 0.24$ mJ/m$^2$
and the effective uniaxial anisotropy constant of the antiferromagnet is
$K_\mathrm{eff} = 24$ kJ/m$^3$. The maximum effective perpendicular anisotropy
field of the CoFeB layer is $H_\mathrm{ani} = 3400$ Oe. These values are larger
than any previously reported values. These results possibly open a route to
magnetically stable, exchange biased perpendicularly magnetized spin valves.
| 0 | 1 | 0 | 0 | 0 | 0 |
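As a consistency check on the numbers in the 16,043 abstract, the interfacial exchange energy and the loop shift are commonly related (in one SI convention; the paper's definition may differ) by

$$
J_\mathrm{eff} = \mu_0\, M_\mathrm{s}\, t_\mathrm{FM}\, H_\mathrm{eb},
$$

so the quoted $J_\mathrm{eff} = 0.24$ mJ/m$^2$ corresponds to the measured $H_\mathrm{eb} = 3600$ Oe through the CoFeB saturation magnetization and thickness entering via $M_\mathrm{s} t_\mathrm{FM}$.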
16,044 | A Fuzzy Community-Based Recommender System Using PageRank | Recommendation systems are widely used by user service providers, especially
those that interact with large communities of users. This paper introduces a
recommender system based on community detection. The recommendation is
provided using the local and global similarities between users. The local
information is obtained from communities, while the global information is
based on the ratings. Here, a new fuzzy community detection method using the
personalized PageRank metaphor is introduced. The fuzzy membership values of
the users to the communities are utilized to define a similarity measure. The
method is evaluated by using two well-known datasets: MovieLens and FilmTrust.
The results show that our method outperforms recent recommender systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
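A minimal numpy sketch of the personalized-PageRank ingredient of 16,044 (building fuzzy memberships and turning them into a similarity); the seed-per-community construction and the cosine similarity are assumptions of this sketch, since the abstract does not spell out the exact construction:

```python
import numpy as np

def personalized_pagerank(A, seed, alpha=0.85, n_iter=100):
    """Power iteration for personalized PageRank with restart at `seed`.
    Assumes no isolated nodes (every column of A has a nonzero sum)."""
    n = A.shape[0]
    P = A / A.sum(axis=0, keepdims=True)      # column-stochastic transitions
    e = np.zeros(n); e[seed] = 1.0
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = alpha * P @ r + (1 - alpha) * e
    return r

def fuzzy_memberships(A, seeds):
    """One PPR vector per community seed, normalized so each node's
    memberships sum to 1 (a fuzzy community assignment)."""
    M = np.stack([personalized_pagerank(A, s) for s in seeds], axis=1)
    return M / M.sum(axis=1, keepdims=True)

def membership_similarity(M, i, j):
    """Cosine similarity between two users' fuzzy membership vectors."""
    return M[i] @ M[j] / (np.linalg.norm(M[i]) * np.linalg.norm(M[j]) + 1e-12)

# Toy usage: two triangles joined by one edge, with one seed per community.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
M = fuzzy_memberships(A, seeds=[0, 5])
print(membership_similarity(M, 0, 1), membership_similarity(M, 0, 4))
```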
16,045 | Foundations of Complex Event Processing | Complex Event Processing (CEP) has emerged as the unifying field for
technologies that require processing and correlating distributed data sources
in real-time. CEP finds applications in diverse domains, which has resulted in
a large number of proposals for expressing and processing complex events.
However, existing CEP languages lack a clear semantics, making them hard
to understand and generalize. Moreover, there are no general techniques for
evaluating CEP query languages with clear performance guarantees.
In this paper we embark on the task of giving a rigorous and efficient
framework to CEP. We propose a formal language for specifying complex events,
called CEL, that contains the main features used in the literature and has a
denotational and compositional semantics. We also formalize the so-called
selection strategies, which had only been presented as by-design extensions to
existing frameworks. With a well-defined semantics at hand, we study how to
efficiently evaluate CEL for processing complex events in the case of unary
filters. We start by studying the syntactical properties of CEL and propose
rewriting optimization techniques for simplifying the evaluation of formulas.
Then, we introduce a formal computational model for CEP, called complex event
automata (CEA), and study how to compile CEL formulas into CEA. Furthermore, we
provide efficient algorithms for evaluating CEA over event streams using
constant time per event followed by constant-delay enumeration of the results.
By gathering these results together, we propose a framework for efficiently
evaluating CEL with unary filters. Finally, we show experimentally that this
framework consistently outperforms the competition, and even over trivial
queries can be orders of magnitude more efficient.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,046 | Prior Variances and Depth Un-Biased Estimators in EEG Focal Source Imaging | In electroencephalography (EEG) source imaging, the inverse source estimates
are depth biased in such a way that their maxima are often close to the
sensors. This depth bias can be quantified by inspecting the statistics (mean
and co-variance) of these estimates. In this paper, we find, within a Bayesian
framework, weighting factors for the L1/L2 sparsity prior such that the
resulting maximum a posteriori (MAP) estimates do not favor any particular
source location. Due to the lack of an analytical expression for the MAP
estimate when this sparsity prior is used, we solve for the weights
indirectly. First, we calculate the Gaussian prior variances that lead to
depth-unbiased maximum a posteriori (MAP) estimates. Subsequently, we
approximate the corresponding weight factors in the sparsity prior based on
the solved Gaussian prior variances. Finally, we reconstruct focal source configurations using the
sparsity prior with the proposed weights and two other commonly used choices of
weights that can be found in literature.
| 0 | 1 | 0 | 0 | 0 | 0 |
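For contrast with the sparsity-prior weights derived in 16,046, the familiar Gaussian-prior (L2) version of depth weighting fits in a few lines; this is the classical column-norm heuristic, not the paper's weights:

```python
import numpy as np

def depth_weighted_mne(L, y, lam=1e-2):
    """Classical depth-weighting heuristic for the EEG inverse problem
    (weighted minimum-norm estimate): weight each source by the inverse norm
    of its leadfield column so deep sources are not penalized away.
    L: leadfield (n_sensors, n_sources); y: measured potentials."""
    w = 1.0 / np.linalg.norm(L, axis=0)       # per-source weights
    W2 = np.diag(w ** 2)                      # Gaussian prior covariance ~ W^2
    G = L @ W2 @ L.T + lam * np.eye(L.shape[0])
    return W2 @ L.T @ np.linalg.solve(G, y)   # MAP estimate under N(0, W^2)
```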
16,047 | Standard Galactic Field RR Lyrae. I. Optical to Mid-infrared Phased Photometry | We present a multi-wavelength compilation of new and previously-published
photometry for 55 Galactic field RR Lyrae variables. Individual studies,
spanning a time baseline of up to 30 years, are self-consistently phased to
produce light curves in 10 photometric bands covering the wavelength range from
0.4 to 4.5 microns. Data smoothing via the GLOESS technique is described and
applied to generate high-fidelity light curves, from which mean magnitudes,
amplitudes, rise-times, and times of minimum and maximum light are derived.
60,000 observations were acquired using the new robotic Three-hundred
MilliMeter Telescope (TMMT), which was first deployed at the Carnegie
Observatories in Pasadena, CA, and is now permanently installed and operating
at Las Campanas Observatory in Chile. We provide a full description of the TMMT
hardware, software, and data reduction pipeline. Archival photometry
contributed approximately 31,000 observations. Photometric data are given in
the standard Johnson UBV, Kron-Cousins RI, 2MASS JHK, and Spitzer [3.6] & [4.5]
bandpasses.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,048 | An Updated Literature Review of Distance Correlation and its Applications to Time Series | The concept of distance covariance/correlation was introduced recently to
characterize dependence among vectors of random variables. We review some
statistical aspects of distance covariance/correlation function and we
demonstrate its applicability to time series analysis. We will see that the
auto-distance covariance/correlation function is able to identify nonlinear
relationships and can be employed for testing the i.i.d.\ hypothesis.
Comparisons with other measures of dependence are included.
| 0 | 0 | 0 | 1 | 0 | 0 |
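A minimal sample distance correlation in Python, with a quadratic dependence
that Pearson correlation misses; the example data are illustrative (the
time-series version in the review uses auto-distance correlation at lags):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D series (Szekely et al. 2007)."""
    x = np.asarray(x, float).reshape(-1, 1)
    y = np.asarray(y, float).reshape(-1, 1)
    a = squareform(pdist(x))                              # pairwise distances
    b = squareform(pdist(y))
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()     # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

# Nonlinear (quadratic) dependence: Pearson ~ 0, distance correlation > 0.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = x ** 2
print(np.corrcoef(x, y)[0, 1], distance_correlation(x, y))
```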
16,049 | A Novel Method for Extrinsic Calibration of Multiple RGB-D Cameras Using Descriptor-Based Patterns | This letter presents a novel method to estimate the relative poses between
RGB-D cameras with minimal overlapping fields of view in a panoramic RGB-D
camera system. This calibration problem is relevant to applications such as
indoor 3D mapping and robot navigation that can benefit from a 360$^\circ$
field of view using RGB-D cameras. The proposed approach relies on
descriptor-based patterns to provide well-matched 2D keypoints in the case of a
minimal overlapping field of view between cameras. Integrating the matched 2D
keypoints with corresponding depth values, a set of 3D matched keypoints are
constructed to calibrate multiple RGB-D cameras. Experiments validated the
accuracy and efficiency of the proposed calibration approach, both superior to
those of existing methods (800 ms vs. 5 seconds; rotation error of 0.56 degrees
vs. 1.6 degrees; and translation error of 1.80 cm vs. 2.5 cm).
| 1 | 0 | 0 | 0 | 0 | 0 |
16,050 | Particular type of gap in the spectrum of multiband superconductors | We show, that in contrast to the free electron model (standard BCS model), a
particular gap in the spectrum of multiband superconductors opens at some
distance from the Fermi energy, if conduction band is composed of hybridized
atomic orbitals of different symmetries. This gap has composite
superconducting-hybridization origin, because it exists only if both the
superconductivity and the hybridization between different kinds of orbitals are
present. Such spectral changes should therefore take place in many classes of
superconductors with multiorbital structure. These particular changes in the
spectrum at some distance from the Fermi level result in slow convergence of
the spectral weight of the optical conductivity even in quite conventional
superconductors with isotropic s-wave pairing mechanism.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,051 | Reliable Decision Support using Counterfactual Models | Decision-makers are faced with the challenge of estimating what is likely to
happen when they take an action. For instance, if I choose not to treat this
patient, are they likely to die? Practitioners commonly use supervised learning
algorithms to fit predictive models that help decision-makers reason about
likely future outcomes, but we show that this approach is unreliable, and
sometimes even dangerous. The key issue is that supervised learning algorithms
are highly sensitive to the policy used to choose actions in the training data,
which causes the model to capture relationships that do not generalize. We
propose using a different learning objective that predicts counterfactuals
instead of predicting outcomes under an existing action policy as in supervised
learning. To support decision-making in temporal settings, we introduce the
Counterfactual Gaussian Process (CGP) to predict the counterfactual future
progression of continuous-time trajectories under sequences of future actions.
We demonstrate the benefits of the CGP on two important decision-support tasks:
risk prediction and "what if?" reasoning for individualized treatment planning.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,052 | Robust temporal difference learning for critical domains | We present a new Q-function operator for temporal difference (TD) learning
methods that explicitly encodes robustness against significant rare events
(SRE) in critical domains. The operator, which we call the $\kappa$-operator,
makes it possible to learn a safe policy in a model-based fashion without actually
observing the SRE. We introduce single- and multi-agent robust TD methods using
the operator $\kappa$. We prove convergence of the operator to the optimal safe
Q-function with respect to the model using the theory of Generalized Markov
Decision Processes. In addition we prove convergence to the optimal Q-function
of the original MDP given that the probability of SREs vanishes. Empirical
evaluations demonstrate the superior performance of $\kappa$-based TD methods
both in the early learning phase as well as in the final converged stage. In
addition we show robustness of the proposed method to small model errors, as
well as its applicability in a multi-agent context.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,053 | Structure of the Entanglement Entropy of (3+1)D Gapped Phases of Matter | We study the entanglement entropy of gapped phases of matter in three spatial
dimensions. We focus in particular on size-independent contributions to the
entropy across entanglement surfaces of arbitrary topologies. We show that for
low energy fixed-point theories, the constant part of the entanglement entropy
across any surface can be reduced to a linear combination of the entropies
across a sphere and a torus. We first derive our results using strong
sub-additivity inequalities along with assumptions about the entanglement
entropy of fixed-point models, and identify the topological contribution by
considering the renormalization group flow; in this way we give an explicit
definition of topological entanglement entropy $S_{\mathrm{topo}}$ in (3+1)D,
which sharpens previous results. We illustrate our results using several
concrete examples and independent calculations, and show adding "twist" terms
to the Lagrangian can change $S_{\mathrm{topo}}$ in (3+1)D. For the generalized
Walker-Wang models, we find that the ground state degeneracy on a 3-torus is
given by $\exp(-3S_{\mathrm{topo}}[T^2])$ in terms of the topological
entanglement entropy across a 2-torus. We conjecture that a similar
relationship holds for Abelian theories in $(d+1)$ dimensional spacetime, with
the ground state degeneracy on the $d$-torus given by
$\exp(-dS_{\mathrm{topo}}[T^{d-1}])$.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,054 | Dynamics and asymptotic profiles of endemic equilibrium for two frequency-dependent SIS epidemic models with cross-diffusion | This paper is concerned with two frequency-dependent SIS epidemic
reaction-diffusion models in heterogeneous environment, with a cross-diffusion
term modeling the effect that susceptible individuals tend to move away from
higher concentration of infected individuals. It is first shown that the
corresponding Neumann initial-boundary value problem in an $n$-dimensional
bounded smooth domain possesses a unique global classical solution which is
uniformly-in-time bounded regardless of the strength of the cross-diffusion and
the spatial dimension $n$. It is further shown that, even in the presence of
cross-diffusion, the models still admit threshold-type dynamics in terms of the
basic reproduction number $\mathcal R_0$; that is, the unique disease free
equilibrium is globally stable if $\mathcal R_0<1$, while if $\mathcal R_0>1$,
the disease is uniformly persistent and there is an endemic equilibrium, which
is globally stable in some special cases with weak chemotactic sensitivity. Our
results on the asymptotic profiles of endemic equilibrium illustrate that
restricting the motility of susceptible population may eliminate the infectious
disease entirely for the first model with constant total population but fails
for the second model with varying total population. In particular, this implies
that such cross-diffusion does not contribute to the elimination of the
infectious disease modelled by the second one.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,055 | Stable Clustering Ansatz, Consistency Relations and Gravity Dual of Large-Scale Structure | Gravitational clustering in the nonlinear regime remains poorly understood.
Gravity dual of gravitational clustering has recently been proposed as a means
to study the nonlinear regime. The stable clustering ansatz remains a key
ingredient to our understanding of gravitational clustering in the highly
nonlinear regime. We study certain aspects of violation of the stable
clustering ansatz in the gravity dual of Large Scale Structure (LSS). We extend
the recent studies of gravitational clustering using AdS gravity dual to take
into account possible departure from the stable clustering ansatz and to
arbitrary dimensions. Next, we extend the recently introduced consistency
relations to arbitrary dimensions. We use the consistency relations to test the
commonly used models of gravitational clustering including the halo models and
hierarchical ansätze. In particular we establish a tower of consistency
relations for the hierarchical amplitudes $Q, R_a, R_b, S_a, S_b, S_c$, etc., as
functions of the scaled peculiar velocity $h$. We also study the variants of
popular halo models in this context. In contrast to recent claims, none of
these models, in their simplest incarnation, seem to satisfy the consistency
relations in the soft limit.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,056 | Heavy-Tailed Analogues of the Covariance Matrix for ICA | Independent Component Analysis (ICA) is the problem of learning a square
matrix $A$, given samples of $X=AS$, where $S$ is a random vector with
independent coordinates. Most existing algorithms are provably efficient only
when each $S_i$ has a finite and moderately valued fourth moment. However, there
are practical applications where this assumption need not be true, such as
speech and finance. Algorithms have been proposed for heavy-tailed ICA, but
they are not practical, using random walks and the full power of the ellipsoid
algorithm multiple times. The main contributions of this paper are:
(1) A practical algorithm for heavy-tailed ICA that we call HTICA. We provide
theoretical guarantees and show that it outperforms other algorithms in some
heavy-tailed regimes, both on real and synthetic data. Like the current
state-of-the-art, the new algorithm is based on the centroid body (a first
moment analogue of the covariance matrix). Unlike the state-of-the-art, our
algorithm is practically efficient. To achieve this, we use explicit analytic
representations of the centroid body, which bypasses the use of the ellipsoid
method and random walks.
(2) We study how heavy tails affect different ICA algorithms, including
HTICA. Somewhat surprisingly, we show that some algorithms that use the
covariance matrix or higher moments can successfully solve a range of ICA
instances with infinite second moment. We study this theoretically and
experimentally, with both synthetic and real-world heavy-tailed data.
| 0 | 0 | 0 | 1 | 0 | 0 |
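A quick experiment in the spirit of contribution (2) -- running a standard ICA
routine on heavy-tailed data -- can be set up as follows; the Student-t
sources, mixing matrix, and scikit-learn's FastICA are illustrative choices,
not the paper's HTICA algorithm:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Heavy-tailed independent sources: Student-t with df=2.5 has finite variance
# but infinite fourth moment, violating the usual kurtosis-based assumptions.
rng = np.random.default_rng(1)
S = rng.standard_t(df=2.5, size=(10000, 2))
A = np.array([[1.0, 0.6], [0.4, 1.0]])       # "unknown" mixing matrix
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                 # sources up to sign/order/scale
print(ica.mixing_)                           # estimated mixing matrix
```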
16,057 | Breaking through the bandwidth barrier in distributed fiber vibration sensing by sub-Nyquist randomized sampling | The round trip time of the light pulse limits the maximum detectable
frequency response range of vibration in phase-sensitive optical time domain
reflectometry ({\phi}-OTDR). We propose a method to break the frequency
response range restriction of the {\phi}-OTDR system by randomly modulating the
light pulse interval, which enables random sampling for every vibration point in
a long sensing fiber. This sub-Nyquist randomized sampling method is suited to
detecting vibration signals with sparse wideband frequency content. A resonance
vibration signal of up to MHz order with dozens of frequency components and a
1.153 MHz single-frequency vibration signal are clearly identified over a
sensing range of 9.6 km with a maximum sampling rate of only 10 kHz.
| 0 | 1 | 0 | 0 | 0 | 0 |
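The principle can be illustrated with a hypothetical sketch: randomized
sampling instants let a periodogram identify a tone far above the Nyquist
limit of the mean sampling rate. The intervals, tone, and SciPy Lomb-Scargle
periodogram below are assumptions for illustration, not the paper's
processing chain:

```python
import numpy as np
from scipy.signal import lombscargle

# A 1.153 MHz tone probed at an *average* rate of only 10 kHz but with
# randomized pulse intervals; a uniform 10 kHz grid would alias this tone.
rng = np.random.default_rng(0)
t = np.cumsum(rng.uniform(0.5, 1.5, size=1000)) * 1e-4   # mean interval 100 us
y = np.sin(2 * np.pi * 1.153e6 * t)

freqs = np.linspace(1.10e6, 1.20e6, 50001)               # search band, Hz
power = lombscargle(t, y, 2 * np.pi * freqs, normalize=True)
print(freqs[np.argmax(power)])                           # ~1.153e6
```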
16,058 | The Continuity of the Gauge Fixing Condition $n\cdot\partial n\cdot A=0$ for $SU(2)$ Gauge Theory | The continuity of the gauge fixing condition $n\cdot\partial n\cdot A=0$ for
$SU(2)$ gauge theory on the manifold $R\bigotimes S^{1}\bigotimes
S^{1}\bigotimes S^{1}$ is studied here, where $n^{\mu}$ stands for the
directional vector along the $x_{i}$-axis ($i=1,2,3$). It is proved that the
gauge fixing condition is continuous given that the gauge potentials are
differentiable with
continuous derivatives on the manifold $R\bigotimes S^{1}\bigotimes
S^{1}\bigotimes S^{1}$ which is compact.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,059 | Estimating functions for jump-diffusions | Asymptotic theory for approximate martingale estimating functions is
generalised to diffusions with finite-activity jumps, when the sampling
frequency and terminal sampling time go to infinity. Rate optimality and
efficiency are of particular concern. Under mild assumptions, it is shown that
estimators of drift, diffusion, and jump parameters are consistent and
asymptotically normal, as well as rate-optimal for the drift and jump
parameters. Additional conditions are derived, which ensure rate-optimality for
the diffusion parameter as well as efficiency for all parameters. The findings
indicate a potentially fruitful direction for the further development of
estimation for jump-diffusions.
| 0 | 0 | 1 | 1 | 0 | 0 |
16,060 | Automatic Skin Lesion Analysis using Large-scale Dermoscopy Images and Deep Residual Networks | Malignant melanoma has one of the most rapidly increasing incidences in the
world and has a considerable mortality rate. Early diagnosis is particularly
important since melanoma can be cured with prompt excision. Dermoscopy images
play an important role in the non-invasive early detection of melanoma [1].
However, melanoma detection using human vision alone can be subjective,
inaccurate and poorly reproducible even among experienced dermatologists. This
is attributed to the challenges in interpreting images with diverse
characteristics including lesions of varying sizes and shapes, lesions that may
have fuzzy boundaries, different skin colors and the presence of hair [2].
Therefore, the automatic analysis of dermoscopy images is a valuable aid for
clinical decision making and for image-based diagnosis to identify diseases
such as melanoma [1-4]. Deep residual networks (ResNets) have achieved
state-of-the-art results in image classification and detection related problems
[5-8]. In this ISIC 2017 skin lesion analysis challenge [9], we propose to
exploit deep ResNets for robust visual feature learning and
representations.
| 1 | 0 | 0 | 0 | 0 | 0 |
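A minimal fine-tuning sketch of the kind of ResNet pipeline described above,
assuming PyTorch/torchvision; the binary head, optimizer settings, and dummy
batch are illustrative, not the authors' challenge configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

# Adapt an ImageNet-pretrained ResNet-50 to a two-class lesion problem
# (melanoma vs. benign) by replacing the final fully connected layer.
model = models.resnet50(pretrained=True)     # downloads ImageNet weights
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# One illustrative training step on a dummy batch of 224x224 RGB crops.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```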
16,061 | Quadrature-based features for kernel approximation | We consider the problem of improving kernel approximation via randomized
feature maps. These maps arise as Monte Carlo approximations to integral
representations of kernel functions and scale up kernel methods for larger
datasets. Based on an efficient numerical integration technique, we propose a
unifying approach that reinterprets previous random feature methods and
extends them to yield better estimates of the kernel approximation. We derive the
convergence behaviour and conduct an extensive empirical study that supports
our hypothesis.
| 0 | 0 | 0 | 1 | 0 | 0 |
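For context, the Monte Carlo random-feature baseline that quadrature-based
constructions improve upon fits in a few lines; the RBF kernel, bandwidth, and
feature count below are illustrative assumptions:

```python
import numpy as np

def rff_features(X, n_features=500, gamma=1.0, rng=None):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2), via Monte Carlo frequency samples."""
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
Z = rff_features(X, n_features=20000)
K_approx = Z @ Z.T
K_exact = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))
print(np.abs(K_approx - K_exact).max())      # small Monte Carlo error
```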
16,062 | The impossibility of "fairness": a generalized impossibility result for decisions | Various measures can be used to estimate bias or unfairness in a predictor.
Previous work has already established that some of these measures are
incompatible with each other. Here we show that, when groups differ in
prevalence of the predicted event, several intuitive, reasonable measures of
fairness (probability of positive prediction given occurrence or
non-occurrence; probability of occurrence given prediction or non-prediction;
and ratio of predictions over occurrences for each group) are all mutually
exclusive: if one of them is equal among groups, the other two must differ. The
only exceptions are for perfect, or trivial (always-positive or
always-negative) predictors. As a consequence, any non-perfect, non-trivial
predictor must necessarily be "unfair" under two out of three reasonable sets
of criteria. This result readily generalizes to a wide range of well-known
statistical quantities (sensitivity, specificity, false positive rate,
precision, etc.), all of which can be divided into three mutually exclusive
groups. Importantly, the result applies to all predictors, whether algorithmic
or human. We conclude with possible ways to handle this effect when assessing
and designing prediction methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
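The mutual exclusivity is easy to reproduce numerically. In the hypothetical
simulation below, equalizing sensitivity across two groups that differ only in
prevalence forces precision and the prediction/occurrence ratio to differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(prev, n=200_000):
    y = rng.random(n) < prev
    score = 0.6 * y + rng.random(n)           # same noisy score in both groups
    return y, score

def measures(y, pred):
    tp = np.sum(pred & y)
    return (tp / y.sum(),                     # sensitivity: P(pred | event)
            tp / pred.sum(),                  # precision:   P(event | pred)
            pred.sum() / y.sum())             # predictions per occurrence

yA, sA = simulate(prev=0.10)                  # groups differ only in prevalence
yB, sB = simulate(prev=0.30)

predA = sA > 0.9
target_sens = measures(yA, predA)[0]
tB = np.quantile(sB[yB], 1 - target_sens)     # equalize sensitivity across groups
predB = sB > tB

print("A:", measures(yA, predA))
print("B:", measures(yB, predB))              # same sensitivity, different rest
```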
16,063 | Measuring Quantum Entropy | The entropy of a quantum system is a measure of its randomness, and has
applications in measuring quantum entanglement. We study the problem of
measuring the von Neumann entropy, $S(\rho)$, and Rényi entropy,
$S_\alpha(\rho)$ of an unknown mixed quantum state $\rho$ in $d$ dimensions,
given access to independent copies of $\rho$.
We provide an algorithm with copy complexity $O(d^{2/\alpha})$ for estimating
$S_\alpha(\rho)$ for $\alpha<1$, and copy complexity $O(d^{2})$ for estimating
$S(\rho)$, and $S_\alpha(\rho)$ for non-integral $\alpha>1$. These bounds are
at least quadratic in $d$, which is the order dependence on the number of
copies required for learning the entire state $\rho$. For integral $\alpha>1$,
on the other hand, we provide an algorithm for estimating $S_\alpha(\rho)$ with
a sub-quadratic copy complexity of $O(d^{2-2/\alpha})$. We characterize the
copy complexity for integral $\alpha>1$ up to constant factors by providing
matching lower bounds. For other values of $\alpha$, and the von Neumann
entropy, we show lower bounds on the algorithm that achieves the upper bound.
This shows that we either need new algorithms for better upper bounds, or
better lower bounds to tighten the results.
For non-integral $\alpha$, and the von Neumann entropy, we consider the well
known Empirical Young Diagram (EYD) algorithm, which is the analogue of
empirical plug-in estimator in classical distribution estimation. As a
corollary, we strengthen a lower bound on the copy complexity of the EYD
algorithm for learning the maximally mixed state by showing that the lower
bound holds with exponential probability (which was previously known to hold
with a constant probability). For integral $\alpha>1$, we provide new
concentration results of certain polynomials that arise in Kerov algebra of
Young diagrams.
| 1 | 0 | 0 | 0 | 0 | 0 |
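The target quantities can be stated concretely in a few lines of NumPy; the
random 4-dimensional mixed state is illustrative, and the sketch computes the
entropies from the full state rather than from copies, which is the estimation
problem the paper addresses:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho), from the eigenvalues of rho."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log(lam)).sum())

def renyi_entropy(rho, alpha):
    """S_alpha(rho) = log Tr(rho^alpha) / (1 - alpha), for alpha != 1."""
    lam = np.linalg.eigvalsh(rho)
    return float(np.log((lam[lam > 1e-12] ** alpha).sum()) / (1 - alpha))

# Random mixed state in d = 4 dimensions: rho = G G* / Tr(G G*).
rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho).real

print(von_neumann_entropy(rho), renyi_entropy(rho, 2), np.log(4))  # <= log d
```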
16,064 | Elliptic Transverse Circulation Equations for Balanced Models in a Generalized Vertical Coordinate | When studying tropical cyclones using the $f$-plane, axisymmetric, gradient
balanced model, there arises a second-order elliptic equation for the
transverse circulation. Similarly, when studying zonally symmetric meridional
circulations near the equator (the tropical Hadley cells) or the katabatically
forced meridional circulation over Antarctica, there also arises a second order
elliptic equation. These elliptic equations are usually derived in the pressure
coordinate or the potential temperature coordinate, since the thermal wind
equation has simple non-Jacobian forms in these two vertical coordinates.
Because of the large variations in surface pressure that can occur in tropical
cyclones and over the Antarctic ice sheet, there is interest in using other
vertical coordinates, e.g., the height coordinate, the classical
$\sigma$-coordinate, or some type of hybrid coordinate typically used in global
numerical weather prediction or climate models. Because the thermal wind
equation in these coordinates takes a Jacobian form, the derivation of the
elliptic transverse circulation equation is not as simple. Here we present a
method for deriving the elliptic transverse circulation equation in a
generalized vertical coordinate, which allows for many particular vertical
coordinates, such as height, pressure, log-pressure, potential temperature,
classical $\sigma$, and most hybrid cases. Advantages and disadvantages of the
various coordinates are discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,065 | The Voice Conversion Challenge 2018: Promoting Development of Parallel and Nonparallel Methods | We present the Voice Conversion Challenge 2018, designed as a follow up to
the 2016 edition with the aim of providing a common framework for evaluating
and comparing different state-of-the-art voice conversion (VC) systems. The
objective of the challenge was to perform speaker conversion (i.e. transform
the vocal identity) of a source speaker to a target speaker while maintaining
linguistic information. As an update to the previous challenge, we considered
both parallel and non-parallel data to form the Hub and Spoke tasks,
respectively. A total of 23 teams from around the world submitted their
systems, 11 of them additionally participated in the optional Spoke task. A
large-scale crowdsourced perceptual evaluation was then carried out to rate the
submitted converted speech in terms of naturalness and similarity to the target
speaker identity. In this paper, we present a brief summary of the
state-of-the-art techniques for VC, followed by a detailed explanation of the
challenge tasks and the results that were obtained.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,066 | Levels of distribution for sieve problems in prehomogeneous vector spaces | In a companion paper, we developed an efficient algebraic method for
computing the Fourier transforms of certain functions defined on prehomogeneous
vector spaces over finite fields, and we carried out these computations in a
variety of cases.
Here we develop a method, based on Fourier analysis and algebraic geometry,
which exploits these Fourier transform formulas to yield level of distribution
results, in the sense of analytic number theory. Such results are of the shape
typically required for a variety of sieve methods. As an example of such an
application we prove that there are $\gg X/\log(X)$ quartic fields whose
discriminant is squarefree, bounded above by X, and has at most eight prime
factors.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,067 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,068 | Cluster-glass phase in pyrochlore XY antiferromagnets with quenched disorder | We study the impact of quenched disorder (random exchange couplings or site
dilution) on easy-plane pyrochlore antiferromagnets. In the clean system,
order-by-disorder selects a magnetically ordered state from a classically
degenerate manifold. In the presence of randomness, however, different orders
can be chosen locally depending on details of the disorder configuration. Using
a combination of analytical considerations and classical Monte-Carlo
simulations, we argue that any long-range-ordered magnetic state is destroyed
beyond a critical level of randomness where the system breaks into magnetic
domains due to random exchange anisotropies, becoming, therefore, a glass of
spin clusters, in accordance with the available experimental data. These random
anisotropies originate from off-diagonal exchange couplings in the microscopic
Hamiltonian, establishing their relevance to other magnets with strong
spin-orbit coupling.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,069 | Non-line-of-sight tracking of people at long range | A remote-sensing system that can determine the position of hidden objects has
applications in many critical real-life scenarios, such as search and rescue
missions and safe autonomous driving. Previous work has shown the ability to
range and image objects hidden from the direct line of sight, employing
advanced optical imaging technologies aimed at small objects at short range. In
this work we demonstrate a long-range tracking system based on single laser
illumination and single-pixel single-photon detection. This enables us to track
one or more people hidden from view at a stand-off distance of over 50~m. These
results pave the way towards next generation LiDAR systems that will
reconstruct not only the direct-view scene but also the main elements hidden
behind walls or corners.
| 1 | 1 | 0 | 0 | 0 | 0 |
16,070 | DeSIGN: Design Inspiration from Generative Networks | Can an algorithm create original and compelling fashion designs to serve as
an inspirational assistant? To help answer this question, we design and
investigate different image generation models associated with different loss
functions to boost creativity in fashion generation. The dimensions of our
explorations include: (i) different Generative Adversarial Networks
architectures that start from noise vectors to generate fashion items, (ii)
novel loss functions that encourage novelty, inspired from Sharma-Mittal
divergence, a generalized mutual information measure for the widely used
relative entropies such as Kullback-Leibler, and (iii) a generation process
following the key elements of fashion design (disentangling shape and texture
components). A key challenge of this study is the evaluation of generated
designs and the retrieval of best ones, hence we put together an evaluation
protocol associating automatic metrics and human experimental studies that we
hope will help ease future research. We show that our proposed creativity
criterion yields better overall appreciation than the one employed in Creative
Adversarial Networks. In the end, about 61% of our images are thought to be
created by human designers rather than by a computer while also being
considered original per our human subject experiments, and our proposed loss
scores the highest compared to existing losses in both novelty and likability.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,071 | Challenges of facet analysis and concept placement in universal classifications: the example of architecture in UDC | The paper discusses the challenges of faceted vocabulary organization in
universal classifications which treat the universe of knowledge as a coherent
whole and in which the concepts and subjects in different disciplines are
shared, related and combined. The authors illustrate the challenges of the
facet analytical approach using, as an example, the revision of class 72 in
UDC. The paper reports on the research undertaken in 2013 as preparation for
the revision. This consisted of analysis of concept organization in the UDC
schedules in comparison with the Art & Architecture Thesaurus and class W of
the Bliss Bibliographic Classification. The paper illustrates how such research
can contribute to a better understanding of the field and may lead to
improvements in the facet structure of this segment of the UDC vocabulary.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,072 | Coalescent-based species tree estimation: a stochastic Farris transform | The reconstruction of a species phylogeny from genomic data faces two
significant hurdles: 1) the trees describing the evolution of each individual
gene--i.e., the gene trees--may differ from the species phylogeny and 2) the
molecular sequences corresponding to each gene often provide limited
information about the gene trees themselves. In this paper we consider an
approach to species tree reconstruction that addresses both these hurdles.
Specifically, we propose an algorithm for phylogeny reconstruction under the
multispecies coalescent model with a standard model of site substitution. The
multispecies coalescent is commonly used to model gene tree discordance due to
incomplete lineage sorting, a well-studied population-genetic effect.
In previous work, an information-theoretic trade-off was derived in this
context between the number of loci, $m$, needed for an accurate reconstruction
and the length of the locus sequences, $k$. It was shown that to reconstruct an
internal branch of length $f$, one needs $m$ to be of the order of $1/[f^{2}
\sqrt{k}]$. That previous result was obtained under the molecular clock
assumption, i.e., under the assumption that mutation rates (as well as
population sizes) are constant across the species phylogeny.
Here we generalize this result beyond the restrictive molecular clock
assumption, and obtain a new reconstruction algorithm that has the same data
requirement (up to log factors). Our main contribution is a novel reduction to
the molecular clock case under the multispecies coalescent. As a corollary, we
also obtain a new identifiability result of independent interest: for any
species tree with $n \geq 3$ species, the rooted species tree can be identified
from the distribution of its unrooted weighted gene trees even in the absence
of a molecular clock.
| 1 | 0 | 1 | 1 | 0 | 0 |
16,073 | Variance-Reduced Stochastic Learning by Networked Agents under Random Reshuffling | A new amortized variance-reduced gradient (AVRG) algorithm was developed in
\cite{ying2017convergence}, which has a constant storage requirement in
comparison to SAGA and balanced gradient computations in comparison to SVRG.
One key advantage of the AVRG strategy is its amenability to decentralized
implementations. In this work, we show how AVRG can be extended to the network
case where multiple learning agents are assumed to be connected by a graph
topology. In this scenario, each agent observes data that is spatially
distributed and all agents are only allowed to communicate with direct
neighbors. Moreover, the amount of data observed by the individual agents may
differ drastically. For such situations, the balanced gradient computation
property of AVRG becomes a real advantage in reducing idle time caused by
unbalanced local data storage requirements, which is characteristic of other
reduced-variance gradient algorithms. The resulting diffusion-AVRG algorithm is
shown to have linear convergence to the exact solution, and is much more memory
efficient than other alternative algorithms. In addition, we propose a
mini-batch strategy to balance the communication and computation efficiency for
diffusion-AVRG. When a proper batch size is employed, it is observed in
simulations that diffusion-AVRG is more computationally efficient than exact
diffusion or EXTRA while maintaining almost the same communication efficiency.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,074 | Blind Community Detection from Low-rank Excitations of a Graph Filter | This paper considers a novel framework to detect communities in a graph from
the observation of signals at its nodes. We model the observed signals as noisy
outputs of an unknown network process -- represented as a graph filter -- that
is excited by a set of low-rank inputs. Rather than learning the precise
parameters of the graph itself, the proposed method retrieves the community
structure directly. Furthermore, as in blind system identification methods, it
does not require knowledge of the system excitation. The paper shows that
communities can be detected by applying spectral clustering to the low-rank
output covariance matrix obtained from the graph signals. The performance
analysis indicates that the community detection accuracy depends on the
spectral properties of the graph filter considered. Furthermore, we show that
the accuracy can be improved via a low-rank matrix decomposition method when
the excitation signals are known. Numerical experiments demonstrate that our
approach is effective for analyzing network data from diffusion, consumers, and
social dynamics.
| 1 | 0 | 0 | 0 | 0 | 0 |
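A toy end-to-end version of this pipeline, under illustrative assumptions (a
two-block stochastic block model, a low-pass filter $H=(I+0.5L)^{-1}$, and
rank-5 white excitation), can be sketched as follows:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two planted communities; a low-pass graph filter excited by low-rank inputs;
# nodes clustered from the spectrum of the output covariance.
rng = np.random.default_rng(0)
n, k, R, T = 60, 2, 5, 5000
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None], 0.5, 0.05)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                       # symmetric SBM adjacency

L = np.diag(A.sum(1)) - A
H = np.linalg.inv(np.eye(n) + 0.5 * L)               # low-pass graph filter
B = rng.standard_normal((n, R))                      # fixed low-rank excitation
Y = H @ B @ rng.standard_normal((R, T))              # observed graph signals

C = (Y @ Y.T) / T                                    # output covariance
_, vecs = np.linalg.eigh(C)
est = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vecs[:, -k:])
print((est == labels).mean())                        # ~1.0 (or ~0.0: label swap)
```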
16,075 | Realistic theory of electronic correlations in nanoscopic systems | Nanostructures with open-shell transition metal or molecular constituents
often host strong electronic correlations and are highly sensitive to atomistic
material details. This tutorial review discusses method developments and
applications of theoretical approaches for the realistic description of the
electronic and magnetic properties of nanostructures with correlated electrons.
First, the implementation of a flexible interface between density functional
theory and a variant of dynamical mean field theory (DMFT) highly suitable for
the simulation of complex correlated structures is explained and illustrated.
On the DMFT side, this interface is largely based on recent developments of
quantum Monte Carlo and exact diagonalization techniques allowing for efficient
descriptions of general four fermion Coulomb interactions, reduced symmetries
and spin-orbit coupling, which are explained here. With the examples of the Cr
(001) surfaces, magnetic adatoms, and molecular systems it is shown how the
interplay of Hubbard U and Hund's J determines charge and spin fluctuations and
how these interactions drive different sorts of correlation effects in
nanosystems. Non-local interactions and correlations present a particular
challenge for the theory of low dimensional systems. We present our method
developments addressing these two challenges, i.e., advancements of the
dynamical vertex approximation and a combination of the constrained random
phase approximation with continuum medium theories. We demonstrate how
non-local interaction and correlation phenomena are controlled not only by
dimensionality but also by coupling to the environment which is typically
important for determining the physics of nanosystems.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,076 | The Multiplier Problem of the Calculus of Variations for Scalar Ordinary Differential Equations | In the inverse problem of the calculus of variations one is asked to find a
Lagrangian and a multiplier so that a given differential equation, after
multiplying with the multiplier, becomes the Euler--Lagrange equation for the
Lagrangian. An answer to this problem for the case of a scalar ordinary
differential equation of order $2n, n\geq 2,$ is proposed.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,077 | Representation categories of Mackey Lie algebras as universal monoidal categories | Let $\mathbb{K}$ be an algebraically closed field of characteristic $0$. We
study a monoidal category $\mathbb{T}_\alpha$ which is universal among all
symmetric $\mathbb{K}$-linear monoidal categories generated by two objects $A$
and $B$ such that $A$ has a, possibly transfinite, filtration. We construct
$\mathbb{T}_\alpha$ as a category of representations of the Lie algebra
$\mathfrak{gl}^M(V_*,V)$ consisting of endomorphisms of a fixed diagonalizable
pairing $V_*\otimes V\to \mathbb{K}$ of vector spaces $V_*$ and $V$ of
dimension $\alpha$. Here $\alpha$ is an arbitrary cardinal number. We describe
explicitly the simple and the injective objects of $\mathbb{T}_\alpha$ and
prove that the category $\mathbb{T}_\alpha$ is Koszul. We pay special attention
to the case where the filtration on $A$ is finite. In this case
$\alpha=\aleph_t$ for $t\in\mathbb{Z}_{\geq 0}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,078 | Dielectric media considered as vacuum with sources | Conventional textbook treatments on electromagnetic wave propagation consider
the induced charge and current densities as "bound", and therefore absorb them
into a refractive index. In principle it must also be possible to treat the
medium as vacuum, but with explicit charge and current densities. This gives a
more direct, physical description. However, since the induced waves propagate
in vacuum in this picture, it is not straightforward to realize that the
wavelength becomes different compared to that in vacuum. We provide an
explanation, and also associated time-domain simulations. As an extra bonus the
results turn out to illuminate the behavior of metamaterials.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,079 | Game-Theoretic Design of Secure and Resilient Distributed Support Vector Machines with Adversaries | With a large number of sensors and control units in networked systems,
distributed support vector machines (DSVMs) play a fundamental role in scalable
and efficient multi-sensor classification and prediction tasks. However, DSVMs
are vulnerable to adversaries who can modify and generate data to deceive the
system into misclassification and misprediction. This work aims to design defense
strategies for DSVM learner against a potential adversary. We establish a
game-theoretic framework to capture the conflicting interests between the DSVM
learner and the attacker. The Nash equilibrium of the game allows predicting
the outcome of learning algorithms in adversarial environments, and enhancing
the resilience of the machine learning through dynamic distributed learning
algorithms. We show that the DSVM learner is less vulnerable when it uses a
balanced network with fewer nodes and higher degree. We also show that adding
more training samples is an efficient defense strategy against an attacker. We
present secure and resilient DSVM algorithms with verification method and
rejection method, and show their resiliency against adversary with numerical
experiments.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,080 | Rank conditional coverage and confidence intervals in high dimensional problems | Confidence interval procedures used in low dimensional settings are often
inappropriate for high dimensional applications. When a large number of
parameters are estimated, marginal confidence intervals associated with the
most significant estimates have very low coverage rates: They are too small and
centered at biased estimates. The problem of forming confidence intervals in
high dimensional settings has previously been studied through the lens of
selection adjustment. In this framework, the goal is to control the proportion
of non-covering intervals formed for selected parameters.
In this paper we approach the problem by considering the relationship between
rank and coverage probability. Marginal confidence intervals have very low
coverage rates for significant parameters and high rates for parameters with
more boring estimates. Many selection adjusted intervals display the same
pattern. This connection motivates us to propose a new coverage criterion for
confidence intervals in multiple testing/covering problems --- the rank
conditional coverage (RCC). This is the expected coverage rate of an interval
given the significance ranking for the associated estimator. We propose
interval construction via bootstrapping, which produces small intervals that
have a rank conditional coverage close to the nominal level. These methods are
implemented in the R package rcc.
| 0 | 0 | 0 | 1 | 0 | 0 |
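The rank-coverage phenomenon is easy to see in a hypothetical simulation:
marginal 95% intervals cover at the nominal rate on average, yet under-cover
once one conditions on a high significance rank. This sketch estimates the RCC
empirically rather than via the paper's bootstrap construction:

```python
import numpy as np

# Many parameters estimated with unit-variance noise; marginal 95% intervals
# under-cover for the top-ranked (most significant) estimates.
rng = np.random.default_rng(0)
n, reps, z = 200, 2000, 1.96
theta = rng.standard_normal(n)                 # fixed true parameters
covered_by_rank = np.zeros(n)

for _ in range(reps):
    est = theta + rng.standard_normal(n)
    order = np.argsort(-np.abs(est))           # rank by significance
    cover = (theta >= est - z) & (theta <= est + z)
    covered_by_rank += cover[order]

rcc = covered_by_rank / reps                   # coverage given significance rank
print("rank 1:", rcc[0], " median rank:", rcc[n // 2])  # top rank << 0.95
```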
16,081 | Phase-Encoded Hyperpolarized Nanodiamond for Magnetic Resonance Imaging | Surface-functionalized nanomaterials can act as theranostic agents that
detect disease and track biological processes using hyperpolarized magnetic
resonance imaging (MRI). Candidate materials are sparse however, requiring
spinful nuclei with long spin-lattice relaxation (T1) and spin-dephasing times
(T2), together with a reservoir of electrons to impart hyperpolarization. Here,
we demonstrate the versatility of the nanodiamond material system for
hyperpolarized 13C MRI, making use of its intrinsic paramagnetic defect
centers, hours-long nuclear T1 times, and T2 times suitable for spatially
resolving millimeter-scale structures. Combining these properties, we enable a
new imaging modality that exploits the phase-contrast between spins encoded
with a hyperpolarization that is aligned, or anti-aligned with the external
magnetic field. The use of phase-encoded hyperpolarization allows nanodiamonds
to be tagged and distinguished in an MRI based on their spin-orientation alone,
and could permit the action of specific bio-functionalized complexes to be
directly compared and imaged.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,082 | k-Space Deep Learning for Reference-free EPI Ghost Correction | Nyquist ghost artifacts in EPI images originate from a phase mismatch
between the even and odd echoes. However, conventional correction methods using
reference scans often produce erroneous results especially in high-field MRI
due to the non-linear and time-varying local magnetic field changes. Recently,
it was shown that the problem of ghost correction can be transformed into
k-space data interpolation problem that can be solved using the annihilating
filter-based low-rank Hankel structured matrix completion approach (ALOHA).
Another recent discovery has shown that the deep convolutional neural network
is closely related to the data-driven Hankel matrix decomposition. By
synergistically combining these findings, here we propose a k-space deep
learning approach that immediately corrects the k-space phase mismatch without
a reference scan. Reconstruction results using 7T in vivo data showed that the
proposed reference-free k-space deep learning approach for EPI ghost correction
significantly improves the image quality compared to the existing methods, and
the computing time is several orders of magnitude faster.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,083 | Binary Search in Graphs Revisited | In the classical binary search in a path the aim is to detect an unknown
target by asking as few queries as possible, where each query reveals the
direction to the target. This binary search algorithm has been recently
extended by [Emamjomeh-Zadeh et al., STOC, 2016] to the problem of detecting a
target in an arbitrary graph. Similarly to the classical case in the path, the
algorithm of Emamjomeh-Zadeh et al. maintains a candidates' set for the target,
while each query probes an appropriately chosen vertex -- the "median" -- which
minimises a potential $\Phi$ among the vertices of the candidates' set. In this
paper we address three open questions posed by Emamjomeh-Zadeh et al., namely
(a) detecting a target when the query response is a direction to an
approximately shortest path to the target, (b) detecting a target when querying
a vertex that is an approximate median of the current candidates' set (instead
of an exact one), and (c) detecting multiple targets, for which to the best of
our knowledge no progress has been made so far. We resolve questions (a) and
(b) by providing appropriate upper and lower bounds, as well as a new potential
$\Gamma$ that guarantees efficient target detection even by querying an
approximate median each time. With respect to (c), we initiate a systematic
study for detecting two targets in graphs and we identify sufficient conditions
on the queries that allow for strong (linear) lower bounds and strong
(polylogarithmic) upper bounds for the number of queries. All of our positive
results can be derived using our new potential $\Gamma$ that allows querying
approximate medians.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,084 | Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients | The ADAM optimizer is exceedingly popular in the deep learning community.
Often it works very well, sometimes it doesn't. Why? We interpret ADAM as a
combination of two aspects: for each weight, the update direction is determined
by the sign of stochastic gradients, whereas the update magnitude is determined
by an estimate of their relative variance. We disentangle these two aspects and
analyze them in isolation, gaining insight into the mechanisms underlying ADAM.
This analysis also extends recent results on adverse effects of ADAM on
generalization, isolating the sign aspect as the problematic one. Transferring
the variance adaptation to SGD gives rise to a novel method, completing the
practitioner's toolbox for problems where ADAM fails.
| 1 | 0 | 0 | 1 | 0 | 0 |
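The sign/magnitude factorization can be made explicit for a single ADAM step;
the NumPy re-derivation below is illustrative and is not the paper's
variance-adaptation method for SGD:

```python
import numpy as np

def adam_update(g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM step, then its sign/magnitude reading: the direction is
    sign(m_hat), and the magnitude is damped by |m_hat|/sqrt(v_hat) <= 1,
    a factor governed by the estimated second moment of the gradients."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)                  # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)                  # bias-corrected second moment
    step = lr * m_hat / (np.sqrt(v_hat) + eps)

    # Exact factorization: step = lr * sign * damping factor.
    factor = np.abs(m_hat) / (np.sqrt(v_hat) + eps)
    assert np.allclose(step, lr * np.sign(m_hat) * factor)
    return step, m, v

g = np.array([0.3, -2.0, 0.01])
step, m, v = adam_update(g, m=np.zeros(3), v=np.zeros(3), t=1)
print(step)
```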
16,085 | Econometric Modeling of Regional Electricity Spot Prices in the Australian Market | Wholesale electricity markets are increasingly integrated via high voltage
interconnectors, and inter-regional trade in electricity is growing. To model
this, we consider a spatial equilibrium model of price formation, where
constraints on inter-regional flows result in three distinct equilibria in
prices. We use this to motivate an econometric model for the distribution of
observed electricity spot prices that captures many of their unique empirical
characteristics. The econometric model features supply and inter-regional trade
cost functions, which are estimated using Bayesian monotonic regression
smoothing methodology. A copula multivariate time series model is employed to
capture additional dependence -- both cross-sectional and serial-- in regional
prices. The marginal distributions are nonparametric, with means given by the
regression means. The model has the advantage of preserving the heavy
right-hand tail in the predictive densities of price. We fit the model to
half-hourly spot price data in the five interconnected regions of the
Australian national electricity market. The fitted model is then used to
measure how both supply and price shocks in one region are transmitted to the
distribution of prices in all regions in subsequent periods. Finally, to
validate our econometric model, we show that prices forecast using the proposed
model compare favorably with those from some benchmark alternatives.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,086 | Accurate Effective Medium Theory for the Analysis of Spoof Localized Surface Plasmons in Textured Metallic Cylinders | It has recently been demonstrated that textured closed surfaces made of
perfect electric conductors (PECs) can mimic highly localized
surface plasmons (LSPs). Here, we propose an effective medium which can
accurately model LSP resonances in a two-dimensional periodically decorated PEC
cylinder. The accuracy of previous models is limited to structures with
deep-subwavelength and high number of grooves. However, we show that our model
can successfully predict the ultra-sharp LSP resonances which exist in
structures with relatively lower number of grooves. Such resonances are not
correctly predictable with previous models that give some spurious resonances.
The success of the proposed model stems from the incorporation of an
effective surface conductivity which is created at the interface of the
cylinder and the homogeneous medium surrounding the structure. This surface
conductivity models the effect of higher diffracted orders which are excited in
the periodic structure. The validity of the proposed model is verified by
full-wave simulations.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,087 | Joint Regression and Ranking for Image Enhancement | Research on automated image enhancement has gained momentum in recent years,
partially due to the need for easy-to-use tools for enhancing pictures captured
by ubiquitous cameras on mobile devices. Many of the existing leading methods
employ machine-learning-based techniques, by which some enhancement parameters
for a given image are found by relating the image to the training images with
known enhancement parameters. While knowing the structure of the parameter
space can facilitate search for the optimal solution, none of the existing
methods has explicitly modeled and learned that structure. This paper presents
an end-to-end, novel joint regression and ranking approach to model the
interaction between desired enhancement parameters and images to be processed,
employing a Gaussian process (GP). GP allows searching for ideal parameters
using only the image features. The model naturally leads to a ranking technique
for comparing images in the induced feature space. Comparative evaluation using
the ground-truth based on the MIT-Adobe FiveK dataset plus subjective tests on
an additional data-set were used to demonstrate the effectiveness of the
proposed approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,088 | From arteries to boreholes: Transient response of a poroelastic cylinder to fluid injection | The radially outward flow of fluid through a porous medium occurs in many
practical problems, from transport across vascular walls to the pressurisation
of boreholes in the subsurface. When the driving pressure is non-negligible
relative to the stiffness of the solid structure, the poromechanical coupling
between the fluid and the solid can control both the steady-state and the
transient mechanics of the system. Very large pressures or very soft materials
lead to large deformations of the solid skeleton, which introduce kinematic and
constitutive nonlinearity that can have a nontrivial impact on these mechanics.
Here, we study the transient response of a poroelastic cylinder to sudden fluid
injection. We consider the impacts of kinematic and constitutive nonlinearity,
both separately and in combination, and we highlight the central role of
driving method in the evolution of the response. We show that the various
facets of nonlinearity may either accelerate or decelerate the transient
response relative to linear poroelasticity, depending on the boundary
conditions and the initial geometry, and that an imposed fluid pressure leads
to a much faster response than an imposed fluid flux.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,089 | Social Clustering in Epidemic Spread on Coevolving Networks | Even though transitivity is a central structural feature of social networks,
its influence on epidemic spread on coevolving networks has remained relatively
unexplored. Here we introduce and study an adaptive SIS epidemic model wherein
the infection and network coevolve with non-trivial probability to close
triangles during edge rewiring, leading to substantial reinforcement of network
transitivity. This new model provides a unique opportunity to study the role of
transitivity in altering the SIS dynamics on a coevolving network. Using
numerical simulations and Approximate Master Equations (AME), we identify and
examine a rich set of dynamical features in the new model. In many cases, the
AME including transitivity reinforcement provide accurate predictions of
stationary-state disease prevalence and network degree distributions.
Furthermore, for some parameter settings, the AME accurately trace the temporal
evolution of the system. We show that higher transitivity reinforcement in the
model leads to lower levels of infective individuals in the population when
closing a triangle is the only rewiring mechanism. These methods and results
may be useful in developing ideas and modeling strategies for controlling SIS
type epidemics.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,090 | A Mathematical Aspect of Hohenberg-Kohn Theorem | The Hohenberg-Kohn theorem plays a fundamental role in density functional
theory, which has become a basic tool for the study of electronic structure of
matter. In this article, we study the Hohenberg-Kohn theorem for a class of
external potentials based on a unique continuation principle.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,091 | Autocomplete 3D Sculpting | Digital sculpting is a popular means to create 3D models but remains a
challenging task for many users. This can be alleviated by recent advances in
data-driven and procedural modeling, albeit bounded by the underlying data and
procedures. We propose a 3D sculpting system that assists users in freely
creating models without predefined scope. With a brushing interface similar to
common sculpting tools, our system silently records and analyzes users'
workflows, and predicts what they might or should do in the future to reduce
input labor or enhance output quality. Users can accept, ignore, or modify the
suggestions and thus maintain full control and individual style. They can also
explicitly select and clone past workflows over output model regions. Our key
idea is to consider how a model is authored via dynamic workflows in addition
to what it is shaped in static geometry, for more accurate analysis of user
intentions and more general synthesis of shape structures. The workflows
contain potential repetitions for analysis and synthesis, including user inputs
(e.g. pen strokes on a pressure sensing tablet), model outputs (e.g. extrusions
on an object surface), and camera viewpoints. We evaluate our method via user
feedbacks and authored models.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,092 | KeyXtract Twitter Model - An Essential Keywords Extraction Model for Twitter Designed using NLP Tools | Since a tweet is limited to 140 characters, it is ambiguous and difficult for
traditional Natural Language Processing (NLP) tools to analyse. This research
presents KeyXtract which enhances the machine learning based Stanford CoreNLP
Part-of-Speech (POS) tagger with the Twitter model to extract essential
keywords from a tweet. The system was developed using rule-based parsers and
two corpora. The data for the research was obtained from a Twitter profile of a
telecommunication company. The system development consisted of two stages. At
the initial stage, a domain specific corpus was compiled after analysing the
tweets. The POS tagger extracted the Noun Phrases and Verb Phrases while the
parsers removed noise and extracted any other keywords missed by the POS
tagger. The system was evaluated using the Turing Test. After it was tested and
compared against Stanford CoreNLP, the second stage of the system was developed
addressing the shortcomings of the first stage. It was enhanced using Named
Entity Recognition and Lemmatization. The second stage was also tested using
the Turing test and its pass rate increased from 50.00% to 83.33%. The
performance of the final system output was measured using the F1 score.
Stanford CoreNLP with the Twitter model had an average F1 of 0.69 while the
improved system had a F1 of 0.77. The accuracy of the system could be improved
by using a complete domain specific corpus. Since the system used linguistic
features of a sentence, it could be applied to other NLP tools.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,093 | A Generalized Zero-Forcing Precoder with Successive Dirty-Paper Coding in MISO Broadcast Channels | In this paper, we consider precoder designs for multiuser
multiple-input-single-output (MISO) broadcasting channels. Instead of using a
traditional linear zero-forcing (ZF) precoder, we propose a generalized ZF
(GZF) precoder in conjunction with successive dirty-paper coding (DPC) for
data-transmissions, namely, the GZF-DP precoder, where the suffix \lq{}DP\rq{}
stands for \lq{}dirty-paper\rq{}. The GZF-DP precoder is designed to generate a
band-shaped and lower-triangular effective channel $\vec{F}$ such that only the
entries along the main diagonal and the $\nu$ first lower-diagonals can take
non-zero values. Utilizing the successive DPC, the known non-causal inter-user
interferences from the other (up to) $\nu$ users are canceled through
successive encoding. We analyze optimal GZF-DP precoder designs both for
sum-rate and minimum user-rate maximizations. Utilizing Lagrange multipliers,
the optimal precoders for both cases are solved in closed-forms in relation to
optimal power allocations. For the sum-rate maximization, the optimal power
allocation can be found through water-filling, but with modified water-levels
depending on the parameter $\nu$. While for the minimum user-rate maximization
that measures the quality of the service (QoS), the optimal power allocation is
directly solved in closed-form which also depends on $\nu$. Moreover, we
propose two low-complexity user-ordering algorithms for the GZF-DP precoder
designs for both maximizations, respectively. We show through numerical results
that, the proposed GZF-DP precoder with a small $\nu$ ($\leq\!3$) renders
significant rate increments compared to the previous precoder designs such as
the linear ZF and user-grouping based DPC (UG-DP) precoders.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,094 | Generating Connected Random Graphs | Sampling random graphs is essential in many applications, and often
algorithms use Markov chain Monte Carlo methods to sample uniformly from the
space of graphs. However, one often needs to sample graphs with some property
that standard approaches either cannot produce or produce only inefficiently.
In this paper, we are interested in sampling graphs from a
conditional ensemble of the underlying graph model. We present an algorithm to
generate samples from an ensemble of connected random graphs using a
Metropolis-Hastings framework. The algorithm extends to a general framework for
sampling from a known distribution of graphs, conditioned on a desired
property. We demonstrate the method by generating connected spatially embedded
random graphs, specifically the well-known Waxman network, and illustrate the
convergence and practicalities of the algorithm.
| 1 | 0 | 0 | 1 | 0 | 0 |
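A minimal sketch of the sampling idea in this abstract: Metropolis-Hastings over single-edge toggles, targeting an Erdős-Rényi $G(n,p)$ ensemble conditioned on connectivity. The paper's framework covers more general targets such as the Waxman model; the ER target, starting state, and parameters below are assumptions kept for brevity.

```python
# MH over edge toggles with target pi(G) ~ p^m (1-p)^(M-m) * 1[G connected].
# The proposal (toggle a uniform vertex pair) is symmetric, so the acceptance
# ratio is just the target ratio; disconnecting moves have zero probability.
import random
import networkx as nx

def mh_connected_er(n: int, p: float, steps: int, seed: int = 0) -> nx.Graph:
    rng = random.Random(seed)
    G = nx.path_graph(n)          # any connected starting state works
    nodes = list(G.nodes)
    for _ in range(steps):
        u, v = rng.sample(nodes, 2)
        if G.has_edge(u, v):
            # Edge removal: accept w.p. min(1, (1-p)/p), and only if the
            # graph stays connected.
            G.remove_edge(u, v)
            if not nx.is_connected(G) or rng.random() > min(1.0, (1 - p) / p):
                G.add_edge(u, v)  # reject: restore the edge
        else:
            # Edge addition never disconnects; accept w.p. min(1, p/(1-p)).
            if rng.random() < min(1.0, p / (1 - p)):
                G.add_edge(u, v)
    return G

G = mh_connected_er(n=30, p=0.08, steps=20000)
print(G.number_of_edges(), nx.is_connected(G))
```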
16,095 | Machine Learning Predicts Laboratory Earthquakes | Forecasting fault failure is a fundamental but elusive goal in earthquake
science. Here we show that by listening to the acoustic signal emitted by a
laboratory fault, machine learning can predict, with great accuracy, the time
remaining before it fails. These predictions are based solely on the
instantaneous physical characteristics of the acoustical signal, and do not
make use of its history. Surprisingly, machine learning identifies a signal
emitted from the fault zone previously thought to be low-amplitude noise that
enables failure forecasting throughout the laboratory quake cycle. We
hypothesize that applying this approach to continuous seismic data may lead to
significant advances in identifying currently unknown signals, in providing new
insights into fault physics, and in placing bounds on fault failure times.
| 0 | 1 | 0 | 0 | 0 | 0 |
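The abstract does not name the learner or the features, so the sketch below is only one plausible reading of the setup: a random-forest regressor trained on instantaneous window statistics of a synthetic acoustic signal whose variance grows toward each failure, predicting time-to-failure.

```python
# Regress "time remaining before failure" on instantaneous signal statistics.
# The learner, features, and synthetic stick-slip signal are all assumptions
# made for illustration; no laboratory data is used here.
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
cycle = 5000                                  # samples per synthetic cycle
t_to_fail = np.tile(np.linspace(1.0, 0.0, cycle), 8)     # sawtooth target
signal = rng.normal(0.0, 1.0 + 3.0 * (1.0 - t_to_fail))  # variance grows

win = 100
X, y = [], []
for i in range(0, len(signal) - win, win):
    w = signal[i:i + win]
    # Instantaneous (memoryless) features of the current window only.
    X.append([w.var(), stats.kurtosis(w), np.abs(w).max()])
    y.append(t_to_fail[i + win - 1])
X, y = np.array(X), np.array(y)

split = int(0.75 * len(X))   # train on early cycles, test on later ones
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
print("R^2 on held-out cycles:", model.score(X[split:], y[split:]))
```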
16,096 | New results of the search for hidden photons by means of a multicathode counter | A new upper limit on the mixing parameter for hidden photons with masses from
5 eV to 10 keV has been obtained from measurements taken over 78 days in two
configurations, R1 and R2, of a multicathode counter. In the region of maximal
sensitivity, from 10 eV to 30 eV, the upper limit obtained is less than
$4\times10^{-11}$. The measurements were performed at three temperatures:
26°C, 31°C and 36°C. A positive effect for the spontaneous emission of single
electrons has been observed at a significance of more than $7\sigma$. The
decreasing temperature dependence of the spontaneous emission rate indicates
that thermal emission from the copper cathode can be neglected.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,097 | Higher dimensional Steinhaus and Slater problems via homogeneous dynamics | The three gap theorem, also known as the Steinhaus conjecture or three
distance theorem, states that the gaps in the fractional parts of
$\alpha,2\alpha,\ldots, N\alpha$ take at most three distinct values. Motivated
by a question of Erdős, Geelen and Simpson, we explore a higher-dimensional
variant, which asks for the number of gaps between the fractional parts of a
linear form. Using the ergodic properties of the diagonal action on the space
of lattices, we prove that for almost all parameter values the number of
distinct gaps in the higher dimensional problem is unbounded. Our results in
particular improve earlier work by Boshernitzan, Dyson and Bleher et al. We
furthermore discuss a close link with the Littlewood conjecture in
multiplicative Diophantine approximation. Finally, we also demonstrate how our
methods can be adapted to obtain similar results for gaps between return times
of translations to shrinking regions on higher dimensional tori.
| 0 | 0 | 1 | 0 | 0 | 0 |
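The classical one-dimensional statement quoted in this abstract is easy to check numerically; the sketch below sorts the fractional parts of $\alpha, 2\alpha, \ldots, N\alpha$ on the circle and counts the distinct gap lengths. The higher-dimensional linear-form variant studied in the paper is not reproduced.

```python
# Numerical check of the three gap theorem: the circle gaps between the
# fractional parts of alpha, 2*alpha, ..., N*alpha take at most 3 values.
import numpy as np

def distinct_gaps(alpha: float, N: int) -> np.ndarray:
    pts = np.sort((alpha * np.arange(1, N + 1)) % 1.0)     # fractional parts
    gaps = np.diff(np.concatenate([pts, [pts[0] + 1.0]]))  # wrap-around gap
    return np.unique(np.round(gaps, 9))  # rounding absorbs float noise

print(distinct_gaps(np.sqrt(2.0), 100))   # at most three distinct values
print(distinct_gaps(np.pi, 1000))         # same bound for any alpha and N
```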
16,098 | Bayesian Unbiasing of the Gaia Space Mission Time Series Database | 21st-century astrophysicists are confronted with the herculean task of
distilling the maximum scientific return from extremely expensive and complex
space- or ground-based instrumental projects. This paper concentrates on
mining the time series catalog produced by the European Space Agency Gaia
mission, launched in December 2013. We tackle in particular the problem of
inferring the true distribution of the variability properties of Cepheid stars
in the Milky Way satellite galaxy known as the Large Magellanic Cloud (LMC).
Classical Cepheid stars are the first step in the so-called distance ladder: a
series of techniques to measure cosmological distances and decipher the
structure and evolution of our Universe. In this work we attempt to unbias the
catalog by modelling the aliasing phenomenon that distorts the true
distribution of periods. We have represented the problem as a two-level
generative Bayesian graphical model and used a Markov chain Monte Carlo (MCMC)
algorithm for inference (classification and regression). Our results with
synthetic data show that the system successfully removes systematic biases and
is able to infer the true hyperparameters of the frequency and magnitude
distributions.
| 0 | 1 | 0 | 0 | 0 | 0 |
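A toy illustration of the aliasing distortion the paper inverts: with a regular sampling cadence $f_s$, a fraction of true pulsation frequencies is reported at a first-order alias $|f_s - f|$, skewing the recovered period distribution. The cadence, alias rule, alias probability, and log-normal frequency distribution below are all simplifying assumptions; Gaia's actual scanning law is far more irregular.

```python
# Aliasing toy model: some stars' period searches lock onto |f_s - f|
# instead of the true frequency f, biasing the recovered periods.
import numpy as np

rng = np.random.default_rng(1)
f_s = 4.0                                    # assumed cadence frequency (1/day)
true_f = 10 ** rng.normal(-0.7, 0.2, 5000)   # toy log-normal frequencies

def reported(f: np.ndarray, alias_prob: float = 0.3) -> np.ndarray:
    alias = np.abs(f_s - f)                  # first-order alias
    take_alias = rng.random(f.size) < alias_prob
    return np.where(take_alias, alias, f)

obs_f = reported(true_f)
print("median true period (d):    ", np.median(1.0 / true_f))
print("median observed period (d):", np.median(1.0 / obs_f))
```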
16,099 | On the connectivity of the hyperbolicity region of irreducible polynomials | We give an elementary proof for the fact that an irreducible hyperbolic
polynomial has only one pair of hyperbolicity cones.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,100 | Matrix Completion via Factorizing Polynomials | Predicting unobserved entries of a partially observed matrix has found wide
applicability in several areas, such as recommender systems, computational
biology, and computer vision. Many scalable methods with rigorous theoretical
guarantees have been developed for algorithms where the matrix is factored into
low-rank components, and embeddings are learned for the row and column
entities. While there has been recent research on incorporating explicit side
information in the low-rank matrix factorization setting, often implicit
information can be gleaned from the data, via higher-order interactions among
entities. Such implicit information is especially useful in cases where the
data is very sparse, as is often the case in real-world datasets. In this
paper, we design a method to learn embeddings in the context of recommendation
systems, using the observation that higher powers of a graph transition
probability matrix encode the probability that a random walker will hit that
node in a given number of steps. We develop a coordinate descent algorithm to
solve the resulting optimization, that makes explicit computation of the higher
order powers of the matrix redundant, preserving sparsity and making
computations efficient. Experiments on several datasets show that our method,
that can use higher order information, outperforms methods that only use
explicitly available side information, those that use only second-order
implicit information and in some cases, methods based on deep neural networks
as well.
| 1 | 0 | 0 | 1 | 0 | 0 |
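A sketch of the computational trick this abstract alludes to: applying a polynomial of the graph transition matrix $T$ to a thin embedding matrix via Horner's rule, so the higher powers $T^k$ are never materialized. The polynomial coefficients and the coordinate-descent updates of the actual method are paper-specific and not reproduced.

```python
# Apply p(T) = c0*I + c1*T + c2*T^2 + ... to a thin matrix V using only
# sparse mat-vecs (Horner's rule), never forming the dense powers T^k.
import numpy as np
import scipy.sparse as sp

def transition_matrix(adj: sp.csr_matrix) -> sp.csr_matrix:
    deg = np.asarray(adj.sum(axis=1)).ravel()
    inv = sp.diags(1.0 / np.maximum(deg, 1e-12))
    return inv @ adj                      # row-stochastic random-walk matrix

def apply_polynomial(T: sp.csr_matrix, coeffs: list, V: np.ndarray) -> np.ndarray:
    out = coeffs[-1] * V
    for c in reversed(coeffs[:-1]):       # Horner: out <- T @ out + c * V
        out = T @ out + c * V
    return out

# Toy graph: a 5-cycle; V holds random rank-2 embeddings of its nodes.
rows = np.arange(5); cols = (rows + 1) % 5
adj = sp.csr_matrix((np.ones(5), (rows, cols)), shape=(5, 5))
adj = (adj + adj.T).tocsr()
T = transition_matrix(adj)
V = np.random.default_rng(2).normal(size=(5, 2))
print(apply_polynomial(T, [0.0, 1.0, 0.5], V))  # (T + 0.5*T^2) @ V
```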