title (string, length 7-239) | abstract (string, length 7-2.76k) | cs (int64, 0/1) | phy (int64, 0/1) | math (int64, 0/1) | stat (int64, 0/1) | quantitative biology (int64, 0/1) | quantitative finance (int64, 0/1) |
---|---|---|---|---|---|---|---|
Transductive Zero-Shot Learning with Adaptive Structural Embedding | Zero-shot learning (ZSL) endows the computer vision system with the
inferential capability to recognize instances of a new category never seen
before. Its two fundamental challenges are visual-semantic embedding and
domain adaptation in cross-modality learning and unseen class prediction steps,
respectively. To address both challenges, this paper presents two corresponding
methods named Adaptive STructural Embedding (ASTE) and Self-PAsed Selective
Strategy (SPASS), respectively. Specifically, ASTE formulates the
visual-semantic interactions in a latent structural SVM framework,
adaptively adjusting the slack variables to reflect the differing reliability
of training instances. In this way, reliable instances incur small penalties,
whereas less reliable instances incur more severe penalties, ensuring a more
discriminative embedding. On the other
hand, SPASS offers a framework to alleviate the domain shift problem in ZSL,
which exploits the unseen data in an easy-to-hard fashion. In particular, SPASS
borrows the idea of self-paced learning, iteratively selecting unseen
instances from reliable to less reliable to gradually adapt the knowledge from
the seen domain to the unseen domain. Subsequently, by combining SPASS and
ASTE, we present a self-paced Transductive ASTE (TASTE) method to progressively
reinforce the classification capacity. Extensive experiments on three benchmark
datasets (i.e., AwA, CUB, and aPY) demonstrate the superiorities of ASTE and
TASTE. Furthermore, we propose a fast training (FT) strategy to improve
the efficiency of most existing ZSL methods. The FT strategy is surprisingly
simple and general, speeding up the training of most existing methods by a
factor of 4 to 300 while preserving their original performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
A geometric attractor mechanism for self-organization of entorhinal grid modules | Grid cells in the medial entorhinal cortex (mEC) respond when an animal
occupies a periodic lattice of "grid fields" in the environment. The grids are
organized in modules with spatial periods clustered around discrete values
separated by constant ratios reported in the range 1.3-1.8. We propose a
mechanism for dynamical self-organization in the mEC that can produce this
modular structure. In attractor network models of grid formation, the period of
a single module is set by the length scale of recurrent inhibition between
neurons. We show that grid cells will instead form a hierarchy of discrete
modules if a continuous increase in inhibition distance along the dorso-ventral
axis of the mEC is accompanied by excitatory interactions along this axis.
Moreover, constant scale ratios between successive modules arise through
geometric relationships between triangular grids, whose lattice constants are
separated by $\sqrt{3} \approx 1.7$, $\sqrt{7}/2 \approx 1.3$, or other ratios.
We discuss how the interactions required by our model might be tested
experimentally and realized by circuits in the mEC.
| 0 | 0 | 0 | 0 | 1 | 0 |
Attacking Strategies and Temporal Analysis Involving Facebook Discussion Groups | Online social network (OSN) discussion groups are exerting significant
effects on political dialogue. In the absence of access control mechanisms, any
user can contribute to any OSN thread. Individuals can exploit this
characteristic to execute targeted attacks, which increases the potential for
subsequent malicious behaviors such as phishing and malware distribution. These
kinds of actions will also disrupt bridges among the media, politicians, and
their constituencies.
From a security-management perspective, the blending of malicious
cyberattacks with online social interactions introduces a new challenge. In
this paper we propose a novel approach to studying and understanding the
strategies that attackers use to spread malicious URLs across Facebook
discussion groups. We define and analyze problems tied to predicting the
potential for attacks focused on threads created by news media organizations.
We use a mix of macro static features and the micro dynamic evolution of posts
and threads to identify likely targets with greater than 90% accuracy. One of
our secondary goals is to make such predictions within a short (10 minute) time
frame. We hope that the data and analyses presented in this paper will
support a better understanding of attacker strategies and footprints, and
thereby the development of new system-management methodologies for handling
cyberattacks on social networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Perception-based energy functions in seam-cutting | Image stitching is challenging in consumer-level photography, due to
alignment difficulties in unconstrained shooting environments. Recent studies
show that seam-cutting approaches can effectively relieve artifacts generated
by local misalignment. Seam-cutting is normally formulated as energy
minimization; however, few existing methods consider human perception in
their energy functions, so the minimum-energy seam is sometimes not the least
visible one in the overlapping region. In this paper, we propose a
novel perception-based energy function in the seam-cutting framework, which
considers the nonlinearity and the nonuniformity of human perception in energy
minimization. Our perception-based approach adopts a sigmoid metric to
characterize the perception of color discrimination, and a saliency weight to
model the tendency of human eyes to pay more attention to salient objects. In
addition, our seam-cutting composition can be easily integrated into other
stitching pipelines. Experiments show that our method outperforms
seam-cutting with the conventional energy function, and a user study
demonstrates that our composed results are more consistent with human
perception.
| 1 | 0 | 0 | 0 | 0 | 0 |
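The sigmoid color metric and saliency weighting described in this abstract can be sketched as follows (a minimal NumPy illustration; the threshold `tau`, slope `k`, and weight `w_saliency` are illustrative placeholders, not the paper's actual parameter values):

```python
import numpy as np

def sigmoid_color_energy(diff, tau=10.0, k=0.5):
    """Map a raw color difference to a perceptual energy via a sigmoid.

    Differences below the discrimination threshold tau contribute little
    energy; differences well above it saturate, mimicking the nonlinearity
    of human color perception."""
    return 1.0 / (1.0 + np.exp(-k * (diff - tau)))

def seam_energy(color_diff, saliency, w_saliency=1.0):
    """Combine the sigmoid color term with a saliency weight, so that seams
    crossing salient regions are penalized more heavily."""
    return (1.0 + w_saliency * saliency) * sigmoid_color_energy(color_diff)
```

Any seam-cutting solver (e.g. graph cuts) can then minimize this energy in place of a plain color-difference term.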
Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo | Probabilistic modeling provides the capability to represent and manipulate
uncertainty in data, models, predictions and decisions. We are concerned with
the problem of learning probabilistic models of dynamical systems from measured
data. Specifically, we consider learning of probabilistic nonlinear state-space
models. There is no closed-form solution available for this problem, implying
that we are forced to use approximations. In this tutorial we will provide a
self-contained introduction to one of the state-of-the-art methods---the
particle Metropolis--Hastings algorithm---which has proven to offer a practical
approximation. This is a Monte Carlo based method, where the particle filter is
used to guide a Markov chain Monte Carlo method through the parameter space.
One of the key merits of the particle Metropolis--Hastings algorithm is that it
is guaranteed to converge to the "true solution" under mild assumptions,
despite being based on a particle filter with only a finite number of
particles. We will also provide a motivating numerical example illustrating the
method using a modeling language tailored for sequential Monte Carlo methods.
The intention of modeling languages of this kind is to open up the power of
sophisticated Monte Carlo methods---including particle
Metropolis--Hastings---to a large group of users without requiring them to know
all the underlying mathematical details.
| 1 | 0 | 0 | 1 | 0 | 0 |
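A minimal sketch of the particle Metropolis-Hastings idea described above, using a toy linear-Gaussian state-space model in place of a general nonlinear one (the model, particle count, proposal step, and flat prior are all illustrative assumptions, not taken from the tutorial):

```python
import numpy as np

def particle_filter_loglik(y, theta, N=200, rng=None):
    """Bootstrap particle filter for the toy model
    x_t = theta * x_{t-1} + v_t,  y_t = x_t + e_t,  v_t, e_t ~ N(0, 1).
    Returns an unbiased estimate of log p(y | theta)."""
    rng = np.random.default_rng(rng)
    x = rng.normal(size=N)                      # initial particles
    loglik = 0.0
    for yt in y:
        x = theta * x + rng.normal(size=N)      # propagate through the dynamics
        logw = -0.5 * (yt - x) ** 2 - 0.5 * np.log(2 * np.pi)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())          # log of the average weight
        x = x[rng.choice(N, size=N, p=w / w.sum())]  # multinomial resampling
    return loglik

def particle_metropolis_hastings(y, theta0, n_iter=500, step=0.1, rng=None):
    """Random-walk particle MH: the filter's likelihood estimate stands in
    for the intractable exact likelihood in the acceptance ratio
    (a flat prior on theta is assumed here)."""
    rng = np.random.default_rng(rng)
    theta, ll = theta0, particle_filter_loglik(y, theta0, rng=rng)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        ll_prop = particle_filter_loglik(y, prop, rng=rng)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain[i] = theta
    return chain
```

Despite the likelihood being estimated with finitely many particles, this chain targets the exact posterior, which is the key merit the abstract highlights.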
Deep Projective 3D Semantic Segmentation | Semantic segmentation of 3D point clouds is a challenging problem with
numerous real-world applications. While deep learning has revolutionized the
field of image semantic segmentation, its impact on point cloud data has been
limited so far. Recent attempts, based on 3D deep learning approaches
(3D-CNNs), have achieved below-expected results. Such methods require
voxelizations of the underlying point cloud data, leading to decreased spatial
resolution and increased memory consumption. Additionally, 3D-CNNs greatly
suffer from the limited availability of annotated datasets.
In this paper, we propose an alternative framework that avoids the
limitations of 3D-CNNs. Instead of directly solving the problem in 3D, we first
project the point cloud onto a set of synthetic 2D-images. These images are
then used as input to a 2D-CNN, designed for semantic segmentation. Finally,
the obtained prediction scores are re-projected to the point cloud to obtain
the segmentation results. We further investigate the impact of multiple
modalities, such as color, depth and surface normals, in a multi-stream network
architecture. Experiments are performed on the recent Semantic3D dataset. Our
approach sets a new state-of-the-art by achieving a relative gain of 7.9 %,
compared to the previous best approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
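The project-predict-reproject pipeline can be illustrated schematically (a simple orthographic projection is assumed for the synthetic views; the paper's actual rendering and 2D-CNN are replaced by placeholders):

```python
import numpy as np

def project_points(points, img_size=(64, 64)):
    """Orthographic projection of an (N, 3) point cloud onto a 2D pixel grid.
    Returns integer (u, v) pixel coordinates for each point."""
    h, w = img_size
    xy = points[:, :2]
    lo, hi = xy.min(0), xy.max(0)
    uv = (xy - lo) / np.maximum(hi - lo, 1e-9)          # normalize to [0, 1]
    px = np.minimum((uv * [w - 1, h - 1]).astype(int), [w - 1, h - 1])
    return px

def backproject_scores(pixel_scores, px):
    """Re-project image-space per-class prediction scores (H, W, C), e.g. from
    a 2D segmentation CNN, back onto the points via pixel lookup."""
    return pixel_scores[px[:, 1], px[:, 0]]
```

In the full method, scores from multiple synthetic views and input modalities would be fused per point before taking the argmax class.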
Dynamic Pricing with Finitely Many Unknown Valuations | Motivated by posted price auctions where buyers are grouped in an unknown
number of latent types characterized by their private values for the good on
sale, we investigate revenue maximization in stochastic dynamic pricing when
the distribution of buyers' private values is supported on an unknown set of
points in [0,1] of unknown cardinality K. This setting can be viewed as an
instance of a stochastic K-armed bandit problem where the location of the arms
(the K unknown valuations) must be learned as well. In the distribution-free
case, we show that our setting is just as hard as K-armed stochastic bandits:
we prove that no algorithm can achieve a regret significantly better than
$\sqrt{KT}$ (where $T$ is the time horizon), and present an efficient algorithm
matching this lower bound up to logarithmic factors. In the
distribution-dependent case, we show that for all K>2 our setting is strictly
harder than K-armed stochastic bandits by proving that it is impossible to
obtain regret bounds that grow logarithmically in time or slower. On the other
hand, when a lower bound $\gamma>0$ on the smallest drop in the demand curve is
known, we prove an upper bound on the regret of order $(1/\Delta+(\log \log
T)/\gamma^2)(K\log T)$. This is a significant improvement on previously known
regret bounds for discontinuous demand curves, which are at best of order
$(K^{12}/\gamma^8)\sqrt{T}$. When K=2 in the distribution-dependent case, the
hardness of our setting reduces to that of a stochastic 2-armed bandit: we
prove that an upper bound of order $(\log T)/\Delta$ (up to $\log\log$ factors)
on the regret can be achieved with no information on the demand curve. Finally,
we show a $O(\sqrt{T})$ upper bound on the regret for the setting in which the
buyers' decisions are nonstochastic, and the regret is measured with respect to
the best between two fixed valuations one of which is known to the seller.
| 0 | 0 | 0 | 1 | 0 | 0 |
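For reference, the stochastic K-armed bandit baseline that this pricing setting is compared against can be sketched with standard UCB1 (a textbook algorithm, not the paper's pricing algorithm; here the arm locations are known, whereas the paper's extra difficulty is that the K valuations must themselves be learned):

```python
import numpy as np

def ucb1(reward_fn, K, T, rng=0):
    """UCB1 on K stochastic arms with rewards in [0, 1]: pull each arm once,
    then repeatedly pull the arm maximizing mean + sqrt(2 ln t / n_k)."""
    rng = np.random.default_rng(rng)
    counts = np.zeros(K)
    means = np.zeros(K)
    for k in range(K):                 # initialize: pull each arm once
        means[k] = reward_fn(k, rng)
        counts[k] = 1
    for t in range(K, T):
        ucb = means + np.sqrt(2 * np.log(t + 1) / counts)
        k = int(np.argmax(ucb))
        r = reward_fn(k, rng)
        counts[k] += 1
        means[k] += (r - means[k]) / counts[k]   # running mean update
    return counts, means
```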
Haar systems, KMS states on von Neumann algebras and $C^*$-algebras on dynamically defined groupoids and Noncommutative Integration | We analyse certain Haar systems associated to groupoids obtained from
natural equivalence relations of a dynamical nature on sets such as
$\{1,2,...,d\}^\mathbb{Z}$, $\{1,2,...,d\}^\mathbb{N}$, $S^1\times S^1$, or
$(S^1)^\mathbb{N}$, where $S^1$ is the unit circle. We also describe
properties of transverse functions, quasi-invariant probabilities and KMS
states for some examples of von Neumann algebras (and also $C^*$-Algebras)
associated to these groupoids. We relate some of these KMS states with Gibbs
states of Thermodynamic Formalism. While presenting new results, we will also
describe in detail several examples and basic results on the above topics. In
other words, it is also a survey paper. Some known results on non-commutative
integration are presented; more precisely, the relation between transverse
measures, cocycles and quasi-invariant probabilities.
We describe the results in a language which is more familiar to people in
Dynamical Systems. Our intention is to study Haar systems, quasi-invariant
probabilities and von Neumann algebras as a topic on measure theory
(intersected with ergodic theory) avoiding questions of algebraic nature
(which, of course, are also extremely important).
| 0 | 0 | 1 | 0 | 0 | 0 |
Restoring a smooth function from its noisy integrals | Numerical (and experimental) data analysis often requires the restoration of
a smooth function from a set of sampled integrals over finite bins. We present
the bin hierarchy method that efficiently computes the maximally smooth
function from the sampled integrals using essentially all the information
contained in the data. We perform extensive tests with different classes of
functions and levels of data quality, including Monte Carlo data suffering from
a severe sign problem and physical data for the Green's function of the
Fröhlich polaron.
| 0 | 1 | 0 | 1 | 0 | 0 |
Adversarial Active Learning for Deep Networks: a Margin Based Approach | We propose a new active learning strategy designed for deep neural networks.
The goal is to minimize the number of annotations queried from an oracle
during training. Previous active learning strategies scalable for deep networks
were mostly based on uncertain sample selection. In this work, we focus on
examples lying close to the decision boundary. Based on theoretical works on
margin theory for active learning, we know that such examples may help to
considerably decrease the number of annotations. While measuring the exact
distance to the decision boundaries is intractable, we propose to rely on
adversarial examples. Rather than treating them as a threat, we exploit the
information they provide on the distribution of the input space in
order to approximate the distance to decision boundaries. We demonstrate
empirically that adversarial active queries yield faster convergence of CNNs
trained on MNIST, the Shoe-Bag and the Quick-Draw datasets.
| 0 | 0 | 0 | 1 | 0 | 0 |
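For a linear classifier the distance to the decision boundary has a closed form, which makes the margin-based selection rule easy to illustrate (for deep networks the paper approximates this distance via adversarial perturbations; the sketch below covers only the linear case, where the two coincide):

```python
import numpy as np

def margin_distance(X, w, b):
    """Distance of each row of X to the decision boundary of the linear
    classifier sign(w.x + b): |w.x + b| / ||w||. This equals the norm of the
    minimal adversarial perturbation for a linear model."""
    return np.abs(X @ w + b) / np.linalg.norm(w)

def select_queries(X_pool, w, b, n_queries):
    """Active learning step: pick the pool examples closest to the boundary
    to send to the oracle for labeling."""
    d = margin_distance(X_pool, w, b)
    return np.argsort(d)[:n_queries]
```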
Improving SIEM capabilities through an enhanced probe for encrypted Skype traffic detection | Nowadays, Security Information and Event Management (SIEM) systems take
on great relevance in handling security issues for critical infrastructures such as
Internet Service Providers. Basically, a SIEM has two main functions: i) the
collection and the aggregation of log data and security information from
disparate network devices (routers, firewalls, intrusion detection systems, ad
hoc probes and others) and ii) the analysis of the gathered data by
implementing a set of correlation rules aimed at detecting potentially
suspicious events such as the presence of encrypted real-time traffic. In the
present work, the
authors propose an enhanced implementation of a SIEM where a particular focus
is given to the detection of encrypted Skype traffic by using an ad-hoc
developed enhanced probe (ESkyPRO) conveniently governed by the SIEM itself.
This enhanced probe, which interacts with an agent counterpart deployed in
the SIEM platform, is designed using machine learning concepts.
The main purpose of the proposed ad-hoc SIEM is to correlate the information
received by ESkyPRO and other types of data obtained by an Intrusion Detection
System (IDS) probe in order to make the encrypted Skype traffic detection as
accurate as possible.
| 1 | 0 | 0 | 0 | 0 | 0 |
Escaping Saddle Points with Adaptive Gradient Methods | Adaptive methods such as Adam and RMSProp are widely used in deep learning
but are not well understood. In this paper, we seek a crisp, clean and precise
characterization of their behavior in nonconvex settings. To this end, we first
provide a novel view of adaptive methods as preconditioned SGD, where the
preconditioner is estimated in an online manner. By studying the preconditioner
on its own, we elucidate its purpose: it rescales the stochastic gradient noise
to be isotropic near stationary points, which helps escape saddle points.
Furthermore, we show that adaptive methods can efficiently estimate the
aforementioned preconditioner. By gluing together these two components, we
provide the first (to our knowledge) second-order convergence result for any
adaptive method. The key insight from our analysis is that, compared to SGD,
adaptive methods escape saddle points faster, and can converge faster overall
to second-order stationary points.
| 1 | 0 | 0 | 1 | 0 | 0 |
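The preconditioned-SGD view can be made concrete with RMSProp, whose running second-moment estimate acts as an online diagonal preconditioner (the hyperparameter values below are conventional defaults, not the paper's):

```python
import numpy as np

def rmsprop_step(theta, grad, v, lr=1e-3, beta=0.9, eps=1e-8):
    """One RMSProp update written as preconditioned SGD: the second-moment
    estimate v defines a diagonal preconditioner diag(v)^(-1/2) applied to
    the stochastic gradient; per the paper's analysis, this rescales the
    gradient noise toward isotropy near stationary points."""
    v = beta * v + (1 - beta) * grad ** 2           # online preconditioner estimate
    theta = theta - lr * grad / (np.sqrt(v) + eps)  # preconditioned gradient step
    return theta, v
```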
The Effect of Temperature on Cu-K-In-Se Thin Films | Films of Cu-K-In-Se were co-evaporated at varied K/(K+Cu) compositions and
substrate temperatures (with constant (K+Cu)/In ~ 0.85). Increased Na
composition on the substrate's surface and decreased growth temperature were
both found to favor Cu1-xKxInSe2 (CKIS) alloy formation, relative to
mixed-phase CuInSe2 + KInSe2 formation. Structures from X-ray diffraction
(XRD), band gaps, resistivities, minority carrier lifetimes and carrier
concentrations from time-resolved photoluminescence were in agreement with
previous reports, where low K/(K+Cu) composition films exhibited properties
promising for photovoltaic (PV) absorbers. Films grown at 400-500 °C were then
annealed to 600 °C under Se, which caused K loss by evaporation in proportion to
initial K/(K+Cu) composition. Similar to growth temperature, annealing drove
CKIS alloy consumption and CuInSe2 + KInSe2 production, as evidenced by high
temperature XRD. Annealing also decomposed KInSe2 and formed K2In12Se19. At
high temperature, the KInSe2 crystal lattice gradually contracted as
temperature and time increased, and also with time alone. Evaporative loss of K during
annealing could accompany the generation of vacancies on K lattice sites, and
may explain the KInSe2 lattice contraction. This knowledge of Cu-K-In-Se
material chemistry may be used to predict and control minor phase impurities in
Cu(In,Ga)(Se,S)2 PV absorbers, where impurities below typical detection limits
may have played a role in recent world record PV efficiencies that utilized KF
post-deposition treatments.
| 0 | 1 | 0 | 0 | 0 | 0 |
Simulations of the Solar System's Early Dynamical Evolution with a Self-Gravitating Planetesimal Disk | Over the course of the last decade, the Nice model has dramatically changed our
view of the solar system's formation and early evolution. Within the context of
this model, a transient period of planet-planet scattering is triggered by
gravitational interactions between the giant planets and a massive primordial
planetesimal disk, leading to a successful reproduction of the solar system's
present-day architecture. In typical realizations of the Nice model,
self-gravity of the planetesimal disk is routinely neglected, as it poses a
computational bottleneck to the calculations. Recent analyses have shown,
however, that a self-gravitating disk can exhibit behavior that is dynamically
distinct, and this disparity may have significant implications for the solar
system's evolutionary path. In this work, we explore this discrepancy utilizing
a large suite of Nice model simulations with and without a self-gravitating
planetesimal disk, taking advantage of the inherently parallel nature of
graphics processing units. Our simulations demonstrate that self-consistent
modeling of particle interactions does not lead to significantly different
final planetary orbits from those obtained within conventional simulations.
Moreover, self-gravitating calculations show similar planetesimal evolution to
non-self-gravitating numerical experiments after dynamical instability is
triggered, suggesting that the orbital clustering observed in the distant
Kuiper belt is unlikely to have a self-gravitational origin.
| 0 | 1 | 0 | 0 | 0 | 0 |
On weak Fraïssé limits | Using the natural action of $S_\infty$ we show that a countable hereditary
class $\mathcal{C}$ of finitely generated structures has the joint embedding property
(JEP) and the weak amalgamation property (WAP) if and only if there is a
structure $M$ whose isomorphism type is comeager in the space of all countable,
infinitely generated structures with age in $\mathcal{C}$. In this case, $M$ is the
weak Fraïssé limit of $\mathcal{C}$.
This applies in particular to countable structures with generic automorphisms
and recovers a result by Kechris and Rosendal [\textit{Proc.\ Lond.\ Math.\
Soc.,\ 2007}].
| 0 | 0 | 1 | 0 | 0 | 0 |
Promoting Saving for College Through Data Science | The cost of attending college has been steadily rising and in 10 years is
estimated to reach $140,000 for a 4-year public university. Recent surveys
estimate just over half of US families are saving for college. State-operated
529 college savings plans are an effective way for families to plan and save
for future college costs, but only 3% of families currently use them. The
Office of the Illinois State Treasurer (Treasurer) administers two 529 plans to
help its residents save for college. In order to increase the number of
families saving for college, the Treasurer and Civis Analytics used data
science techniques to identify the people most likely to sign up for a college
savings plan. In this paper, we will discuss the use of person matching to join
accountholder data from the Treasurer to the Civis National File, as well as
the use of lookalike modeling to identify new potential signups. In order to
avoid reinforcing existing demographic imbalances in who saves for college, the
lookalike models were constrained to be racially and economically balanced. We
will also discuss how these new signup targets were then individually served
digital ads to encourage opening college savings accounts.
| 1 | 0 | 0 | 0 | 0 | 0 |
Rotation Blurring: Use of Artificial Blurring to Reduce Cybersickness in Virtual Reality First Person Shooters | Users of Virtual Reality (VR) systems often experience vection, the
perception of self-motion in the absence of any physical movement. While
vection helps to improve presence in VR, it often leads to a form of motion
sickness called cybersickness. Cybersickness is a major deterrent to large
scale adoption of VR.
Prior work has discovered that changing vection (changing the perceived speed
or moving direction) causes more severe cybersickness than steady vection
(walking at a constant speed or in a constant direction). Based on this idea,
we try to reduce the cybersickness caused by character movements in a First
Person Shooter (FPS) game in VR. We propose Rotation Blurring (RB), uniformly
blurring the screen during rotational movements to reduce cybersickness. We
performed a user study to evaluate the impact of RB in reducing cybersickness.
We found that the blurring technique led to an overall reduction in sickness
levels of the participants and delayed its onset. Participants who experienced
acute levels of cybersickness benefited significantly from this technique.
| 1 | 0 | 0 | 0 | 0 | 0 |
Comparative Benchmarking of Causal Discovery Techniques | In this paper we present a comprehensive view of prominent causal discovery
algorithms, grouped into two main categories: (1) assuming acyclicity and no
latent variables, and (2) allowing both cycles and latent variables, along with
experimental results comparing them from three perspectives: (a) structural
accuracy, (b) standard predictive accuracy, and (c) accuracy of counterfactual
inference. For (b) and (c) we train causal Bayesian networks with structures as
predicted by each causal discovery technique to carry out counterfactual or
standard predictive inference. We compare causal algorithms on two publicly
available and one simulated dataset, with different sample sizes: small,
medium and large. Experiments show that the structural accuracy of a technique
does not necessarily correlate with higher accuracy on inference tasks.
Further, the surveyed structure learning algorithms do not perform well in
terms of structural accuracy on datasets with a large number of variables.
| 1 | 0 | 0 | 1 | 0 | 0 |
Sensitivity of Love and quasi-Rayleigh waves to model parameters | We examine the sensitivity of the Love and the quasi-Rayleigh waves to model
parameters. Both waves are guided waves that propagate in the same model of an
elastic layer above an elastic halfspace. We study their dispersion curves
without any simplifying assumptions, beyond the standard approach of elasticity
theory in isotropic media. We examine the sensitivity of both waves to
elasticity parameters, frequency and layer thickness, for varying frequency and
different modes. In the case of Love waves, we derive and plot the absolute
value of a dimensionless sensitivity coefficient in terms of partial
derivatives, and perform an analysis to find the optimum frequency for
determining the layer thickness. For coherence of the background information,
we briefly review the Love-wave dispersion relation and provide details of the
less common derivation of the quasi-Rayleigh relation in an appendix. We
compare that derivation to past results in the literature, finding certain
discrepancies among them.
| 0 | 1 | 0 | 0 | 0 | 0 |
The image size of iterated rational maps over finite fields | Let $\varphi:\mathbb{F}_q\to\mathbb{F}_q$ be a rational map on a fixed finite
field. We give explicit asymptotic formulas for the size of image sets
$\varphi^n(\mathbb{F}_q)$ as a function of $n$. This is done by using
properties of the Galois groups of iterated maps, whose connection to the
question of the size of image sets is established via Chebotarev's Density
Theorem. We then apply these results to provide explicit bounds on the
proportion of periodic points in $\mathbb{F}_q$ in terms of $q$ for certain
rational maps.
| 0 | 0 | 1 | 0 | 0 | 0 |
Noncommutative modular symbols and Eisenstein series | We form real-analytic Eisenstein series twisted by Manin's noncommutative
modular symbols. After developing their basic properties, these series are
shown to have meromorphic continuations to the entire complex plane and satisfy
functional equations in some cases. This theory neatly contains and generalizes
earlier work in the literature on the properties of Eisenstein series twisted
by classical modular symbols.
| 0 | 0 | 1 | 0 | 0 | 0 |
Some rigidity characterizations on critical metrics for quadratic curvature functionals | We study closed $n$-dimensional manifolds whose metrics are critical
for quadratic curvature functionals involving the Ricci curvature, the scalar
curvature and the Riemannian curvature tensor on the space of Riemannian
metrics with unit volume. Under some additional integral conditions, we
classify such manifolds. Moreover, under some curvature conditions, we prove
that a critical metric must be Einstein.
| 0 | 0 | 1 | 0 | 0 | 0 |
Profinite completions of Burnside-type quotients of surface groups | Using quantum representations of mapping class groups we prove that profinite
completions of Burnside-type surface group quotients are not virtually
prosolvable, in general. Further, we construct infinitely many finite simple
characteristic quotients of surface groups.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning Infinite RBMs with Frank-Wolfe | In this work, we propose an infinite restricted Boltzmann machine (RBM),
whose maximum likelihood estimation (MLE) corresponds to a constrained convex
optimization. We consider the Frank-Wolfe algorithm to solve the program, which
provides a sparse solution that can be interpreted as inserting a hidden unit
at each iteration, so that the optimization process takes the form of a
sequence of finite models of increasing complexity. As a side benefit, this can
be used to easily and efficiently identify an appropriate number of hidden
units during the optimization. The resulting model can also be used as an
initialization for typical state-of-the-art RBM training algorithms such as
contrastive divergence, leading to models with consistently higher test
likelihood than random initialization.
| 1 | 0 | 0 | 1 | 0 | 0 |
New low-mass eclipsing binary systems in Praesepe discovered by K2 | We present the discovery of four low-mass ($M<0.6$ $M_\odot$) eclipsing
binary (EB) systems in the sub-Gyr old Praesepe open cluster using Kepler/K2
time-series photometry and Keck/HIRES spectroscopy. We present a new Gaussian
process eclipsing binary model, GP-EBOP, as well as a method of simultaneously
determining effective temperatures and distances for EBs. Three of the reported
systems (AD 3814, AD 2615 and AD 1508) are detached and double-lined, and
precise solutions are presented for the first two. We determine masses and
radii to 1-3% precision for AD 3814 and to 5-6% for AD 2615. Together with
effective temperatures determined to $\sim$50 K precision, we test the PARSEC
v1.2 and BHAC15 stellar evolution models. Our EB parameters are more consistent
with the PARSEC models, primarily because the BHAC15 temperature scale is
hotter than our data over the mid M-dwarf mass range probed. Both ADs 3814 and
2615, which have orbital periods of 6.0 and 11.6 days, are circularized but not
synchronized. This suggests that either synchronization proceeds more slowly in
fully convective stars than the theory of equilibrium tides predicts or
magnetic braking is currently playing a more important role than tidal forces
in the spin evolution of these binaries. The fourth system (AD 3116) comprises
a brown dwarf transiting a mid M-dwarf, which is the first such system
discovered in a sub-Gyr open cluster. Finally, these new discoveries increase
the number of characterized EBs in sub-Gyr open clusters by 20% (40%) below
$M<1.5$ $M_{\odot}$ ($M<0.6$ $M_{\odot}$).
| 0 | 1 | 0 | 0 | 0 | 0 |
The impact of the halide cage on the electronic properties of fully inorganic caesium lead halide perovskites | Perovskite solar cells with record power conversion efficiency are fabricated
by alloying both hybrid and fully inorganic compounds. While the basic
electronic properties of the hybrid perovskites are now well understood, key
electronic parameters for solar cell performance, such as the exciton binding
energy of fully inorganic perovskites, are still unknown. By performing magneto
transmission measurements, we determine with high accuracy the exciton binding
energy and reduced mass of fully inorganic CsPbX$_3$ perovskites (X=I, Br, and
an alloy of these). The well behaved (continuous) evolution of the band gap
with temperature in the range $4-270$\,K suggests that fully inorganic
perovskites do not undergo structural phase transitions like their hybrid
counterparts. The experimentally determined dielectric constants indicate that
at low temperature, when the motion of the organic cation is frozen, the
dielectric screening mechanism is essentially the same both for hybrid and
inorganic perovskites, and is dominated by the relative motion of atoms within
the lead-halide cage.
| 0 | 1 | 0 | 0 | 0 | 0 |
Discovery of Giant Radio Galaxies from NVSS: Radio & Infrared Properties | Giant radio galaxies (GRGs) are among the largest astrophysical sources in
the Universe, with an overall projected linear size of ~0.7 Mpc or more. The
last six decades of radio astronomy research have led to the detection of thousands
of radio galaxies. But only ~ 300 of them can be classified as GRGs. The
reasons behind their large size and rarity are unknown. We carried out a
systematic search for these radio giants and found a large sample of GRGs. In
this paper, we report the discovery of 25 GRGs from NVSS, in the redshift range
(z) ~ 0.07 to 0.67. Their physical sizes range from ~0.8 Mpc to ~4 Mpc. Eight
of these GRGs have sizes greater than 2 Mpc, which is a rarity. In this paper,
for the first time, we investigate the mid-IR properties of the optical hosts
of the GRGs and classify them securely into various AGN types using the WISE
mid-IR colours. Using radio and IR data, four of the hosts of GRGs were
observed to be radio loud quasars that extend up to 2 Mpc in radio size. These
GRGs missed detection in earlier searches possibly because of their highly
diffuse nature, low surface brightness and lack of optical data. The new GRGs
are a significant addition to the existing sample that will contribute to a
better understanding of the physical properties of radio giants.
| 0 | 1 | 0 | 0 | 0 | 0 |
SGD: General Analysis and Improved Rates | We propose a general yet simple theorem describing the convergence of SGD
under the arbitrary sampling paradigm. Our theorem describes the convergence of
an infinite array of variants of SGD, each of which is associated with a
specific probability law governing the data selection rule used to form
mini-batches. This is the first time such an analysis is performed, and most of
our variants of SGD were never explicitly considered in the literature before.
Our analysis relies on the recently introduced notion of expected smoothness
and does not rely on a uniform bound on the variance of the stochastic
gradients. By specializing our theorem to different mini-batching strategies,
such as sampling with replacement and independent sampling, we derive exact
expressions for the stepsize as a function of the mini-batch size. With this we
can also determine the mini-batch size that optimizes the total complexity, and
show explicitly that as the variance of the stochastic gradient evaluated at
the minimum grows, so does the optimal mini-batch size. For zero variance, the
optimal mini-batch size is one. Moreover, we prove insightful
stepsize-switching rules which describe when one should switch from a constant
to a decreasing stepsize regime.
| 1 | 0 | 0 | 1 | 0 | 0 |
Nonconvex One-bit Single-label Multi-label Learning | We study an extreme scenario in multi-label learning where each training
instance is endowed with a single one-bit label out of multiple labels. We
formulate this problem as a non-trivial special case of one-bit rank-one matrix
sensing and develop an efficient non-convex algorithm based on alternating
power iteration. The proposed algorithm is able to recover the underlying
low-rank matrix model with linear convergence. For a rank-$k$ model with $d_1$
features and $d_2$ classes, the proposed algorithm achieves $O(\epsilon)$
recovery error after retrieving $O(k^{1.5}d_1 d_2/\epsilon)$ one-bit labels
within $O(kd)$ memory. Our bound is nearly optimal in the order of
$O(1/\epsilon)$. This significantly improves the state-of-the-art sampling
complexity of one-bit multi-label learning. We perform experiments to verify
our theory and evaluate the performance of the proposed algorithm.
| 1 | 0 | 0 | 1 | 0 | 0 |
Two Categories of Indoor Interactive Dynamics of a Large-scale Human Population in a WiFi covered university campus | To explore large-scale population indoor interactions, we analyze 18,715
users' WiFi access logs recorded in a Chinese university campus during 3
months, and define two categories of human interactions, the event interaction
(EI) and the temporal interaction (TI). The EI helps construct a transmission
graph, and the TI helps build an interval graph. The dynamics of EIs show that
their active durations are truncated power-law distributed, which is
independent of the number of involved individuals. The transmission duration
presents a truncated power-law behavior at the daily timescale with weekly
periodicity. Besides, those `leaf' individuals in the aggregated contact
network may participate in the `super-connecting cliques' in the aggregated
transmission graph. Analyzing the dynamics of the interval graph, we find that
the probability distribution of TIs' inter-event duration also displays a
truncated power-law pattern at the daily timescale with weekly periodicity,
while the pairwise individuals with burst interactions are prone to randomly
select their interactive locations, and those individuals with periodic
interactions have preferred interactive locations.
| 1 | 1 | 0 | 0 | 0 | 0 |
Robust estimators for generalized linear models with a dispersion parameter | Highly robust and efficient estimators for the generalized linear model with
a dispersion parameter are proposed. The estimators are based on three steps.
In the first step the maximum rank correlation estimator is used to
consistently estimate the slopes up to a scale factor. In the second step, the
scale factor, the intercept, and the dispersion parameter are consistently
estimated using an MT-estimator of a simple regression model. The combined
estimator is highly robust but inefficient. Then, randomized quantile residuals
based on the initial estimators are used to detect outliers to be rejected and
to define a set S of observations to be retained. Finally, a conditional
maximum likelihood (CML) estimator given the observations in S is computed. We
show that, under the model, S tends to the complete sample for increasing
sample size. Therefore, the CML tends to the unconditional maximum likelihood
estimator. It is therefore highly efficient, while maintaining the high degree
of robustness of the initial estimator. The case of the negative binomial
regression model is studied in detail.
| 0 | 0 | 0 | 1 | 0 | 0 |
Abstracting Event-Driven Systems with Lifestate Rules | We present lifestate rules--an approach for abstracting event-driven object
protocols. Developing applications against event-driven software frameworks is
notoriously difficult. One reason is that, to create functioning
applications, developers must know about and understand the complex protocols
that abstract the internal behavior of the framework. Such protocols intertwine
the proper registering of callbacks to receive control from the framework with
appropriate application programming interface (API) calls to delegate back to
it. Lifestate rules unify lifecycle and typestate constraints in one common
specification language. Our primary contribution is a model of event-driven
systems from which lifestate rules can be derived. We then apply specification
mining techniques to learn lifestate specifications for Android framework
types. In the end, our implementation is able to find several rules that
characterize actual behavior of the Android framework.
| 1 | 0 | 0 | 0 | 0 | 0 |
Boltzmann Encoded Adversarial Machines | Restricted Boltzmann Machines (RBMs) are a class of generative neural network
that are typically trained to maximize a log-likelihood objective function. We
argue that likelihood-based training strategies may fail because the objective
does not sufficiently penalize models that place a high probability in regions
where the training data distribution has low probability. To overcome this
problem, we introduce Boltzmann Encoded Adversarial Machines (BEAMs). A BEAM is
an RBM trained against an adversary that uses the hidden layer activations of
the RBM to discriminate between the training data and the probability
distribution generated by the model. We present experiments demonstrating that
BEAMs outperform RBMs and GANs on multiple benchmarks.
| 0 | 0 | 0 | 1 | 0 | 0 |
A Non-Gaussian, Nonparametric Structure for Gene-Gene and Gene-Environment Interactions in Case-Control Studies Based on Hierarchies of Dirichlet Processes | It is becoming increasingly clear that complex interactions among genes and
environmental factors play crucial roles in triggering complex diseases. Thus,
understanding such interactions is vital, which is possible only through
statistical models that adequately account for such intricate, albeit unknown,
dependence structures. Bhattacharya & Bhattacharya (2016b) attempt such
modeling, relating finite mixtures composed of Dirichlet processes that
represent an unknown number of genetic sub-populations through a hierarchical
matrix-normal structure that incorporates gene-gene interactions, and possible
mutations, induced by environmental variables. However, the product dependence
structure implied by their matrix-normal model seems to be too simple to be
appropriate for general complex, realistic situations. In this article, we
propose and develop a novel nonparametric Bayesian model for case-control
genotype data using hierarchies of Dirichlet processes that offers a more
realistic and nonparametric dependence structure between the genes, induced by
the environmental variables. In this regard, we propose a novel and highly
parallelisable MCMC algorithm that is rendered quite efficient by the
combination of modern parallel computing technology, effective Gibbs sampling
steps, retrospective sampling and Transformation based Markov Chain Monte Carlo
(TMCMC). We use appropriate Bayesian hypothesis testing procedures to detect
the roles of genes and environment in case-control studies. We apply our ideas
to 5 biologically realistic case-control genotype datasets simulated under
distinct set-ups, and obtain encouraging results in each case. We finally apply
our ideas to a real, myocardial infarction dataset, and obtain interesting
results on gene-gene and gene-environment interaction, while broadly agreeing
with the results reported in the literature.
| 0 | 0 | 0 | 1 | 0 | 0 |
An Efficient Bayesian Robust Principal Component Regression | Principal component regression is a linear regression model with principal
components as regressors. This type of modelling is particularly useful for
prediction in settings with high-dimensional covariates. Surprisingly, the
existing literature on Bayesian approaches is relatively sparse. In
this paper, we aim at filling some gaps through the following practical
contribution: we introduce a Bayesian approach with detailed guidelines for a
straightforward implementation. The approach features two characteristics that
we believe are important. First, it effectively involves the relevant principal
components in the prediction process. This is achieved in two steps. The first
one is model selection; the second one is to average out the predictions
obtained from the selected models according to model averaging mechanisms,
allowing us to account for model uncertainty. The model posterior probabilities
are required for model selection and model averaging. For this purpose, we
include a procedure leading to an efficient reversible jump algorithm. The
second characteristic of our approach is whole robustness, meaning that the
impact of outliers on inference gradually vanishes as they approach plus or
minus infinity. The conclusions obtained are consequently consistent with the
majority of observations (the bulk of the data).
| 0 | 0 | 0 | 1 | 0 | 0 |
Bayesian estimation from few samples: community detection and related problems | We propose an efficient meta-algorithm for Bayesian estimation problems that
is based on low-degree polynomials, semidefinite programming, and tensor
decomposition. The algorithm is inspired by recent lower bound constructions
for sum-of-squares and related to the method of moments. Our focus is on sample
complexity bounds that are as tight as possible (up to additive lower-order
terms) and often achieve statistical thresholds or conjectured computational
thresholds.
Our algorithm recovers the best known bounds for community detection in the
sparse stochastic block model, a widely-studied class of estimation problems
for community detection in graphs. We obtain the first recovery guarantees for
the mixed-membership stochastic block model (Airoldi et al.) in constant
average degree graphs---up to what we conjecture to be the computational
threshold for this model. We show that our algorithm exhibits a sharp
computational threshold for the stochastic block model with multiple
communities beyond the Kesten--Stigum bound---giving evidence that this task
may require exponential time.
The basic strategy of our algorithm is strikingly simple: we compute the
best-possible low-degree approximation for the moments of the posterior
distribution of the parameters and use a robust tensor decomposition algorithm
to recover the parameters from these approximate posterior moments.
| 1 | 0 | 0 | 1 | 0 | 0 |
X-ray Transform and Boundary Rigidity for Asymptotically Hyperbolic Manifolds | We consider the boundary rigidity problem for asymptotically hyperbolic
manifolds. We show injectivity of the X-ray transform in several cases and
consider the non-linear inverse problem which consists of recovering a metric
from boundary measurements for the geodesic flow.
| 0 | 0 | 1 | 0 | 0 | 0 |
Pinning of longitudinal phonons in holographic spontaneous helices | We consider the spontaneous breaking of translational symmetry and identify
the associated Goldstone mode -- a longitudinal phonon -- in a holographic
model with Bianchi VII helical symmetry. For the first time in holography, we
observe the pinning of this mode after introducing a source for explicit
breaking compatible with the helical symmetry of our setup. We study the
dispersion relation of the resulting pseudo-Goldstone mode, uncovering how its
speed and mass gap depend on the amplitude of the source and temperature. In
addition, we extract the optical conductivity as a function of frequency, which
reveals a metal-insulator transition as a consequence of the pinning.
| 0 | 1 | 0 | 0 | 0 | 0 |
An Extension of the Method of Brackets. Part 1 | The method of brackets is an efficient method for the evaluation of a large
class of definite integrals on the half-line. It is based on a small collection
of rules, some of which are heuristic. The extension discussed here is based on
the concepts of null and divergent series. These are formal representations of
functions, whose coefficients $a_{n}$ have meromorphic representations for $n
\in \mathbb{C}$, but might vanish or blow up when $n \in \mathbb{N}$. These
ideas are illustrated with the evaluation of a variety of entries from the
classical table of integrals by Gradshteyn and Ryzhik.
| 0 | 0 | 1 | 0 | 0 | 0 |
optimParallel: an R Package Providing Parallel Versions of the Gradient-Based Optimization Methods of optim() | The R package optimParallel provides a parallel version of the gradient-based
optimization methods of optim(). The main function of the package is
optimParallel(), which has the same usage and output as optim(). Using
optimParallel() can significantly reduce optimization times. We introduce the R
package and illustrate its implementation, which takes advantage of the lexical
scoping mechanism of R.
| 0 | 0 | 0 | 1 | 0 | 0 |
The barocaloric effect: A Spin-off of the Discovery of High-Temperature Superconductivity | Some key results obtained in joint research projects with Alex Müller are
summarized, concentrating on the invention of the barocaloric effect and its
application for cooling as well as on important findings in the field of
high-temperature superconductivity resulting from neutron scattering
experiments.
| 0 | 1 | 0 | 0 | 0 | 0 |
Spontaneous currents in superconducting systems with strong spin-orbit coupling | We show that Rashba spin-orbit coupling at the interface between a
superconductor and a ferromagnet should produce a spontaneous current in the
atomic thickness region near the interface. This current is counter-balanced by
the superconducting screening current flowing in the region of the width of the
London penetration depth near the interface. Such a current-carrying state
creates a magnetic field near the superconductor surface, generates a stray
magnetic field outside the sample edges, changes the slope of the temperature
dependence of the critical field $H_{c3}$ and may generate spontaneous
Abrikosov vortices near the interface.
| 0 | 1 | 0 | 0 | 0 | 0 |
Catching Loosely Synchronized Behavior in Face of Camouflage | Fraud has severely detrimental impacts on the business of social networks and
other online applications. A user can become a fake celebrity by purchasing
"zombie followers" on Twitter. A merchant can boost his reputation through fake
reviews on Amazon. This phenomenon also conspicuously exists on Facebook, Yelp
and TripAdvisor, etc. In all the cases, fraudsters try to manipulate the
platform's ranking mechanism by faking interactions between the fake accounts
they control and the target customers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Three-dimensional localized-delocalized Anderson transition in the time domain | Systems which can spontaneously reveal periodic evolution are dubbed time
crystals. This is in analogy with space crystals that display periodic behavior
in configuration space. While space crystals are modelled with the help of
space periodic potentials, crystalline phenomena in time can be modelled by
periodically driven systems. Disorder in the periodic driving can lead to
Anderson localization in time: the probability for detecting a system at a
fixed point of configuration space becomes exponentially localized around a
certain moment in time. We here show that a three-dimensional system exposed to
a properly disordered pseudo-periodic driving may display a
localized-delocalized Anderson transition in the time domain, in strong analogy
with the usual three-dimensional Anderson transition in disordered systems.
Such a transition could be experimentally observed with ultra-cold atomic
gases.
| 0 | 1 | 0 | 0 | 0 | 0 |
Socio-economic constraints to maximum human lifespan | The analysis of the demographic transition of the past century and a half,
using both empirical data and mathematical models, has rendered a wealth of
well-established facts, including the dramatic increases in life expectancy.
Despite these insights, such analyses have also occasionally triggered debates
which spill over many disciplines, from genetics, to biology, or demography.
Perhaps the hottest discussion is happening around the question of maximum
human lifespan, which --besides its fascinating historical and philosophical
interest-- poses urgent pragmatic warnings on a number of issues in public and
private decision-making. In this paper, we add to the controversy some results
which, based on purely statistical grounds, suggest that the maximum human
lifespan is not fixed, or has not yet reached a plateau. Quite the contrary,
analyses of reliable data spanning more than 150 years in over 20 industrialized
countries point to a sustained increase in the maximum age at death.
Furthermore, were this trend to continue, a limitless lifespan could be
achieved by 2102. Finally, we quantify the dependence of increases in the
maximum lifespan on socio-economic factors. Our analysis indicates that in some
countries the observed rising patterns can only be sustained by progressively
larger increases in GDP, setting the problem of longevity in a context of
diminishing returns.
| 0 | 0 | 0 | 1 | 1 | 0 |
SD-CPS: Taming the Challenges of Cyber-Physical Systems with a Software-Defined Approach | Cyber-Physical Systems (CPS) revolutionize various application domains with
integration and interoperability of networking, computing systems, and
mechanical devices. Due to its scale and variety, CPS faces a number of
challenges and opens up a few research questions in terms of management,
fault-tolerance, and scalability. We propose a software-defined approach
inspired by Software-Defined Networking (SDN), to address the challenges for a
wider CPS adoption. We thus design a middleware architecture for the correct
and resilient operation of CPS, to manage and coordinate the interacting
devices centrally in the cyberspace whilst not sacrificing the functionality
and performance benefits inherent to a distributed execution.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dual Ore's theorem on distributive intervals of finite groups | This paper gives a self-contained group-theoretic proof of a dual version of
a theorem of Ore on distributive intervals of finite groups. We deduce a bridge
between combinatorics and representations in finite group theory.
| 0 | 0 | 1 | 0 | 0 | 0 |
A construction of trivial Beltrami coefficients | A measurable function $\mu$ on the unit disk $\mathbb{D}$ of the complex
plane with $\|\mu\|_\infty<1$ is sometimes called a Beltrami coefficient. We
say that $\mu$ is trivial if it is the complex dilatation $f_{\bar z}/f_z$ of a
quasiconformal automorphism $f$ of $\mathbb{D}$ satisfying the trivial boundary
condition $f(z)=z,~|z|=1.$ Since it is not easy to solve the Beltrami equation
explicitly, to detect triviality of a given Beltrami coefficient is a hard
problem, in general. In the present article, we offer a sufficient condition
for a Beltrami coefficient to be trivial. Our proof is based on Betker's
theorem on Löwner chains.
| 0 | 0 | 1 | 0 | 0 | 0 |
A GNS construction of three-dimensional abelian Dijkgraaf-Witten theories | We give a detailed account of the so-called "universal construction" that
aims to extend invariants of closed manifolds, possibly with additional
structure, to topological field theories and show that it amounts to a
generalization of the GNS construction. We apply this construction to an
invariant defined in terms of the groupoid cardinality of groupoids of bundles
to recover Dijkgraaf-Witten theories, including the vector spaces obtained as a
linearization of spaces of principal bundles.
| 0 | 0 | 1 | 0 | 0 | 0 |
Long-path formation in a deformed microdisk laser | An asymmetric resonant cavity can be used to form a path that is much longer
than the cavity size. We demonstrate this capability for a deformed microdisk
equipped with two linear waveguides, by constructing a multiply reflected
periodic orbit that is confined by total internal reflection within the
deformed microdisk and outcoupled by the two linear waveguides. Resonant mode
analysis reveals that the modes corresponding to the periodic orbit are
characterized by high quality factors. From measured spectral and far-field
data, we confirm that the fabricated devices can form a path about 9.3 times
longer than the average diameter of the deformed microdisk.
| 0 | 1 | 0 | 0 | 0 | 0 |
Spitzer Observations of Large Amplitude Variables in the LMC and IC 1613 | The 3.6 and 4.5 micron characteristics of AGB variables in the LMC and IC1613
are discussed. For C-rich Mira variables there is a very clear
period-luminosity-colour relation, where the [3.6]-[4.5] colour is associated
with the amount of circumstellar material and correlated with the pulsation
amplitude. The [4.5] period-luminosity relation for dusty stars is
approximately one mag brighter than for their naked counterparts with
comparable periods.
| 0 | 1 | 0 | 0 | 0 | 0 |
Synergistic Team Composition | Effective teams are crucial for organisations, especially in environments
that require teams to be constantly created and dismantled, such as software
development, scientific experiments, crowd-sourcing, or the classroom. Key
factors influencing team performance are competences and personality of team
members. Hence, we present a computational model to compose proficient and
congenial teams based on individuals' personalities and their competences to
perform tasks of different nature. To this end, we extend Wilde's
post-Jungian method for team composition, which solely employs individuals'
personalities. The aim of this study is to create a model to partition agents
into teams that are balanced in competences, personality and gender. Finally,
we present some preliminary empirical results that we obtained when analysing
student performance. Results show the benefits of a more informed team
composition that exploits individuals' competences besides information about
their personalities.
| 1 | 0 | 0 | 0 | 0 | 0 |
The inverse hull of 0-left cancellative semigroups | Given a semigroup S with zero, which is left-cancellative in the sense that
st=sr \neq 0 implies that t=r, we construct an inverse semigroup called the
inverse hull of S, denoted H(S). When S admits least common multiples, in a
precise sense defined below, we study the idempotent semilattice of H(S), with
a focus on its spectrum. When S arises as the language semigroup for a subshift
X on a finite alphabet, we discuss the relationship between H(S) and several
C*-algebras associated to X appearing in the literature.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Hadamard Determinant Inequality - Extensions to Operators on a Hilbert Space | A generalization of classical determinant inequalities like Hadamard's
inequality and Fischer's inequality is studied. For a version of the
inequalities originally proved by Arveson for positive operators in von Neumann
algebras with a tracial state, we give a different proof. We also improve and
generalize to the setting of finite von Neumann algebras, some `Fischer-type'
inequalities by Matic for determinants of perturbed positive-definite matrices.
In the process, a conceptual framework is established for viewing these
inequalities as manifestations of Jensen's inequality in conjunction with the
theory of operator monotone and operator convex functions on $[0,\infty)$. We
place emphasis on documenting necessary and sufficient conditions for equality
to hold.
| 0 | 0 | 1 | 0 | 0 | 0 |
Experimental results : Reinforcement Learning of POMDPs using Spectral Methods | We propose a new reinforcement learning algorithm for partially observable
Markov decision processes (POMDP) based on spectral decomposition methods.
While spectral methods have been previously employed for consistent learning of
(passive) latent variable models such as hidden Markov models, POMDPs are more
challenging since the learner interacts with the environment and possibly
changes the future observations in the process. We devise a learning algorithm
running through epochs; in each epoch, we employ spectral techniques to learn
the POMDP parameters from a trajectory generated by a fixed policy. At the end
of the epoch, an optimization oracle returns the optimal memoryless planning
policy which maximizes the expected reward based on the estimated POMDP model.
We prove an order-optimal regret bound with respect to the optimal memoryless
policy and efficient scaling with respect to the dimensionality of observation
and action spaces.
| 1 | 0 | 0 | 1 | 0 | 0 |
Discreteness of silting objects and t-structures in triangulated categories | We introduce the notion of ST-pairs of triangulated subcategories, a
prototypical example of which is the pair of the bound homotopy category and
the bound derived category of a finite-dimensional algebra. For an ST-pair
$(\C,\D)$, we construct an order-preserving map from silting objects in $\C$ to
bounded $t$-structures on $\D$ and show that the map is bijective if and only
if $\C$ is silting-discrete if and only if $\D$ is $t$-discrete. Based on a
work of Qiu and Woolf, the above result is applied to show that if $\C$ is
silting-discrete then the stability space of $\D$ is contractible. This is used
to obtain the contractibility of the stability spaces of some Calabi--Yau
triangulated categories associated to Dynkin quivers.
| 0 | 0 | 1 | 0 | 0 | 0 |
Sketching the order of events | We introduce features for massive data streams. These stream features can be
thought of as "ordered moments" and generalize stream sketches from "moments of
order one" to "ordered moments of arbitrary order". In analogy to classic
moments, they have theoretical guarantees such as universality that are
important for learning algorithms.
| 1 | 0 | 1 | 1 | 0 | 0 |
Deep Multi-view Learning to Rank | We study the problem of learning to rank from multiple sources. Though
multi-view learning and learning to rank have been studied extensively leading
to a wide range of applications, multi-view learning to rank as a synergy of
both topics has received little attention. The aim of the paper is to propose a
composite ranking method while keeping a close correlation with the individual
rankings simultaneously. We propose a multi-objective solution to ranking by
capturing the information of the feature mapping from both within each view as
well as across views using autoencoder-like networks. Moreover, a novel
end-to-end solution is introduced to enhance the joint ranking with minimum
view-specific ranking loss, so that we can achieve the maximum global view
agreements within a single optimization process. The proposed method is
validated on a wide variety of ranking problems, including university ranking,
multi-view lingual text ranking and image data ranking, providing superior
results.
| 0 | 0 | 0 | 1 | 0 | 0 |
One-Shot Visual Imitation Learning via Meta-Learning | In order for a robot to be a generalist that can perform a wide range of
jobs, it must be able to acquire a wide variety of skills quickly and
efficiently in complex unstructured environments. High-capacity models such as
deep neural networks can enable a robot to represent complex skills, but
learning each skill from scratch then becomes infeasible. In this work, we
present a meta-imitation learning method that enables a robot to learn how to
learn more efficiently, allowing it to acquire new skills from just a single
demonstration. Unlike prior methods for one-shot imitation, our method can
scale to raw pixel inputs and requires data from significantly fewer prior
tasks for effective learning of new skills. Our experiments on both simulated
and real robot platforms demonstrate the ability to learn new tasks,
end-to-end, from a single visual demonstration.
| 1 | 0 | 0 | 0 | 0 | 0 |
Second variation of Selberg zeta functions and curvature asymptotics | We give an explicit formula for the second variation of the logarithm of the
Selberg zeta function, $Z(s)$, on Teichmüller space. We then use this formula
to determine the asymptotic behavior as $s \to \infty$ of the second variation.
As a consequence, we determine the signature of the Hessian of $\log Z(s)$ for
sufficiently large $s$. As a further consequence, the asymptotic behavior of
the second variation of $\log Z(s)$ shows that the Ricci curvature of the Hodge
bundle $H^0(\mathcal K^m_t)\mapsto t$ over Teichmüller space agrees with the
Quillen curvature up to a term of exponential decay, $O(s^2 e^{-l_0 s}),$ where
$l_0$ is the length of the shortest closed hyperbolic geodesic.
| 0 | 0 | 1 | 0 | 0 | 0 |
Magnetic Fields Threading Black Holes: restrictions from general relativity and implications for astrophysical black holes | The idea that black hole spin is instrumental in the generation of powerful
jets in active galactic nuclei and X-ray binaries is arguably the most
contentious claim in black hole astrophysics. Because jets are thought to
originate in the context of electromagnetism, and the modeling of Maxwell
fields in curved spacetime around black holes is challenging, various
approximations are made in numerical simulations that fall under the guise of
'ideal magnetohydrodynamics'. But the simplifications of this framework may
struggle to capture relevant details of real astrophysical environments near
black holes. In this work, we highlight tension between analytic and numerical
results, specifically between the analytically derived conserved Noether
currents for rotating black hole spacetimes and the results of general
relativistic numerical simulations (GRMHD). While we cannot definitively
attribute the issue to any specific approximation used in the numerical
schemes, there seem to be natural candidates, which we explore. GRMHD
notwithstanding, if electromagnetic fields around rotating black holes are
brought to the hole by accretion, we show from first principles that prograde
accreting disks likely experience weaker large-scale black hole-threading
fields, implying weaker jets than in retrograde configurations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Distance covariance for stochastic processes | The distance covariance of two random vectors is a measure of their
dependence. The empirical distance covariance and correlation can be used as
statistical tools for testing whether two random vectors are independent. We
propose analogs of the distance covariance for two stochastic processes
defined on some interval. Their empirical analogs can be used to test the
independence of two processes.
| 0 | 0 | 1 | 1 | 0 | 0 |
On Nonparametric Regression using Data Depth | We investigate nonparametric regression methods based on statistical depth
functions. These nonparametric regression procedures can be used in situations,
where the response is multivariate and the covariate is a random element in a
metric space. This includes regression with functional covariate as a special
case. Our objective is to study different features of the conditional
distribution of the response given the covariate. We construct measures of the
center and the spread of the conditional distribution using depth based
nonparametric regression procedures. We establish the asymptotic consistency of
those measures and develop a test for heteroscedasticity based on the measure
of conditional spread. The usefulness of the methodology is demonstrated in
some real datasets. In one dataset consisting of Italian household expenditure
data for the period 1973 to 1992, we regress the expenditure for different
items on their prices. In another dataset, our responses are the nutritional
contents of different meat samples measured by their protein, fat and moisture
contents, and the functional covariate is the absorbance spectra of the meat
samples.
| 0 | 0 | 0 | 1 | 0 | 0 |
A Backward Simulation Method for Stochastic Optimal Control Problems | A number of optimal decision problems with uncertainty can be formulated into
a stochastic optimal control framework. The Least-Squares Monte Carlo (LSMC)
algorithm is a popular numerical method to approach solutions of such
stochastic control problems as analytical solutions are not tractable in
general. This paper generalizes the LSMC algorithm proposed in Shen and Weng
(2017) to solve a wide class of stochastic optimal control models. Our
algorithm has three pillars: the construction of an auxiliary stochastic control
model, an artificial simulation of the post-action value of the state process, and
a shape-preserving sieve estimation method which equip the algorithm with a
number of merits including bypassing forward simulation and control
randomization, evading extrapolating the value function, and alleviating
computational burden of the tuning parameter selection. The efficacy of the
algorithm is corroborated by an application to pricing equity-linked insurance
products.
| 0 | 0 | 0 | 0 | 0 | 1 |
Damped Posterior Linearization Filter | The iterated posterior linearization filter (IPLF) is an algorithm for
Bayesian state estimation that performs the measurement update using iterative
statistical regression. The main result behind IPLF is that the posterior
approximation is more accurate when the statistical regression of the measurement
function is done in the posterior instead of the prior, as is done in
non-iterative Kalman filter extensions. In IPLF, each iteration in principle
gives a better posterior estimate to obtain a better statistical regression and
more accurate posterior estimate in the next iteration. However, IPLF may
diverge. IPLF's fixed points are not described as solutions to an optimization
problem, which makes it challenging to improve its convergence properties. In
this letter, we introduce a double-loop version of IPLF, where the inner loop
computes the posterior mean using an optimization algorithm. Simulation results
are presented to show that the proposed algorithm has better convergence than
IPLF and its accuracy is similar to or better than other state-of-the-art
algorithms.
| 0 | 0 | 1 | 1 | 0 | 0 |
Status updates through M/G/1/1 queues with HARQ | We consider a system where randomly generated updates are to be transmitted
to a monitor, but only a single update can be in the transmission service at a
time. Therefore, the source has to prioritize between the two possible
transmission policies: preempting the current update or discarding the new one.
We consider Poisson arrivals and general service time, and refer to this system
as the M/G/1/1 queue. We start by studying the average status update age and
the optimal update arrival rate for these two schemes under general service
time distribution. We then apply these results to two practical scenarios in
which updates are sent through an erasure channel using (a) an infinite
incremental redundancy (IIR) HARQ system and (b) a fixed redundancy (FR) HARQ
system. We show that in both schemes the best strategy would be not to preempt.
Moreover, we also prove that, from an age point of view, IIR is better than FR.
| 1 | 0 | 0 | 0 | 0 | 0 |
A mean score method for sensitivity analysis to departures from the missing at random assumption in randomised trials | Most analyses of randomised trials with incomplete outcomes make untestable
assumptions and should therefore be subjected to sensitivity analyses. However,
methods for sensitivity analyses are not widely used. We propose a mean score
approach for exploring global sensitivity to departures from missing at random
or other assumptions about incomplete outcome data in a randomised trial. We
assume a single outcome analysed under a generalised linear model. One or more
sensitivity parameters, specified by the user, measure the degree of departure
from missing at random in a pattern mixture model. Advantages of our method are
that its sensitivity parameters are relatively easy to interpret and so can be
elicited from subject matter experts; it is fast and non-stochastic; and its
point estimate, standard error and confidence interval agree perfectly with
standard methods when particular values of the sensitivity parameters make
those standard methods appropriate. We illustrate the method using data from a
mental health trial.
| 0 | 0 | 1 | 1 | 0 | 0 |
On the union complexity of families of axis-parallel rectangles with a low packing number | Let R be a family of n axis-parallel rectangles with packing number p-1,
meaning that among any p of the rectangles, there are two with a non-empty
intersection. We show that the union complexity of R is at most O(n+p^2), and
that the (<=k)-level complexity of R is at most O(kn+k^2p^2). Both upper bounds
are tight.
| 1 | 0 | 1 | 0 | 0 | 0 |
Using photo-ionisation models to derive carbon and oxygen gas-phase abundances in the rest UV | We present a new method to derive oxygen and carbon abundances using the
ultraviolet (UV) lines emitted by the gas-phase ionised by massive stars. The
method is based on the comparison of the nebular emission-line ratios with
those predicted by a large grid of photo-ionisation models. Given the large
dispersion in the O/H - C/O plane, our method first fixes C/O using ratios of
appropriate emission lines and, in a second step, calculates O/H and the
ionisation parameter from carbon lines in the UV. We find abundances totally
consistent with those provided by the direct method when we apply this method
to a sample of objects with an empirical determination of the electron
temperature using optical emission lines. The proposed methodology appears to be a
powerful tool for systematic studies of nebular abundances in star-forming
galaxies at high redshift.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Probabilistic Framework for Location Inference from Social Media | We study the extent to which we can infer users' geographical locations from
social media. Location inference from social media can benefit many
applications, such as disaster management, targeted advertising, and news
content tailoring. In recent years, a number of algorithms have been proposed
for identifying user locations on social media platforms such as Twitter and
Facebook from message contents, friend networks, and interactions between
users. In this paper, we propose a novel probabilistic model based on factor
graphs for location inference that offers several unique advantages for this
task. First, the model generalizes previous methods by incorporating content,
network, and deep features learned from social context. The model is also
flexible enough to support both supervised learning and semi-supervised
learning. Second, we explore several learning algorithms for the proposed
model, and present a Two-chain Metropolis-Hastings (MH+) algorithm, which
improves the inference accuracy. Third, we validate the proposed model on three
different genres of data - Twitter, Weibo, and Facebook - and demonstrate that
the proposed model can substantially improve the inference accuracy (+3.3-18.5%
by F1-score) over that of several state-of-the-art methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Novel ABALONE Photosensor Technology: 4-Year Long Tests of Vacuum Integrity, Internal Pumping and Afterpulsing | The ABALONE Photosensor Technology (U.S. Patent 9064678 2015) has the
capability of supplanting the expensive, 80-year-old Photomultiplier Tube (PMT)
manufacture by providing a modern and cost effective alternative product. An
ABALONE Photosensor comprises only three monolithic glass components, sealed
together by our new thin film adhesive. In 2013, we left one of the early
ABALONE Photosensor prototypes intact for continuous stress testing, and here
we report on its long-term vacuum integrity. The exceptionally low ion
afterpulsing rate (approximately two orders of magnitude lower than in PMTs)
has been constantly improving. We explain the physical and technological
reasons for this achievement. Due to the cost-effectiveness and the specific
combination of features, including low level of radioactivity, integration into
large-area panels, and robustness, this technology can open new horizons in the
fields of fundamental physics, functional medical imaging, and nuclear
security.
| 0 | 1 | 0 | 0 | 0 | 0 |
Multi-robot Dubins Coverage with Autonomous Surface Vehicles | In large scale coverage operations, such as marine exploration or aerial
monitoring, single robot approaches are not ideal, as they may take too long to
cover a large area. In such scenarios, multi-robot approaches are preferable.
Furthermore, several real world vehicles are non-holonomic, but can be modeled
using Dubins vehicle kinematics. This paper focuses on environmental monitoring
of aquatic environments using Autonomous Surface Vehicles (ASVs). In
particular, we propose a novel approach for solving the problem of complete
coverage of a known environment by a multi-robot team consisting of Dubins
vehicles. It is worth noting that both multi-robot coverage and Dubins vehicle
coverage are NP-complete problems. As such, we present two heuristic methods
based on a variant of the traveling salesman problem -- k-TSP -- formulation
and clustering algorithms that efficiently solve the problem. The proposed
methods are tested both in simulations to assess their scalability and with a
team of ASVs operating on a lake to ensure their applicability in the real world.
| 1 | 0 | 0 | 0 | 0 | 0 |
New Fairness Metrics for Recommendation that Embrace Differences | We study fairness in collaborative-filtering recommender systems, which are
sensitive to discrimination that exists in historical data. Biased data can
lead collaborative filtering methods to make unfair predictions against
minority groups of users. We identify the insufficiency of existing fairness
metrics and propose four new metrics that address different forms of
unfairness. These fairness metrics can be optimized by adding fairness terms to
the learning objective. Experiments on synthetic and real data show that our
new metrics can better measure fairness than the baseline, and that the
fairness objectives effectively help reduce unfairness.
| 1 | 0 | 0 | 0 | 0 | 0 |
Graph heat mixture model learning | Graph inference methods have recently attracted a great interest from the
scientific community, due to the large value they bring in data interpretation
and analysis. However, most of the available state-of-the-art methods focus on
scenarios where all available data can be explained through the same graph, or
groups corresponding to each graph are known a priori. In this paper, we argue
that this is not always realistic and we introduce a generative model for mixed
signals following a heat diffusion process on multiple graphs. We propose an
expectation-maximisation algorithm that can successfully separate signals into
corresponding groups, and infer multiple graphs that govern their behaviour. We
demonstrate the benefits of our method on both synthetic and real data.
| 1 | 0 | 0 | 1 | 0 | 0 |
Technical Report: Reactive Navigation in Partially Known Non-Convex Environments | This paper presents a provably correct method for robot navigation in 2D
environments cluttered with familiar but unexpected non-convex, star-shaped
obstacles as well as completely unknown, convex obstacles. We presuppose a
limited range onboard sensor, capable of recognizing, localizing and
(leveraging ideas from constructive solid geometry) generating online from its
catalogue of the familiar, non-convex shapes an implicit representation of each
one. These representations underlie an online change of coordinates to a
completely convex model planning space wherein a previously developed online
construction yields a provably correct reactive controller that is pulled back
to the physically sensed representation to generate the actual robot commands.
We extend the construction to differential drive robots, and suggest the
empirical utility of the proposed control architecture using both formal proofs
and numerical simulations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Automatic smoothness detection of the resolvent Krylov subspace method for the approximation of $C_0$-semigroups | The resolvent Krylov subspace method builds approximations to operator
functions $f(A)$ times a vector $v$. For the semigroup and related operator
functions, this method is proved to possess the favorable property that the
convergence is automatically faster when the vector $v$ is smoother. The user
of the method does not need to know the presented theory and alterations of the
method are not necessary in order to adapt to the (possibly unknown) smoothness
of $v$. The findings are illustrated by numerical experiments.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the observability of Pauli crystals | The best known manifestation of the Fermi-Dirac statistics is the Pauli
exclusion principle: no two identical fermions can occupy the same one-particle
state. This principle enforces high order correlations in systems of many
identical fermions and is responsible for a particular geometric arrangement of
trapped particles even when all mutual interactions are absent [1]. These
geometric structures, called Pauli crystals, are predicted for a system of $N$
identical atoms trapped in a harmonic potential. They emerge as the most
frequent configurations in a collection of single-shot pictures of the system.
Here we study how fragile Pauli crystals are when realistic experimental
limitations are taken into account. The influence of the number of single-shot
pictures available for analysis, of thermal fluctuations, and of the finite
detection efficiency is considered. The role of these sources of noise in the
possibility of experimentally observing Pauli crystals is shown, and conditions necessary
for the detection of the geometrical arrangements of particles are identified.
| 0 | 1 | 0 | 0 | 0 | 0 |
Interpolating Between Choices for the Approximate Intermediate Value Theorem | This paper proves the approximate intermediate value theorem, constructively
and from notably weak hypotheses: from pointwise rather than uniform
continuity, without assuming that reals are presented with rational
approximants, and without using countable choice. The theorem is that if a
pointwise continuous function has both a negative and a positive value, then it
has values arbitrarily close to 0. The proof builds on the usual classical
proof by bisection, which repeatedly selects the left or right half of an
interval; the algorithm here selects an interval of half the size in a
continuous way, interpolating between those two possibilities.
| 1 | 0 | 1 | 0 | 0 | 0 |
HAZMAT II: Ultraviolet Variability of Low-Mass Stars in the GALEX Archive | The ultraviolet (UV) light from a host star influences a planet's atmospheric
photochemistry and will affect interpretations of exoplanetary spectra from
future missions like the James Webb Space Telescope. These effects will be
particularly critical in the study of planetary atmospheres around M dwarfs,
including Earth-sized planets in the habitable zone. Given the higher activity
levels of M dwarfs compared to Sun-like stars, time resolved UV data are needed
for more accurate input conditions for exoplanet atmospheric modeling. The
Galaxy Evolution Explorer (\emph{GALEX}) provides multi-epoch photometric
observations in two UV bands: near-ultraviolet (NUV; 1771 -- 2831 \AA) and
far-ultraviolet (FUV; 1344 -- 1786 \AA). Within 30 pc of Earth, there are 357
and 303 M dwarfs in the NUV and FUV bands, respectively, with multiple GALEX
observations. Simultaneous NUV and FUV detections exist for 145 stars in
both GALEX bands. Our analyses of these data show that low-mass stars are
typically more variable in the FUV than the NUV. Median variability increases
with later spectral types in the NUV with no clear trend in the FUV. We find
evidence that flares increase the FUV flux density far more than the NUV flux
density, leading to variable FUV to NUV flux density ratios in the GALEX
bandpasses. The ratio of FUV to NUV flux is important for interpreting the
presence of atmospheric molecules in planetary atmospheres such as oxygen and
methane as a high FUV to NUV ratio may cause false-positive biosignature
detections. This ratio of flux density in the GALEX bands spans three orders
of magnitude in our sample, from 0.008 to 4.6, and is 1 to 2 orders of
magnitude higher than for G dwarfs like the Sun. These results characterize the
UV behavior for the largest set of low-mass stars to date.
| 0 | 1 | 0 | 0 | 0 | 0 |
Exactly solvable Schrödinger equation with double-well potential for hydrogen bond | We construct a double-well potential for which the Schrödinger equation can
be exactly solved via reducing to the confluent Heun's one. Thus the wave
function is expressed via the confluent Heun's function. The latter is
tabulated in {\sl {Maple}} so that the obtained solution is easily treated. The
potential is infinite at the boundaries of the finite interval, which makes it
highly suitable for modeling hydrogen bonds (both ordinary and low-barrier
ones). We exemplify the theoretical results with a detailed treatment of the
hydrogen bond in $KHCO_3$ and show good agreement with experimental data from
the literature.
| 0 | 1 | 0 | 0 | 0 | 0 |
The co-evolution of emotional well-being with weak and strong friendship ties | Social ties are strongly related to well-being. But what characterizes this
relationship? This study investigates social mechanisms explaining how social
ties affect well-being through social integration and social influence, and how
well-being affects social ties through social selection. We hypothesize that
highly integrated individuals - those with more extensive and dense friendship
networks - report higher emotional well-being than others. Moreover, emotional
well-being should be influenced by the well-being of close friends. Finally,
well-being should affect friendship selection when individuals prefer others
with higher levels of well-being, and others whose well-being is similar to
theirs. We test our hypotheses using longitudinal social network and well-being
data of 117 individuals living in a graduate housing community. The application
of a novel extension of Stochastic Actor-Oriented Models for ordered networks
(ordered SAOMs) allows us to detail and test our hypotheses for weak- and
strong-tied friendship networks simultaneously. Results do not support our
social integration and social influence hypotheses but provide evidence for
selection: individuals with higher emotional well-being tend to have more
strong-tied friends, and there are homophily processes regarding emotional
well-being in strong-tied networks. Our study highlights the two-directional
relationship between social ties and well-being, and demonstrates the
importance of considering different tie strengths for various social processes.
| 1 | 0 | 0 | 1 | 0 | 0 |
A magnetic version of the Smilansky-Solomyak model | We analyze spectral properties of two mutually related families of magnetic
Schrödinger operators, $H_{\mathrm{Sm}}(A)=(i \nabla +A)^2+\omega^2
y^2+\lambda y \delta(x)$ and $H(A)=(i \nabla +A)^2+\omega^2 y^2+ \lambda y^2
V(x y)$ in $L^2(R^2)$, with the parameters $\omega>0$ and $\lambda<0$, where
$A$ is a vector potential corresponding to a homogeneous magnetic field
perpendicular to the plane and $V$ is a regular nonnegative and compactly
supported potential. We show that the spectral properties of the operators
depend crucially on the one-dimensional Schrödinger operators $L=
-\frac{\mathrm{d}^2}{\mathrm{d}x^2} +\omega^2 +\lambda \delta (x)$ and $L (V)=
- \frac{\mathrm{d}^2}{\mathrm{d}x^2} +\omega^2 +\lambda V(x)$, respectively.
Depending on whether the operators $L$ and $L(V)$ are positive or not, the
spectrum of $H_{\mathrm{Sm}}(A)$ and $H(A)$ exhibits a sharp transition.
| 0 | 0 | 1 | 0 | 0 | 0 |
John-Nirenberg Radius and Collapse in Conformal Geometry | Given a positive function $u\in W^{1,n}$, we define its John-Nirenberg radius
at a point $x$ to be the supremum of the radii $t$ such that $\int_{B_t}|\nabla\log
u|^n<\epsilon_0^n$ when $n>2$, and $\int_{B_t}|\nabla u|^2<\epsilon_0^2$ when
$n=2$. We will show that for a collapsing sequence in a fixed conformal class
under some curvature conditions, the radius is bounded below by a positive
constant. As applications, we will study the convergence of a conformal metric
sequence on a $4$-manifold with bounded $\|K\|_{W^{1,2}}$, and prove a
generalized version of Hélein's convergence theorem.
| 0 | 0 | 1 | 0 | 0 | 0 |
Deep Prior | The recent literature on deep learning offers new tools to learn a rich
probability distribution over high dimensional data such as images or sounds.
In this work we investigate the possibility of learning the prior distribution
over neural network parameters using such tools. Our resulting variational
Bayes algorithm generalizes well to new tasks, even when very few training
examples are provided. Furthermore, this learned prior allows the model to
extrapolate correctly far from a given task's training data on a meta-dataset
of periodic signals.
| 1 | 0 | 0 | 1 | 0 | 0 |
Generalized two-dimensional linear discriminant analysis with regularization | Recent advances show that two-dimensional linear discriminant analysis
(2DLDA) is a successful matrix based dimensionality reduction method. However,
2DLDA may suffer from the singularity issue and is sensitive to
outliers. In this paper, a generalized Lp-norm 2DLDA framework with
regularization for an arbitrary $p>0$, named G2DLDA, is proposed. G2DLDA makes
two main contributions: first, the model uses an arbitrary Lp-norm to measure
the between-class and within-class scatter, so that a proper $p$ can be
selected to achieve robustness. Second, by introducing an extra regularization
term, G2DLDA achieves better generalization performance and solves the
singularity problem. In addition, G2DLDA can be solved through a series of
convex problems with equality constraints, each of which has a closed-form
solution. Its convergence can be guaranteed
theoretically when $1\leq p\leq2$. Preliminary experimental results on three
contaminated human face databases show the effectiveness of the proposed
G2DLDA.
| 0 | 0 | 0 | 1 | 0 | 0 |
Time-Frequency Audio Features for Speech-Music Classification | Distinct striation patterns are observed in the spectrograms of speech and
music. This motivated us to propose three novel time-frequency features for
speech-music classification. These features are extracted in two stages. First,
a preset number of prominent spectral peak locations are identified from the
spectra of each frame. These important peak locations obtained from each frame
are used to form spectral peak sequences (SPS) for an audio interval. In the
second stage, these SPS are treated as time-series data of frequency locations.
proposed features are extracted as periodicity, average frequency and
statistical attributes of these spectral peak sequences. Speech-music
categorization is performed by learning binary classifiers on these features.
We have experimented with Gaussian mixture models, support vector machine and
random forest classifiers. Our proposal is validated on four datasets and
benchmarked against three baseline approaches. Experimental results establish
the validity of our proposal.
| 1 | 0 | 0 | 0 | 0 | 0 |
Kinodynamic Planning on Constraint Manifolds | This paper presents a motion planner for systems subject to kinematic and
dynamic constraints. The former appear when kinematic loops are present in the
system, such as in parallel manipulators, in robots that cooperate to achieve a
given task, or in situations involving contacts with the environment. The
latter are necessary to obtain realistic trajectories, taking into account the
forces acting on the system. The kinematic constraints make the state space
become an implicitly-defined manifold, which complicates the application of
common motion planning techniques. To address this issue, the planner
constructs an atlas of the state space manifold incrementally, and uses this
atlas both to generate random states and to dynamically simulate the steering
of the system towards such states. The resulting tools are then exploited to
construct a rapidly-exploring random tree (RRT) over the state space. To the
best of our knowledge, this is the first randomized kinodynamic planner for
implicitly-defined state spaces. The test cases presented in this paper
validate the approach in significantly-complex systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Binary Classification from Positive-Confidence Data | Can we learn a binary classifier from only positive data, without any
negative data or unlabeled data? We show that if one can equip positive data
with confidence (positive-confidence), one can successfully learn a binary
classifier, which we name positive-confidence (Pconf) classification. Our work
is related to one-class classification, which aims at "describing" the
positive class by clustering-related methods; however, one-class classification
does not have the ability to tune hyper-parameters, and its aim is not to
"discriminate" between positive and negative classes. For the Pconf classification
problem, we provide a simple empirical risk minimization framework that is
model-independent and optimization-independent. We theoretically establish the
consistency and an estimation error bound, and demonstrate the usefulness of
the proposed method for training deep neural networks through experiments.
| 1 | 0 | 0 | 1 | 0 | 0 |
Skewing Methods for Variance-Stabilizing Local Linear Regression Estimation | It is well-known that kernel regression estimators do not produce a constant
estimator variance over a domain. To correct this problem, Nishida and Kanazawa
(2015) proposed a variance-stabilizing (VS) local variable bandwidth for Local
Linear (LL) regression estimator. In contrast, Choi and Hall (1998) proposed
the skewing (SK) methods for a univariate LL estimator and constructed a convex
combination of one LL estimator and two SK estimators that are symmetrically
placed on both sides of the LL estimator (the convex combination (CC)
estimator) to eliminate higher-order terms in its asymptotic bias. To obtain a
CC estimator with a constant estimator variance without employing the VS local
variable bandwidth, the weight in the convex combination must be determined
locally to produce a constant estimator variance. In this study, we compare the
performances of two VS methods for a CC estimator and find cases in which the
weighting method can be superior to the VS bandwidth method in terms of the degree
of variance stabilization.
| 0 | 0 | 0 | 1 | 0 | 0 |
Proximodistal Exploration in Motor Learning as an Emergent Property of Optimization | To harness the complexity of their high-dimensional bodies during
sensorimotor development, infants are guided by patterns of freezing and
freeing of degrees of freedom. For instance, when learning to reach, infants
free the degrees of freedom in their arm proximodistally, i.e. from joints that
are closer to the body to those that are more distant. Here, we formulate and
study computationally the hypothesis that such patterns can emerge
spontaneously as the result of a family of stochastic optimization processes
(evolution strategies with covariance-matrix adaptation), without an innate
encoding of a maturational schedule. In particular, we present simulated
experiments with an arm where a computational learner progressively acquires
reaching skills through adaptive exploration, and we show that a proximodistal
organization appears spontaneously, which we denote PDFF (ProximoDistal
Freezing and Freeing of degrees of freedom). We also compare this emergent
organization between different arm morphologies -- from human-like to quite
unnatural ones -- to study the effect of different kinematic structures on the
emergence of PDFF. Keywords: human motor learning; proximo-distal exploration;
stochastic optimization; modelling; evolution strategies; cross-entropy
methods; policy search; morphology.
| 1 | 0 | 0 | 0 | 0 | 0 |
Near-optimal Sample Complexity Bounds for Robust Learning of Gaussian Mixtures via Compression Schemes | We prove that $\tilde{\Theta}(k d^2 / \varepsilon^2)$ samples are necessary
and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to
error $\varepsilon$ in total variation distance. This improves both the known
upper bounds and lower bounds for this problem. For mixtures of axis-aligned
Gaussians, we show that $\tilde{O}(k d / \varepsilon^2)$ samples suffice,
matching a known lower bound. Moreover, these results hold in the
agnostic-learning/robust-estimation setting as well, where the target
distribution is only approximately a mixture of Gaussians.
The upper bound is shown using a novel technique for distribution learning
based on a notion of `compression.' Any class of distributions that allows such
a compression scheme can also be learned with few samples. Moreover, if a class
of distributions has such a compression scheme, then so do the classes of
products and mixtures of those distributions. The core of our main result is
showing that the class of Gaussians in $\mathbb{R}^d$ admits a small-sized
compression scheme.
| 1 | 0 | 1 | 0 | 0 | 0 |
Induction of Non-Monotonic Logic Programs to Explain Boosted Tree Models Using LIME | We present a heuristic based algorithm to induce \textit{nonmonotonic} logic
programs that will explain the behavior of XGBoost trained classifiers. We use
the technique based on the LIME approach to locally select the most important
features contributing to the classification decision. Then, in order to explain
the model's global behavior, we propose the LIME-FOLD algorithm -- a
heuristic-based inductive logic programming (ILP) algorithm capable of learning
non-monotonic logic programs -- that we apply to a transformed dataset produced
by LIME. Our proposed approach is agnostic to the choice of the ILP algorithm.
Our experiments with UCI standard benchmarks suggest a significant improvement
in terms of classification evaluation metrics. Meanwhile, the number of induced
rules dramatically decreases compared to ALEPH, a state-of-the-art ILP system.
| 0 | 0 | 0 | 1 | 0 | 0 |
Combining Self-Supervised Learning and Imitation for Vision-Based Rope Manipulation | Manipulation of deformable objects, such as ropes and cloth, is an important
but challenging problem in robotics. We present a learning-based system where a
robot takes as input a sequence of images of a human manipulating a rope from
an initial to goal configuration, and outputs a sequence of actions that can
reproduce the human demonstration, using only monocular images as input. To
perform this task, the robot learns a pixel-level inverse dynamics model of
rope manipulation directly from images in a self-supervised manner, using about
60K interactions with the rope collected autonomously by the robot. The human
demonstration provides a high-level plan of what to do and the low-level
inverse model is used to execute the plan. We show that by combining the high
and low-level plans, the robot can successfully manipulate a rope into a
variety of target shapes using only a sequence of human-provided images for
direction.
| 1 | 0 | 0 | 0 | 0 | 0 |
Online learning with graph-structured feedback against adaptive adversaries | We derive upper and lower bounds for the policy regret of $T$-round online
learning problems with graph-structured feedback, where the adversary is
nonoblivious but assumed to have a bounded memory. We obtain upper bounds of
$\widetilde O(T^{2/3})$ and $\widetilde O(T^{3/4})$ for strongly-observable and
weakly-observable graphs, respectively, based on analyzing a variant of the
Exp3 algorithm. When the adversary is allowed a bounded memory of size 1, we
show that a matching lower bound of $\widetilde\Omega(T^{2/3})$ is achieved in
the case of full-information feedback. We also study the particular loss
structure of an oblivious adversary with switching costs, and show that in such
a setting, non-revealing strongly-observable feedback graphs achieve a lower
bound of $\widetilde\Omega(T^{2/3})$, as well.
| 0 | 0 | 0 | 1 | 0 | 0 |
Dynamical characteristics of electromagnetic field under conditions of total reflection | The dynamical characteristics of electromagnetic fields include energy,
momentum, angular momentum (spin) and helicity. We analyze their spatial
distributions near the planar interface between two transparent and
non-dispersive media, when the incident monochromatic plane wave with arbitrary
polarization is totally reflected, and an evanescent wave is formed in the
medium with lower optical density. Based on the recent arguments in favor of
the Minkowski definition of the electromagnetic momentum in a material medium
[Phys. Rev. A 83, 013823 (2011); 86, 055802 (2012); Phys. Rev. Lett. 119,
073901 (2017)], we derive the explicit expressions for the dynamical
characteristics in both media, with special attention to their behavior at the
interface. In particular, the "extraordinary" spin and momentum components
orthogonal to the plane of incidence are described, and the canonical (spin -
orbital) momentum decomposition is performed that contains no singular terms.
The field energy, helicity, the spin momentum and orbital momentum components
are everywhere regular but experience discontinuities at the interface; the
spin components parallel to the interface appear to be continuous, which
testifies to the consistency of the adopted Minkowski picture. The results
supply a meaningful example of the electromagnetic momentum decomposition, with
separation of spatial and polarization degrees of freedom, in inhomogeneous
media, and can be used in engineering the structured fields designed for
optical sorting, dispatching and micromanipulation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Functors induced by Cauchy extension of C*-algebras | In this paper we give three functors $\mathfrak{P}$, $[\cdot]_K$ and
$\mathfrak{F}$ on the category of C$^\ast$-algebras. The functor $\mathfrak{P}$
assigns to each C$^\ast$-algebra $\mathcal{A}$ a pre-C$^\ast$-algebra
$\mathfrak{P}(\mathcal{A})$ with completion $[\mathcal{A}]_K$. The functor
$[\cdot]_K$ assigns to each C$^\ast$-algebra $\mathcal{A}$ the Cauchy extension
$[\mathcal{A}]_K$ of $\mathcal{A}$ by a non-unital C$^\ast$-algebra
$\mathfrak{F}(\mathcal{A})$. Some properties of these functors are also given.
In particular, we show that the functors $[\cdot]_K$ and $\mathfrak{F}$ are
exact and the functor $\mathfrak{P}$ is normal exact.
| 0 | 0 | 1 | 0 | 0 | 0 |
Hierarchically cocompact classifying spaces for mapping class groups of surfaces | We define the notion of a hierarchically cocompact classifying space for a
family of subgroups of a group. Our main application is to show that the
mapping class group $\mbox{Mod}(S)$ of any connected oriented compact surface
$S$, possibly with punctures and boundary components and with negative Euler
characteristic, has a hierarchically cocompact model for the family of virtually
cyclic subgroups of dimension at most $\mbox{vcd} \mbox{Mod}(S)+1$. When the
surface is closed, we prove that this bound is optimal. In particular, this
answers a question of Lück for mapping class groups of surfaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Likelihood-Free Inference Framework for Population Genetic Data using Exchangeable Neural Networks | An explosion of high-throughput DNA sequencing in the past decade has led to
a surge of interest in population-scale inference with whole-genome data.
Recent work in population genetics has centered on designing inference methods
for relatively simple model classes, and few scalable general-purpose inference
techniques exist for more realistic, complex models. To develop such methods,
two inferential challenges need to be addressed: (1) population data are
exchangeable, calling for methods that efficiently exploit the symmetries of
the data, and (2) computing likelihoods is intractable as it requires
integrating over a set of correlated, extremely high-dimensional latent
variables. These challenges are traditionally tackled by likelihood-free
methods that use scientific simulators to generate datasets and reduce them to
hand-designed, permutation-invariant summary statistics, often leading to
inaccurate inference. In this work, we develop an exchangeable neural network
that performs summary statistic-free, likelihood-free inference. Our framework
can be applied in a black-box fashion across a variety of simulation-based
tasks, both within and outside biology. We demonstrate the power of our
approach on the recombination hotspot testing problem, outperforming the
state-of-the-art.
| 0 | 0 | 0 | 1 | 1 | 0 |
Coherent single-atom superradiance | Quantum effects, prevalent at the microscopic scale, are generally elusive in
macroscopic systems due to dissipation and decoherence. Quantum phenomena in
large systems emerge only when particles are strongly correlated as in
superconductors and superfluids. Cooperative interaction of correlated atoms
with electromagnetic fields leads to superradiance, the enhanced quantum
radiation phenomenon, exhibiting novel physics such as quantum Dicke phase and
ultranarrow linewidth for optical clocks. Recent research on imprinting atomic
correlations has directly demonstrated controllable collective atom-field
interactions. Here, we report cavity-mediated coherent single-atom
superradiance. Single atoms with predefined correlation traverse a high-Q
cavity one by one, emitting photons cooperatively with the atoms already gone
through the cavity. Such collective behavior of time-separated atoms is
mediated by the long-lived cavity field. As a result, a coherent field is
generated in the steady state, whose intensity varies as the square of the
number of traversing atoms during the cavity decay time, exhibiting more than
ten-fold enhancement from noncollective cases. The correlation among single
atoms is prepared with the aligned atomic phase achieved by nanometer-precision
position control of atoms with a nanohole-array aperture. The present work
deepens our understanding of the collective matter-light interaction and
provides an advanced platform for phase-controlled atom-field interactions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Recurrent Deterministic Policy Gradient Method for Bipedal Locomotion on Rough Terrain Challenge | This paper presents a deep learning framework that is capable of solving
partially observable locomotion tasks based on our novel interpretation of
Recurrent Deterministic Policy Gradient (RDPG). We study the bias of the sampled
error measure and its variance, induced by the partial observability of the
environment and subtrajectory sampling, respectively. Three major improvements
are introduced in our RDPG based learning framework: tail-step bootstrap of
interpolated temporal difference, initialisation of hidden state using past
trajectory scanning, and injection of external experiences learned by other
agents. The proposed learning framework was implemented to solve the
Bipedal-Walker challenge in OpenAI's gym simulation environment where only
partial state information is available. Our simulation study shows that the
autonomous behaviors generated by the RDPG agent are highly adaptive to a
variety of obstacles and enable the agent to effectively traverse rugged
terrain over long distances with a higher success rate than leading contenders.
| 1 | 0 | 0 | 0 | 0 | 0 |