title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance
---|---|---|---|---|---|---|---
Global Model Interpretation via Recursive Partitioning | In this work, we propose a simple but effective method to interpret black-box
machine learning models globally. That is, we use a compact binary tree, the
interpretation tree, to explicitly represent the most important decision rules
that are implicitly contained in the black-box machine learning models. This
tree is learned from the contribution matrix which consists of the
contributions of input variables to predicted scores for each single
prediction. To generate the interpretation tree, a unified process recursively
partitions the input variable space by maximizing the difference in the average
contribution of the split variable between the divided spaces. We demonstrate
the effectiveness of our method in diagnosing machine learning models on
multiple tasks. Also, it is useful for new knowledge discovery as such insights
are not easily identifiable when only looking at single predictions. In
general, our work makes it easier and more efficient for human beings to
understand machine learning models.
| 0 | 0 | 0 | 1 | 0 | 0 |
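The recursive-partitioning rule described in this abstract can be sketched in a few lines. The sketch below is ours, not the authors' implementation: function names, the toy input matrix `X`, and the contribution matrix `C` are all illustrative assumptions; the only idea taken from the abstract is choosing the split that maximizes the gap in the split variable's average contribution between the two sides.

```python
# Hedged sketch of the recursive-partitioning idea: split the input space on
# the (feature, threshold) pair that maximizes the difference in the split
# feature's average contribution between the two resulting regions.

def best_split(X, C):
    """Return (feature, threshold, gap) with the largest contribution gap."""
    best = None
    n_features = len(X[0])
    for j in range(n_features):
        values = sorted(set(row[j] for row in X))
        for t in values[:-1]:  # candidate thresholds between observed values
            left = [C[i][j] for i, row in enumerate(X) if row[j] <= t]
            right = [C[i][j] for i, row in enumerate(X) if row[j] > t]
            if not left or not right:
                continue
            gap = abs(sum(left) / len(left) - sum(right) / len(right))
            if best is None or gap > best[2]:
                best = (j, t, gap)
    return best

def build_tree(X, C, depth=0, max_depth=2, min_size=2):
    """Grow a small binary interpretation tree (plain dicts as nodes)."""
    split = best_split(X, C) if len(X) >= 2 * min_size and depth < max_depth else None
    if split is None:
        return {"size": len(X)}
    j, t, _ = split
    li = [i for i, row in enumerate(X) if row[j] <= t]
    ri = [i for i, row in enumerate(X) if row[j] > t]
    return {
        "feature": j, "threshold": t,
        "left": build_tree([X[i] for i in li], [C[i] for i in li], depth + 1),
        "right": build_tree([X[i] for i in ri], [C[i] for i in ri], depth + 1),
    }

# Toy data: feature 0 contributes strongly only when it exceeds 0.5.
X = [[0.1, 1.0], [0.2, 0.0], [0.3, 1.0], [0.7, 0.0], [0.8, 1.0], [0.9, 0.0]]
C = [[0.0, 0.1], [0.1, 0.1], [0.0, 0.1], [0.9, 0.1], [1.0, 0.1], [0.8, 0.1]]
tree = build_tree(X, C)
```

On this toy data the root split lands on feature 0, the variable whose contribution pattern actually changes across the input space.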
Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models | Adversarial learning of probabilistic models has recently emerged as a
promising alternative to maximum likelihood. Implicit models such as generative
adversarial networks (GAN) often generate better samples compared to explicit
models trained by maximum likelihood. Yet, GANs sidestep the characterization
of an explicit density which makes quantitative evaluations challenging. To
bridge this gap, we propose Flow-GANs, a generative adversarial network for
which we can perform exact likelihood evaluation, thus supporting both
adversarial and maximum likelihood training. When trained adversarially,
Flow-GANs generate high-quality samples but attain extremely poor
log-likelihood scores, inferior even to a mixture model memorizing the training
data; the opposite is true when trained by maximum likelihood. Results on MNIST
and CIFAR-10 demonstrate that hybrid training can attain high held-out
likelihoods while retaining visual fidelity in the generated samples.
| 1 | 0 | 0 | 1 | 0 | 0 |
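The property that makes the hybrid training above possible is that a flow's likelihood is exactly computable via the change-of-variables formula. The following one-dimensional affine "flow" (entirely our toy, not the paper's architecture) checks that formula against the closed-form Gaussian density it implies.

```python
import math

# Toy illustration: x = f(z) = a*z + b is invertible, so the exact
# log-likelihood of x under the model follows from change of variables:
#   log p_X(x) = log p_Z(f^{-1}(x)) - log|f'(z)|
# This exact density is what lets a flow support both MLE and adversarial
# training objectives.

def log_standard_normal(z):
    return -0.5 * (z * z + math.log(2 * math.pi))

def flow_log_likelihood(x, a, b):
    z = (x - b) / a                                   # invert the flow
    return log_standard_normal(z) - math.log(abs(a))  # Jacobian correction

# Sanity check against the closed-form N(b, a^2) log-density:
x, a, b = 1.3, 2.0, 0.5
direct = -0.5 * math.log(2 * math.pi * a * a) - (x - b) ** 2 / (2 * a * a)
ll = flow_log_likelihood(x, a, b)
```

The two quantities agree to machine precision, which is the sense in which "exact likelihood evaluation" holds for invertible generators.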
SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud | Inference using deep neural networks is often outsourced to the cloud since
it is a computationally demanding task. However, this raises a fundamental
issue of trust. How can a client be sure that the cloud has performed inference
correctly? A lazy cloud provider might use a simpler but less accurate model to
reduce its own computational load, or worse, maliciously modify the inference
results sent to the client. We propose SafetyNets, a framework that enables an
untrusted server (the cloud) to provide a client with a short mathematical
proof of the correctness of inference tasks that they perform on behalf of the
client. Specifically, SafetyNets develops and implements a specialized
interactive proof (IP) protocol for verifiable execution of a class of deep
neural networks, i.e., those that can be represented as arithmetic circuits.
Our empirical results on three- and four-layer deep neural networks demonstrate
that the run-time costs of SafetyNets for both the client and server are low.
SafetyNets detects any incorrect computations of the neural network by the
untrusted server with high probability, while achieving state-of-the-art
accuracy on the MNIST digit recognition (99.4%) and TIMIT speech recognition
tasks (75.22%).
| 1 | 0 | 0 | 0 | 0 | 0 |
The Malgrange Form and Fredholm Determinants | We consider the factorization problem of matrix symbols relative to a closed
contour, i.e., a Riemann-Hilbert problem, where the symbol depends analytically
on parameters. We show how to define a function $\tau$ which is locally
analytic on the space of deformations and that is expressed as a Fredholm
determinant of an operator of "integrable" type in the sense of
Its-Izergin-Korepin-Slavnov. The construction is not unique and the
non-uniqueness highlights the fact that the tau function is really the section
of a line bundle.
| 0 | 1 | 0 | 0 | 0 | 0 |
Programming from Metaphorisms | This paper presents a study of the metaphorism pattern of relational
specification, showing how it can be refined into recursive programs.
Metaphorisms express input-output relationships which preserve relevant
information while at the same time some intended optimization takes place. Text
processing, sorting, representation changers, etc., are examples of
metaphorisms. The kind of metaphorism refinement studied in this paper is a
strategy known as change of virtual data structure. By framing metaphorisms in
the class of (inductive) regular relations, sufficient conditions are given for
such implementations to be calculated using relation algebra. The strategy is
illustrated with examples including the derivation of the quicksort and
mergesort algorithms, showing what they have in common and what makes them
different from the very start of development.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the Conditional Distribution of a Multivariate Normal given a Transformation - the Linear Case | We show that the orthogonal projection operator onto the range of the adjoint
of a linear operator $T$ can be represented as $UT,$ where $U$ is an invertible
linear operator. Using this representation we obtain a decomposition of a
Normal random vector $Y$ as the sum of a linear transformation of $Y$ that is
independent of $TY$ and an affine transformation of $TY$. We then use this
decomposition to prove that the conditional distribution of a Normal random
vector $Y$ given a linear transformation $TY$ is again a multivariate
Normal distribution. This result is equivalent to the well-known result that
given a $k$-dimensional component of an $n$-dimensional Normal random vector,
where $k<n$, the conditional distribution of the remaining
$\left(n-k\right)$-dimensional component is an $\left(n-k\right)$-dimensional
multivariate Normal distribution, and sets the stage for approximating the
conditional distribution of $Y$ given $g\left(Y\right)$, where $g$ is a
continuously differentiable vector field.
| 0 | 0 | 1 | 1 | 0 | 0 |
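The decomposition in this abstract can be verified numerically. The sketch below is our illustration under simplifying assumptions (a specific $3\times 3$ covariance and a single-row $T$, both invented): setting $A = \mathrm{Cov}(Y, TY)\,\mathrm{Var}(TY)^{-1}$ makes $Y - A\,TY$ uncorrelated with $TY$, hence independent of it in the Gaussian case, so $Y \mid TY$ is again Normal.

```python
# Hedged numerical check of the decomposition: for Y ~ N(0, Sigma) and a
# linear map T (here a single row t),
#   A = Cov(Y, TY) Var(TY)^{-1} = Sigma t^T / (t Sigma t^T)
# makes Y - A*(TY) uncorrelated with TY.

Sigma = [[2.0, 0.5, 0.3],
         [0.5, 1.0, 0.2],
         [0.3, 0.2, 1.5]]
t = [1.0, -1.0, 2.0]

# Cov(Y, TY) = Sigma t^T  and  Var(TY) = t Sigma t^T
cov_y_ty = [sum(Sigma[i][j] * t[j] for j in range(3)) for i in range(3)]
var_ty = sum(t[i] * cov_y_ty[i] for i in range(3))
A = [c / var_ty for c in cov_y_ty]

# Cov(Y - A*TY, TY) = Cov(Y, TY) - A*Var(TY): should vanish componentwise.
residual_cov = [cov_y_ty[i] - A[i] * var_ty for i in range(3)]
```

The residual covariance vanishes by construction, which is exactly the independence that makes the conditional distribution Normal with mean $A\,TY$.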
Spacings Around An Order Statistic | We determine the joint limiting distribution of adjacent spacings around a
central, intermediate, or an extreme order statistic $X_{k:n}$ of a random
sample of size $n$ from a continuous distribution $F$. For central and
intermediate cases, normalized spacings in the left and right neighborhoods are
asymptotically i.i.d. exponential random variables. The associated independent
Poisson arrival processes are independent of $X_{k:n}$. For an extreme
$X_{k:n}$, the asymptotic independence property of spacings fails for $F$ in
the domain of attraction of Fréchet and Weibull ($\alpha \neq 1$)
distributions. This work also provides additional insight into the limiting
distribution for the number of observations around $X_{k:n}$ for all three
cases.
| 0 | 0 | 1 | 1 | 0 | 0 |
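The central-case claim above is easy to probe by simulation. The following Monte Carlo check is our illustration, not the paper's argument: for the median of an $\mathrm{Exp}(1)$ sample, the normalized adjacent spacing $n\,f(x_p)\,(X_{k+1:n}-X_{k:n})$ should be approximately standard exponential, so its sample mean should be close to 1.

```python
import random

# Monte Carlo sanity check of the central case. For Exp(1), the median is
# ln(2) and the density there is f(ln 2) = 1/2.
random.seed(0)
n, reps = 501, 1500
k = n // 2                 # index of the sample median
f_at_median = 0.5          # Exp(1) density at its median
total = 0.0
for _ in range(reps):
    xs = sorted(random.expovariate(1.0) for _ in range(n))
    total += n * f_at_median * (xs[k + 1] - xs[k])
mean_normalized_spacing = total / reps
```

With these sample sizes the mean lands near 1, consistent with the asymptotically i.i.d. exponential spacings stated in the abstract.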
On Nevanlinna - Cartan theory for holomorphic curves with Tsuji characteristics | In this paper, we prove some fundamental theorems for holomorphic curves on
an angular domain intersecting a hypersurface, a finite set of fixed
hyperplanes in general position, and a finite set of fixed hypersurfaces in
general position on a complex projective variety, at a given level of
truncation. As applications of the second main theorems for an angle, we
discuss the uniqueness problem for holomorphic curves in an angle instead of
the whole complex plane. In detail, we establish a uniqueness result for
holomorphic curves via the inverse image of a hypersurface. To our knowledge,
this is the first uniqueness result for holomorphic curves via the inverse
image of a hypersurface on an angular domain. On the complex plane, we obtain
a uniqueness result for holomorphic curves that improves some earlier results
[5, 10] in this direction.
| 0 | 0 | 1 | 0 | 0 | 0 |
Theoretical investigation of excitonic magnetism in LaSrCoO$_{4}$ | We use the LDA+U approach to search for possible ordered ground states of
LaSrCoO$_4$. We find a staggered arrangement of magnetic multipoles to be
stable over a broad range of Co $3d$ interaction parameters. This ordered state
can be described as a spin-density-wave-type condensate of $d_{xy} \otimes
d_{x^2-y^2}$ excitons carrying spin $S=1$. Further, we construct an effective
strong-coupling model, calculate the exciton dispersion and investigate closing
of the exciton gap, which marks the exciton condensation instability. Comparing
the layered LaSrCoO$_4$ with its pseudo cubic analog LaCoO$_3$, we find that
for the same interaction parameters the excitonic gap is smaller (possibly
vanishing) in the layered cobaltite.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Dark Matter Programme of the Cherenkov Telescope Array | In the last decades a vast amount of evidence for the existence of dark
matter has been accumulated. At the same time, many efforts have been
undertaken to identify what dark matter is. Indirect searches look at
places in the Universe where dark matter is believed to be abundant and search
for possible annihilation or decay signatures. The Cherenkov Telescope Array
(CTA) represents the next generation of imaging Cherenkov telescopes and, with
one site in the Southern hemisphere and one in the Northern hemisphere, will be
able to observe all the sky with unprecedented sensitivity and angular
resolution above a few tens of GeV. The CTA Consortium will undertake an
ambitious program of indirect dark matter searches for which we report here the
brightest prospects.
| 0 | 1 | 0 | 0 | 0 | 0 |
Modeling the Vertical Structure of Nuclear Starburst Discs: A Possible Source of AGN Obscuration at $z\sim 1$ | Nuclear starburst discs (NSDs) are star-forming discs that may be residing in
the nuclear regions of active galaxies at intermediate redshifts. One
dimensional (1D) analytical models developed by Thompson et al. (2005) show
that these discs can possess an inflationary atmosphere when dust is sublimated
on parsec scales. This makes NSDs a viable source of AGN obscuration. We model
the two dimensional (2D) structure of NSDs using an iterative method in order
to compute the explicit vertical solutions for a given annulus. These solutions
satisfy energy and hydrostatic balance, as well as the radiative transfer
equation. In comparison to the 1D model, the 2D calculation predicts a less
extensive expansion of the atmosphere by orders of magnitude at the
parsec/sub-parsec scale, but the new scale-height $h$ may still exceed the
radial distance $R$ for various physical conditions. A total of 192 NSD models
are computed across the input parameter space in order to predict distributions
of the line-of-sight column density $N_H$. Assuming a random distribution of
input parameters, the statistics yield 56% Type 1, 23% Compton-thin Type 2
(CN), and 21% Compton-thick (CK) AGNs. Depending on the viewing angle
($\theta$) of a particular NSD (with fixed physical conditions), any central
AGN can appear as Type 1, CN, or CK, which is consistent with the basic
unification theory of AGNs. Our results show that $\log[N_H(\text{cm}^{-2})]\in$ [23,25.5]
can be oriented at any $\theta$ from 0$^\circ$ to $\approx$80$^\circ$ due to
the degeneracy in the input parameters.
| 0 | 1 | 0 | 0 | 0 | 0 |
Rapid behavioral transitions produce chaotic mixing by a planktonic microswimmer | Despite their vast morphological diversity, many invertebrates have similar
larval forms characterized by ciliary bands, innervated arrays of beating cilia
that facilitate swimming and feeding. Hydrodynamics suggests that these bands
should tightly constrain the behavioral strategies available to the larvae;
however, their apparent ubiquity suggests that these bands also confer
substantial adaptive advantages. Here, we use hydrodynamic techniques to
investigate "blinking," an unusual behavioral phenomenon observed in many
invertebrate larvae in which ciliary bands across the body rapidly change
beating direction and produce transient rearrangement of the local flow field.
Using a general theoretical model combined with quantitative experiments on
starfish larvae, we find that the natural rhythm of larval blinking is
hydrodynamically optimal for inducing strong mixing of the local fluid
environment due to transient streamline crossing, thereby maximizing the
larvae's overall feeding rate. Our results are consistent with previous
hypotheses that filter feeding organisms may use chaotic mixing dynamics to
overcome circulation constraints in viscous environments, and they suggest
physical underpinnings for complex neurally driven behaviors in early-diverging
animals.
| 0 | 0 | 0 | 0 | 1 | 0 |
Testing for long memory in panel random-coefficient AR(1) data | It is well-known that random-coefficient AR(1) process can have long memory
depending on the index $\beta$ of the tail distribution function of the random
coefficient, if it is a regularly varying function at unity. We discuss
estimation of $\beta$ from panel data comprising N random-coefficient AR(1)
series, each of length T. The estimator of $\beta$ is constructed as a version
of the tail index estimator of Goldie and Smith (1987) applied to sample lag 1
autocorrelations of individual time series. Its asymptotic normality is derived
under certain conditions on N, T and some parameters of our statistical model.
Based on this result, we construct a statistical procedure to test if the panel
random-coefficient AR(1) data exhibit long memory. A simulation study
illustrates finite-sample performance of the introduced estimator and testing
procedure.
| 0 | 0 | 1 | 1 | 0 | 0 |
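The estimation idea above can be illustrated with a toy Hill-type computation. Everything below is our simplification: we draw the random coefficients directly from a Beta(1, 2) law, for which $P(a > 1-y) = y^2$ exactly (so the true tail index is $\beta = 2$), and apply a Goldie-Smith/Hill-style estimator to the distances $1-a$ from unity. The paper instead applies the estimator to sample lag-1 autocorrelations of the panel series.

```python
import math
import random

# Toy version of tail-index estimation at unity: if P(a > 1 - y) ~ c*y^beta,
# then Y = 1 - a is Pareto-like at 0 and a Hill-type estimator on the
# smallest m order statistics of Y recovers beta.
random.seed(1)
N, m = 5000, 200
# Inverse-CDF draw from Beta(1, 2): F(a) = 1 - (1 - a)^2, so a = 1 - sqrt(U).
coeffs = [1.0 - math.sqrt(random.random()) for _ in range(N)]
Y = sorted(1.0 - a for a in coeffs)          # distances from unity, ascending
beta_hat = m / sum(math.log(Y[m] / Y[i]) for i in range(m))
```

With $N = 5000$ draws and $m = 200$ tail observations, the estimate lands close to the true value $\beta = 2$, matching the asymptotic-normality claim in spirit.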
Hilsum-Skandalis maps as Frobenius adjunctions with application to geometric morphisms | Hilsum-Skandalis maps, from differential geometry, are studied in the context
of a cartesian category. It is shown that Hilsum-Skandalis maps can be
represented as stably Frobenius adjunctions. This leads to a new and more
general proof that Hilsum-Skandalis maps represent a universal way of inverting
essential equivalences between internal groupoids. To prove the representation
theorem, a new characterisation of the connected components adjunction of any
internal groupoid is given. The characterisation is that the adjunction is
covered by a stable Frobenius adjunction that is a slice and whose right
adjoint is monadic. Geometric morphisms can be represented as stably Frobenius
adjunctions. As applications of the study we show how it is easy to recover
properties of geometric morphisms, seeing them as aspects of properties of
stably Frobenius adjunctions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Pairs of commuting isometries - I | We present an explicit version of Berger, Coburn and Lebow's classification
result for pure pairs of commuting isometries in the sense of an explicit
recipe for constructing pairs of commuting isometric multipliers with precise
coefficients. We describe a complete set of (joint) unitary invariants and
compare Berger, Coburn and Lebow's representations with other natural
analytic representations of pure pairs of commuting isometries. Finally, we
study the defect operators of pairs of commuting isometries.
| 0 | 0 | 1 | 0 | 0 | 0 |
Sampled-Data Boundary Feedback Control of 1-D Parabolic PDEs | The paper provides results for the application of boundary feedback control
with Zero-Order-Hold (ZOH) to 1-D linear parabolic systems on bounded domains.
It is shown that the continuous-time boundary feedback applied in a
sample-and-hold fashion guarantees closed-loop exponential stability, provided
that the sampling period is sufficiently small. Two different continuous-time
feedback designs are considered: the reduced model design and the backstepping
design. The obtained results provide stability estimates for weighted 2-norms
of the state and guarantee robustness with respect to perturbations of the
sampling schedule.
| 1 | 0 | 1 | 0 | 0 | 0 |
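The sample-and-hold phenomenon above, that a continuous-time stabilizing feedback survives Zero-Order-Hold sampling only when the period is small enough, can be seen already in a scalar caricature. The example below is entirely ours (a scalar ODE, not the paper's parabolic PDE setting) and the numbers are illustrative.

```python
import math

# Scalar caricature of ZOH sampling: for x' = a*x + b*u with u = K*x(t_j)
# held constant on [t_j, t_j + h), integrating over one period gives
#   x(t_{j+1}) = M(h) * x(t_j),   M(h) = e^{a h} + (e^{a h} - 1)/a * b * K,
# and the sampled closed loop is exponentially stable iff |M(h)| < 1.

a, b, K = 1.0, 1.0, -3.0      # continuous-time loop a + b*K = -2 is stable

def zoh_multiplier(h):
    return math.exp(a * h) + (math.exp(a * h) - 1.0) / a * b * K

stable_small = abs(zoh_multiplier(0.1))    # fast sampling: contraction
unstable_large = abs(zoh_multiplier(2.0))  # slow sampling: divergence
```

Fast sampling ($h = 0.1$) yields a per-period multiplier below 1, while slow sampling ($h = 2$) destroys stability, mirroring the "sufficiently small sampling period" condition in the abstract.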
PEN as self-vetoing structural material | Polyethylene Naphthalate (PEN) is a mechanically very favorable polymer.
Earlier it was found that thin foils made from PEN can have very high
radio-purity compared to other commercially available foils. In fact, PEN is
already in use for low background signal transmission applications (cables).
Recently it has been realized that PEN also has favorable scintillating
properties. In combination, this makes PEN a very promising candidate as a
self-vetoing structural material in low background experiments. Components
instrumented with light detectors could be built from PEN. This includes
detector holders, detector containments, signal transmission links, etc. The
current R\&D towards qualification of PEN as a self-vetoing low background
structural material is presented.
| 0 | 1 | 0 | 0 | 0 | 0 |
Quantum Cohomology under Birational Maps and Transitions | This is an expanded version of the third author's lecture in String-Math 2015
at Sanya. It summarizes some of our works in quantum cohomology.
After reviewing the quantum Lefschetz and quantum Leray--Hirsch, we discuss
their applications to the functoriality properties under special smooth flops,
flips and blow-ups. Finally, for conifold transitions of Calabi--Yau 3-folds,
formulations for small resolutions (blow-ups along Weil divisors) are sketched.
| 0 | 0 | 1 | 0 | 0 | 0 |
A conjecture on the zeta functions of pairs of ternary quadratic forms | We consider the prehomogeneous vector space of pairs of ternary quadratic
forms. For the lattice of pairs of integral ternary quadratic forms and its
dual lattice, there are six zeta functions associated with the
prehomogeneous vector space. We present a conjecture which states that there
are simple relations among the six zeta functions. We prove that the
coefficients coincide on fundamental discriminants.
| 0 | 0 | 1 | 0 | 0 | 0 |
Abstract Syntax Networks for Code Generation and Semantic Parsing | Tasks like code generation and semantic parsing require mapping unstructured
(or partially structured) inputs to well-formed, executable outputs. We
introduce abstract syntax networks, a modeling framework for these problems.
The outputs are represented as abstract syntax trees (ASTs) and constructed by
a decoder with a dynamically-determined modular structure paralleling the
structure of the output tree. On the benchmark Hearthstone dataset for code
generation, our model obtains 79.2 BLEU and 22.7% exact match accuracy,
compared to previous state-of-the-art values of 67.1 and 6.1%. Furthermore, we
perform competitively on the Atis, Jobs, and Geo semantic parsing datasets with
no task-specific engineering.
| 1 | 0 | 0 | 1 | 0 | 0 |
Theoretical derivation of laser-dressed atomic states by using a fractal space | The derivation of approximate wave functions for an electron submitted to
both a coulomb and a time-dependent laser electric fields, the so-called
Coulomb-Volkov (CV) state, is addressed. Although its derivation for continuum
states does not present any particular problem within the framework of the
standard theory of quantum mechanics (QM), difficulties arise when considering
an initially bound atomic state. Indeed, the natural way of translating the
unperturbed momentum by the laser vector potential is no longer possible,
since a bound state does not have a plane-wave form explicitly containing a
momentum. The use of a fractal space makes it possible to define a momentum
naturally for a bound wave function. Within this framework, it is shown how the derivation of
laser-dressed bound states can be performed. Based on a generalized eikonal
approach, a new expression for the laser-dressed states is also derived, fully
symmetric relative to the continuum or bound nature of the initial unperturbed
wave function. It includes an additional crossed term in the Volkov phase which
was not obtained within the standard theory of quantum mechanics. The
derivations within this fractal framework have highlighted other possible ways
to derive approximate laser-dressed states in QM. After comparing the various
obtained wave functions, an application to the prediction of the ionization
probability of hydrogen targets by attosecond XUV pulses within the sudden
approximation is provided. This approach allows predictions in various
regimes depending on the laser intensity, going from the non-resonant
multiphoton absorption to tunneling and barrier-suppression ionization.
| 0 | 1 | 0 | 0 | 0 | 0 |
Implications of the interstellar object 1I/'Oumuamua for planetary dynamics and planetesimal formation | 'Oumuamua, the first bona-fide interstellar planetesimal, was discovered
passing through our Solar System on a hyperbolic orbit. This object was likely
dynamically ejected from an extrasolar planetary system after a series of close
encounters with gas giant planets. To account for 'Oumuamua's detection, simple
arguments suggest that ~1 Earth mass of planetesimals is ejected per Solar
mass of Galactic stars. However, that value assumes mono-sized planetesimals.
If the planetesimal mass distribution is instead top-heavy, the inferred mass in
interstellar planetesimals increases to an implausibly high value. The tension
between theoretical expectations for the planetesimal mass function and the
observation of 'Oumuamua can be relieved if a small fraction (~0.1-1%) of
planetesimals are tidally disrupted on the pathway to ejection into
'Oumuamua-sized fragments. Using a large suite of simulations of giant planet
dynamics including planetesimals, we confirm that 0.1-1% of planetesimals pass
within the tidal disruption radius of a gas giant on their pathway to ejection.
'Oumuamua may thus represent a surviving fragment of a disrupted planetesimal.
Finally, we argue that an asteroidal composition is dynamically disfavoured for
'Oumuamua, as asteroidal planetesimals are both less abundant and ejected at a
lower efficiency than cometary planetesimals.
| 0 | 1 | 0 | 0 | 0 | 0 |
Automatic symbolic computation for discontinuous Galerkin finite element methods | The implementation of discontinuous Galerkin finite element methods (DGFEMs)
represents a very challenging computational task, particularly for systems of
coupled nonlinear PDEs, including multiphysics problems, whose parameters may
consist of power series or functionals of the solution variables. To this end, the
exploitation of symbolic algebra to express a given DGFEM approximation of a
PDE problem within a high level language, whose syntax closely resembles the
mathematical definition, is an invaluable tool. Indeed, this then facilitates
the automatic assembly of the resulting system of (nonlinear) equations, as
well as the computation of Fréchet derivative(s) of the DGFEM scheme, needed,
for example, within a Newton-type solver. However, even exploiting symbolic
algebra, the discretisation of coupled systems of PDEs can still be extremely
verbose and hard to debug. Hence, in this article we develop a further layer
of abstraction by designing a class structure for the automatic computation of
DGFEM formulations. This work has been implemented within the FEniCS package,
based on exploiting the Unified Form Language. Numerical examples are presented
which highlight the simplicity of implementation of DGFEMs for the numerical
approximation of a range of PDE problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Methodological Framework for Determining the Land Eligibility of Renewable Energy Sources | The quantity and distribution of land which is eligible for renewable energy
sources is fundamental to the role these technologies will play in future
energy systems. As it stands, however, the current state of land eligibility
investigation is found to be insufficient to meet the demands of the future
energy modelling community. Three key areas are identified as the predominant
causes of this: inconsistent criteria definitions, inconsistent or unclear
methodologies, and inconsistent dataset usage. To combat these issues, a land
eligibility framework is developed and described in detail. The validity of
this framework is then shown via the recreation of land eligibility results
found in the literature, showing strong agreement in the majority of cases.
Following this, the framework is used to perform an evaluation of land
eligibility criteria within the European context, whereby the relative
importance of commonly considered criteria is compared.
| 1 | 0 | 0 | 0 | 0 | 0 |
Day-ahead electricity price forecasting with high-dimensional structures: Univariate vs. multivariate modeling frameworks | We conduct an extensive empirical study on short-term electricity price
forecasting (EPF) to address the long-standing question of whether the optimal model
structure for EPF is univariate or multivariate. We provide evidence that
despite a minor edge in predictive performance overall, the multivariate
modeling framework does not uniformly outperform the univariate one across all
12 considered datasets, seasons of the year or hours of the day, and at times
is outperformed by the latter. This is an indication that combining advanced
structures or the corresponding forecasts from both modeling approaches can
bring a further improvement in forecasting accuracy. We show that this indeed
can be the case, even for a simple averaging scheme involving only two models.
Finally, we also analyze variable selection for the best-performing
high-dimensional lasso-type models, thus providing guidelines for structuring
better-performing forecasting model designs.
| 0 | 0 | 0 | 1 | 0 | 1 |
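The combination point above, that even a simple average of two model families can beat both, is easy to demonstrate numerically. The prices and forecast errors below are fabricated for illustration; the mechanism is that partially canceling errors shrink the combined RMSE.

```python
import math

# Toy forecast combination: averaging two forecasts whose errors partly
# cancel yields a lower RMSE than either input forecast.

actual = [50.0, 52.0, 48.0, 51.0]
univariate = [51.0, 51.0, 49.0, 50.0]    # errors +1, -1, +1, -1
multivariate = [49.0, 53.0, 49.0, 50.0]  # errors -1, +1, +1, -1

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

combined = [(u + m) / 2 for u, m in zip(univariate, multivariate)]
rmse_uni = rmse(univariate, actual)
rmse_multi = rmse(multivariate, actual)
rmse_comb = rmse(combined, actual)
```

Here both inputs have RMSE 1, while the simple average achieves about 0.71, the "simple averaging scheme involving only two models" effect the abstract reports.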
On Strong Small Loop Transfer Spaces Relative to Subgroups of Fundamental Groups | Let $H$ be a subgroup of the fundamental group $\pi_{1}(X,x_{0})$. By
extending the concept of strong SLT space to a relative version with respect to
$H$, strong $H$-SLT space, first, we investigate the existence of a covering
map for strong $H$-SLT spaces. Moreover, we show that a semicovering map is a
covering map in the presence of strong $H$-SLT property. Second, we present
conditions under which the whisker topology agrees with the lasso topology on
$\widetilde{X}_{H}$. Also, we study the relationship between open subsets of
$\pi_{1}^{wh}(X,x_{0})$ and $\pi_{1}^{l}(X,x_{0})$. Finally, we give some
examples to justify the definition and study of strong $H$-SLT spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
Headphones on the wire | We analyze a dataset providing the complete information on the effective
plays of thousands of music listeners during several months. Our analysis
confirms a number of properties previously highlighted by research based on
interviews and questionnaires, but also uncover new statistical patterns, both
at the individual and collective levels. In particular, we show that
individuals follow common listening rhythms characterized by the same
fluctuations, alternating heavy and light listening periods, and can be
classified in four groups of similar sizes according to their temporal habits
: 'early birds', 'working hours listeners', 'evening listeners' and 'night
owls'. We provide a detailed radioscopy of the listeners' interplay between
repeated listening and discovery of new content. We show that different genres
encourage different listening habits, from Classical or Jazz music with a more
balanced listening among different songs, to Hip Hop and Dance with a more
heterogeneous distribution of plays. Finally, we provide measures of how
distant people are from each other in terms of common songs. In particular, we
show that the number of songs $S$ a DJ should play to a random audience of size
$N$ such that everyone hears at least one song he/she currently listens to, is
of the form $S\sim N^\alpha$ where the exponent depends on the music genre and
is in the range $[0.5,0.8]$. More generally, our results show that the recent
access to virtually infinite catalogs of songs does not promote exploration for
novelty, but that most users favor repetition of the same songs.
| 1 | 1 | 0 | 0 | 0 | 0 |
Li-intercalated Graphene on SiC(0001): an STM study | We present a systematic study via scanning tunneling microscopy (STM) and
low-energy electron diffraction (LEED) of the effect of lithium (Li) exposure
on graphene on silicon carbide (SiC). We have investigated Li deposition
both on epitaxial monolayer graphene and on buffer layer surfaces on the
Si-face of SiC. At room temperature, Li immediately intercalates at the
interface between the SiC substrate and the buffer layer and transforms the
buffer layer into a quasi-free-standing graphene. This conclusion is
substantiated by LEED and STM evidence. We show that intercalation occurs
through the SiC step sites or graphene defects. We obtain a good quantitative
agreement between the number of Li atoms deposited and the number of available
Si bonds at the surface of the SiC crystal. Through STM analysis, we are able
to determine the interlayer distance induced by Li-intercalation at the
interface between the SiC substrate and the buffer layer.
| 0 | 1 | 0 | 0 | 0 | 0 |
Generative Adversarial Perturbations | In this paper, we propose novel generative models for creating adversarial
examples, slightly perturbed images resembling natural images but maliciously
crafted to fool pre-trained models. We present trainable deep neural networks
for transforming images to adversarial perturbations. Our proposed models can
produce image-agnostic and image-dependent perturbations for both targeted and
non-targeted attacks. We also demonstrate that similar architectures can
achieve impressive results in fooling classification and semantic segmentation
models, obviating the need for hand-crafting attack methods for each task.
Using extensive experiments on challenging high-resolution datasets such as
ImageNet and Cityscapes, we show that our perturbations achieve high fooling
rates with small perturbation norms. Moreover, our attacks are considerably
faster than current iterative methods at inference time.
| 1 | 0 | 0 | 1 | 0 | 0 |
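A minimal gradient-based perturbation, of the kind the trained generators above are compared against, can be shown on a fixed logistic-regression "victim". This sign-gradient step is a classical FGSM-like baseline, not the paper's generative model; the weights, input, and label are invented for illustration.

```python
import math

# Sign-gradient (FGSM-like) perturbation against a frozen logistic model:
# nudging the input in the direction that increases the loss raises the loss
# even though each coordinate moves by at most eps.

w, b = [2.0, -1.0, 0.5], 0.1          # frozen victim model
x, y = [0.3, -0.2, 0.8], 1.0          # clean input and its true label

def loss(inp):
    z = sum(wi * xi for wi, xi in zip(w, inp)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    return -math.log(p) if y == 1.0 else -math.log(1.0 - p)

# For sigmoid cross-entropy, d(loss)/dx_i = (p - y) * w_i.
z = sum(wi * xi for wi, xi in zip(w, x)) + b
p = 1.0 / (1.0 + math.exp(-z))
grad = [(p - y) * wi for wi in w]

eps = 0.1
sign = lambda v: (v > 0) - (v < 0)
x_adv = [xi + eps * sign(g) for xi, g in zip(x, grad)]
clean_loss, adv_loss = loss(x), loss(x_adv)
```

The paper's contribution is to replace this per-input gradient computation with a trained network that emits perturbations directly, which is what makes its attacks faster at inference time.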
Lifted Polymatroid Inequalities for Mean-Risk Optimization with Indicator Variables | We investigate a mixed 0-1 conic quadratic optimization problem with
indicator variables arising in mean-risk optimization. The indicator variables
are often used to model non-convexities such as fixed charges or cardinality
constraints. Observing that the problem reduces to a submodular function
minimization for its binary restriction, we derive three classes of strong
convex valid inequalities by lifting the polymatroid inequalities on the binary
variables. Computational experiments demonstrate the effectiveness of the
inequalities in strengthening the convex relaxations and, thereby, improving
the solution times for mean-risk problems with fixed charges and cardinality
constraints significantly.
| 0 | 0 | 1 | 0 | 0 | 0 |
Illusion and Reality in the Atmospheres of Exoplanets | The atmospheres of exoplanets reveal all their properties beyond mass,
radius, and orbit. Based on bulk densities, we know that exoplanets larger than
1.5 Earth radii must have gaseous envelopes, hence atmospheres. We discuss
contemporary techniques for characterization of exoplanetary atmospheres. The
measurements are difficult, because - even in current favorable cases - the
signals can be as small as 0.001-percent of the host star's flux. Consequently,
some early results have been illusory, and not confirmed by subsequent
investigations. Prominent illusions to date include polarized scattered light,
temperature inversions, and the existence of carbon planets. The field moves
from the first tentative and often incorrect conclusions, converging to the
reality of exoplanetary atmospheres. That reality is revealed using transits
for close-in exoplanets, and direct imaging for young or massive exoplanets in
distant orbits. Several atomic and molecular constituents have now been
robustly detected in exoplanets as small as Neptune. In our current
observations, the effects of clouds and haze appear ubiquitous. Topics at the
current frontier include the measurement of heavy element abundances in giant
planets, detection of carbon-based molecules, measurement of atmospheric
temperature profiles, definition of heat circulation efficiencies for tidally
locked planets, and the push to detect and characterize the atmospheres of
super-Earths. Future observatories for this quest include the James Webb Space
Telescope, and the new generation of Extremely Large Telescopes on the ground.
On a more distant horizon, NASA's concepts for the HabEx and LUVOIR missions
could extend the study of exoplanetary atmospheres to true twins of Earth.
| 0 | 1 | 0 | 0 | 0 | 0 |
Actors without Borders: Amnesty for Imprisoned State | In concurrent systems, some form of synchronisation is typically needed to
achieve data-race freedom, which is important for correctness and safety. In
actor-based systems, messages are exchanged concurrently but executed
sequentially by the receiving actor. By relying on isolation and non-sharing,
an actor can access its own state without fear of data-races, and the internal
behavior of an actor can be reasoned about sequentially.
However, actor isolation is sometimes too strong to express useful patterns.
For example, letting the iterator of a data-collection alias the internal
structure of the collection allows a more efficient implementation than if each
access requires going through the interface of the collection. With full
isolation, in order to maintain sequential reasoning the iterator must be made
part of the collection, which bloats the interface of the collection and means
that a client must have access to the whole data-collection in order to use the
iterator.
In this paper, we propose a programming language construct that enables a
relaxation of isolation but without sacrificing sequential reasoning. We
formalise the mechanism in a simple lambda calculus with actors and passive
objects, and show how an actor may leak parts of its internal state while
ensuring that any interaction with this data is still synchronised.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bayesian Estimation of Gaussian Graphical Models with Predictive Covariance Selection | Gaussian graphical models are used for determining conditional relationships
between variables. This is accomplished by identifying off-diagonal elements in
the inverse-covariance matrix that are non-zero. When the ratio of variables
(p) to observations (n) approaches one, the maximum likelihood estimator of the
covariance matrix becomes unstable and requires shrinkage estimation. Whereas
several classical (frequentist) methods have been introduced to address this
issue, fully Bayesian methods remain relatively uncommon in practice and
methodological literatures. Here we introduce a Bayesian method for estimating
sparse matrices, in which conditional relationships are determined with
projection predictive selection. With this method, which uses Kullback-Leibler
divergence and cross-validation for neighborhood selection, we reconstruct the
inverse-covariance matrix in both low and high-dimensional settings. Through
simulation and applied examples, we characterized performance compared to
several Bayesian methods and the graphical lasso, in addition to TIGER, which
similarly estimates the inverse-covariance matrix with regression. Our results
demonstrate that projection predictive selection not only has superior
performance compared to selecting the most probable model and Bayesian model
averaging, particularly for high-dimensional data, but also compared to the
Bayesian and classical glasso methods. Further, we show that estimating the
inverse-covariance matrix with multiple regression is often more accurate and
more efficient than direct estimation, with respect to various loss functions. In
low-dimensional settings, we demonstrate that projection predictive selection
also provides competitive performance. We have implemented the projection
predictive method for covariance selection in the R package GGMprojpred.
| 0 | 0 | 0 | 1 | 0 | 0 |
ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models | While deep learning models have achieved state-of-the-art accuracies for many
prediction tasks, understanding these models remains a challenge. Despite the
recent interest in developing visual tools to help users interpret deep
learning models, the complexity and wide variety of models deployed in
industry, and the large-scale datasets that they used, pose unique design
challenges that are inadequately addressed by existing work. Through
participatory design sessions with over 15 researchers and engineers at
Facebook, we have developed, deployed, and iteratively improved ActiVis, an
interactive visualization system for interpreting large-scale deep learning
models and results. By tightly integrating multiple coordinated views, such as
a computation graph overview of the model architecture, and a neuron activation
view for pattern discovery and comparison, users can explore complex deep
neural network models at both the instance- and subset-level. ActiVis has been
deployed on Facebook's machine learning platform. We present case studies with
Facebook researchers and engineers, and usage scenarios of how ActiVis may work
with different models.
| 1 | 0 | 0 | 1 | 0 | 0 |
A topological lower bound for the energy of a unit vector field on a closed Euclidean hypersurface | For a unit vector field on a closed immersed Euclidean hypersurface
$M^{2n+1}$, $n\geq 1$, we exhibit a nontrivial lower bound for its energy which
depends on the degree of the Gauss map of the immersion. When the hypersurface
is the unit sphere $\mathbb{S}^{2n+1}$, immersed with degree one, this lower
bound corresponds to a well established value from the literature. We introduce
a list of functionals $\mathcal{B}_k$ on a compact Riemannian manifold $M^{m}$,
$1\leq k\leq m$, and show that, when the underlying manifold is a closed
hypersurface, these functionals possess similar properties regarding the degree
of the immersion. In addition, we prove that Hopf flows minimize
$\mathcal{B}_n$ on $\mathbb{S}^{2n+1}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Permutation invariant proper polyhedral cones and their Lyapunov rank | The Lyapunov rank of a proper cone $K$ in a finite dimensional real Hilbert
space is defined as the dimension of the space of all Lyapunov-like
transformations on $K$, or equivalently, the dimension of the Lie algebra of
the automorphism group of $K$. This (rank) measures the number of linearly
independent bilinear relations needed to express a complementarity system on
$K$ (that arises, for example, from a linear program or a complementarity
problem on the cone). Motivated by the problem of describing spectral/proper
cones where the complementarity system can be expressed as a square system
(that is, where the Lyapunov rank is greater than or equal to the dimension of the
ambient space), we consider proper polyhedral cones in $\mathbb{R}^n$ that are
permutation invariant. For such cones we show that the Lyapunov rank is either
1 (in which case, the cone is irreducible) or n (in which case, the cone is
isomorphic to the nonnegative orthant in $\mathbb{R}^n$). In the latter case,
we show that the corresponding spectral cone is isomorphic to a symmetric cone.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Relation of External and Internal Feature Interactions: A Case Study | Detecting feature interactions is imperative for accurately predicting
performance of highly-configurable systems. State-of-the-art performance
prediction techniques rely on supervised machine learning for detecting feature
interactions, which, in turn, relies on time-consuming performance measurements
to obtain training data. By providing information about potentially interacting
features, we can reduce the number of required performance measurements and
make the overall performance prediction process more time efficient. We expect
that the information about potentially interacting features can be obtained by
statically analyzing the source code of a highly-configurable system, which is
computationally cheaper than performing multiple performance measurements. To
this end, we conducted a qualitative case study in which we explored the
relation between control-flow feature interactions (detected through static
program analysis) and performance feature interactions (detected by performance
prediction techniques using performance measurements). We found that a relation
exists, which can potentially be exploited to predict performance interactions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Properties of the water to boron nitride interaction: from zero to two dimensions with benchmark accuracy | Molecular adsorption on surfaces plays an important part in catalysis,
corrosion, desalination, and various other processes that are relevant to
industry and in nature. As a complement to experiments, accurate adsorption
energies can be obtained using various sophisticated electronic structure
methods that can now be applied to periodic systems. The adsorption energy of
water on boron nitride substrates, going from zero to 2-dimensional
periodicity, is particularly interesting as it calls for an accurate treatment
of polarizable electrostatics and dispersion interactions, as well as posing a
practical challenge to experiments and electronic structure methods. Here, we
present reference adsorption energies, static polarizabilities, and dynamic
polarizabilities, for water on BN substrates of varying size and dimension.
Adsorption energies are computed with coupled cluster theory, fixed-node
quantum Monte Carlo (FNQMC), the random phase approximation (RPA), and second
order M{\o}ller-Plesset (MP2) theory. These explicitly correlated methods are
found to agree in molecular as well as periodic systems. The best estimate of
the water/h-BN adsorption energy is $-107\pm7$ meV from FNQMC. In addition, the
water adsorption energy on the BN substrates could be expected to grow
monotonically with the size of the substrate due to increased dispersion
interactions but interestingly, this is not the case here. This peculiar
finding is explained using the static polarizabilities and molecular dispersion
coefficients of the systems, as computed from time-dependent density functional
theory (DFT). Dynamic as well as static polarizabilities are found to be highly
anisotropic in these systems. In addition, the many-body dispersion method in
DFT emerges as a particularly useful estimation of finite size effects for
other expensive, many-body wavefunction based methods.
| 0 | 1 | 0 | 0 | 0 | 0 |
Theory of Large Intrinsic Spin Hall Effect in Iridate Semimetals | We theoretically investigate the mechanism to generate large intrinsic spin
Hall effect in iridates or more broadly in 5d transition metal oxides with
strong spin-orbit coupling. We demonstrate such a possibility by taking the
example of orthorhombic perovskite iridate with nonsymmorphic lattice symmetry,
SrIrO$_3$, which is a three-dimensional semimetal with nodal line spectrum. It
is shown that large intrinsic spin Hall effect arises in this system via the
spin-Berry curvature originating from the nearly degenerate electronic spectra
surrounding the nodal line. This effect exists even when the nodal line is
gently gapped out, due to the persistent nearly degenerate electronic
structure, suggesting a distinct robustness. The magnitude of the spin Hall
conductivity is shown to be comparable to that of the best-known examples, such
as doped topological insulators, and the largest among transition metal oxides. To gain
further insight, we compute the intrinsic spin Hall conductivity in both of the
bulk and thin film systems. We find that the geometric confinement in thin
films leads to significant modifications of the electronic states, leading to
even bigger spin Hall conductivity in certain cases. We compare our findings
with the recent experimental report on the discovery of large spin Hall effect
in SrIrO$_3$ thin films.
| 0 | 1 | 0 | 0 | 0 | 0 |
A pathway-based kernel boosting method for sample classification using genomic data | The analysis of cancer genomic data has long suffered "the curse of
dimensionality". Sample sizes for most cancer genomic studies are a few
hundred at most, while there are tens of thousands of genomic features studied.
Various methods have been proposed to leverage prior biological knowledge, such
as pathways, to more effectively analyze cancer genomic data. Most of the
methods focus on testing marginal significance of the associations between
pathways and clinical phenotypes. They can identify relevant pathways, but do
not involve predictive modeling. In this article, we propose a Pathway-based
Kernel Boosting (PKB) method for integrating gene pathway information for
sample classification, where we use kernel functions calculated from each
pathway as base learners and learn the weights through iterative optimization
of the classification loss function. We apply PKB and several competing methods
to three cancer studies with pathological and clinical information, including
tumor grade, stage, tumor sites, and metastasis status. Our results show that
PKB outperforms other methods, and identifies pathways relevant to the outcome
variables.
| 0 | 0 | 0 | 1 | 1 | 0 |
Analyzing the Approximation Error of the Fast Graph Fourier Transform | The graph Fourier transform (GFT) is in general dense and requires O(n^2)
time to compute and O(n^2) memory space to store. In this paper, we pursue our
previous work on the approximate fast graph Fourier transform (FGFT). The FGFT
is computed via a truncated Jacobi algorithm, and is defined as the product of
J Givens rotations (very sparse orthogonal matrices). The truncation parameter,
J, represents a trade-off between precision of the transform and time of
computation (and storage space). We explore further this trade-off and study,
on different types of graphs, how the approximation error is distributed along
the spectrum.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deriving Enhanced Geographical Representations via Similarity-based Spectral Analysis: Predicting Colorectal Cancer Survival Curves in Iowa | Neural networks are capable of learning rich, nonlinear feature
representations shown to be beneficial in many predictive tasks. In this work,
we use such models to explore different geographical feature representations in
the context of predicting colorectal cancer survival curves for patients in the
state of Iowa, spanning the years 1989 to 2013. Specifically, we compare model
performance using "area between the curves" (ABC) to assess (a) whether
survival curves can be reasonably predicted for colorectal cancer patients in
the state of Iowa, (b) whether geographical features improve predictive
performance, (c) whether a simple binary representation, or a richer, spectral
analysis-elicited representation perform better, and (d) whether spectral
analysis-based representations can be improved upon by leveraging
geographically-descriptive features. In exploring (d), we devise a
similarity-based spectral analysis procedure, which allows for the combination
of geographically relational and geographically descriptive features. Our
findings suggest that survival curves can be reasonably estimated on average,
with predictive performance deviating at the five-year survival mark among all
models. We also find that geographical features improve predictive performance,
and that better performance is obtained using richer, spectral
analysis-elicited features. Furthermore, we find that similarity-based spectral
analysis-elicited representations improve upon the original spectral analysis
results by approximately 40%.
| 0 | 0 | 0 | 1 | 0 | 0 |
On the Integrality Gap of the Prize-Collecting Steiner Forest LP | In the prize-collecting Steiner forest (PCSF) problem, we are given an
undirected graph $G=(V,E)$, edge costs $\{c_e\geq 0\}_{e\in E}$, terminal pairs
$\{(s_i,t_i)\}_{i=1}^k$, and penalties $\{\pi_i\}_{i=1}^k$ for each terminal
pair; the goal is to find a forest $F$ to minimize $c(F)+\sum_{i:
(s_i,t_i)\text{ not connected in }F}\pi_i$. The Steiner forest problem can be
viewed as the special case where $\pi_i=\infty$ for all $i$. It was widely
believed that the integrality gap of the natural (and well-studied)
linear-programming (LP) relaxation for PCSF is at most 2. We dispel this belief
by showing that the integrality gap of this LP is at least $9/4$. This holds
even for planar graphs. We also show that using this LP, one cannot devise a
Lagrangian-multiplier-preserving (LMP) algorithm with approximation guarantee
better than $4$. Our results thus show a separation between the integrality
gaps of the LP-relaxations for prize-collecting and non-prize-collecting (i.e.,
standard) Steiner forest, as well as the approximation ratios achievable
relative to the optimal LP solution by LMP- and non-LMP-approximation
algorithms for PCSF. For the special case of prize-collecting Steiner tree
(PCST), we prove that the natural LP relaxation admits basic feasible solutions
with all coordinates of value at most $1/3$ and all edge variables positive.
Thus, we rule out the possibility of approximating PCST with guarantee better
than $3$ using a direct iterative rounding method.
| 1 | 0 | 1 | 0 | 0 | 0 |
Estimating Average Treatment Effects: Supplementary Analyses and Remaining Challenges | There is a large literature on semiparametric estimation of average treatment
effects under unconfounded treatment assignment in settings with a fixed number
of covariates. More recently attention has focused on settings with a large
number of covariates. In this paper we extend lessons from the earlier
literature to this new setting. We propose that in addition to reporting point
estimates and standard errors, researchers report results from a number of
supplementary analyses to assist in assessing the credibility of their
estimates.
| 0 | 0 | 0 | 1 | 0 | 0 |
Markov cubature rules for polynomial processes | We study discretizations of polynomial processes using finite state Markov
processes satisfying suitable moment matching conditions. The states of these
Markov processes together with their transition probabilities can be
interpreted as Markov cubature rules. The polynomial property allows us to
study such rules using algebraic techniques. Markov cubature rules aid the
tractability of path-dependent tasks such as American option pricing in models
where the underlying factors are polynomial processes.
| 0 | 0 | 0 | 0 | 0 | 1 |
Statistical inference for high dimensional regression via Constrained Lasso | In this paper, we propose a new method for estimation and constructing
confidence intervals for low-dimensional components in a high-dimensional
model. The proposed estimator, called Constrained Lasso (CLasso) estimator, is
obtained by simultaneously solving two estimating equations---one imposing a
zero-bias constraint for the low-dimensional parameter and the other forming an
$\ell_1$-penalized procedure for the high-dimensional nuisance parameter. By
carefully choosing the zero-bias constraint, the resulting estimator of the low
dimensional parameter is shown to admit an asymptotically normal limit
attaining the Cramér-Rao lower bound in a semiparametric sense. We propose
a tuning-free iterative algorithm for implementing the CLasso. We show that
when the algorithm is initialized at the Lasso estimator, the de-sparsified
estimator proposed in van de Geer et al. [\emph{Ann. Statist.} {\bf 42} (2014)
1166--1202] is asymptotically equivalent to the first iterate of the algorithm.
We analyse the asymptotic properties of the CLasso estimator and show the
globally linear convergence of the algorithm. We also demonstrate encouraging
empirical performance of the CLasso through numerical studies.
| 0 | 0 | 1 | 1 | 0 | 0 |
VALES: I. The molecular gas content in star-forming dusty H-ATLAS galaxies up to z=0.35 | We present an extragalactic survey using observations from the Atacama Large
Millimeter/submillimeter Array (ALMA) to characterise galaxy populations up to
$z=0.35$: the Valparaíso ALMA Line Emission Survey (VALES). We use ALMA
Band-3 CO(1--0) observations to study the molecular gas content in a sample of
67 dusty normal star-forming galaxies selected from the $Herschel$
Astrophysical Terahertz Large Area Survey ($H$-ATLAS). We have spectrally
detected 49 galaxies at $>5\sigma$ significance and 12 others are seen at low
significance in stacked spectra. CO luminosities are in the range of
$(0.03-1.31)\times10^{10}$ K km s$^{-1}$ pc$^2$, equivalent to $\log({\rm
M_{gas}/M_{\odot}}) =8.9-10.9$ assuming an $\alpha_{\rm CO}$=4.6(K km s$^{-1}$
pc$^{2}$)$^{-1}$, which perfectly complements the parameter space previously
explored with local and high-z normal galaxies. We compute the optical to CO
size ratio for 21 galaxies resolved by ALMA at $\sim3.5''$ resolution (6.5
kpc), finding that the molecular gas is on average $\sim$ 0.6 times more
compact than the stellar component. We obtain a global Schmidt-Kennicutt
relation, given by $\log [\Sigma_{\rm SFR}/({\rm M_{\odot}
yr^{-1}kpc^{-2}})]=(1.26 \pm 0.02) \times \log [\Sigma_{\rm M_{H2}}/({\rm
M_{\odot}\,pc^{-2}})]-(3.6 \pm 0.2)$. We find a significant fraction of
galaxies lying at `intermediate efficiencies' between a long-standing mode of
star-formation activity and a starburst, especially at $\rm L_{IR}=10^{11-12}
L_{\odot}$. Combining our observations with data taken from the literature, we
propose that star formation efficiencies can be parameterised by $\log [{\rm
SFR/M_{H2}}]=0.19 \times {\rm (\log {L_{IR}}-11.45)}-8.26-0.41 \times
\arctan[-4.84 (\log {\rm L_{IR}}-11.45) ]$. Within the redshift range we
explore ($z<0.35$), we identify a rapid increase of the gas content as a
function of redshift.
| 0 | 1 | 0 | 0 | 0 | 0 |
Variational methods for degenerate Kirchhoff equations | For a degenerate autonomous Kirchhoff equation which is set on $\mathbb{R}^N$
and involves the Berestycki-Lions type nonlinearity, we cope with the cases
$N=2,3$ and $N\geq5$ by using mountain pass and symmetric mountain pass
approaches and by using Clark theorem respectively.
| 0 | 0 | 1 | 0 | 0 | 0 |
Chang'e 3 lunar mission and upper limit on stochastic background of gravitational wave around the 0.01 Hz band | The Doppler tracking data of the Chang'e 3 lunar mission is used to constrain
the stochastic background of gravitational wave in cosmology within the 1 mHz
to 0.05 Hz frequency band. Our result improves on the upper bound on the energy
density of the stochastic background of gravitational wave in the 0.02 Hz to
0.05 Hz band obtained by the Apollo missions, with the improvement reaching
almost one order of magnitude at around 0.05 Hz. Detailed noise analysis of the
Doppler tracking data is also presented, with the prospect that these noise
sources will be mitigated in future Chinese deep space missions. A feasibility
study is also undertaken to understand the scientific capability of the Chang'e
4 mission, due to be launched in 2018, in relation to the stochastic
gravitational wave background around 0.01 Hz. The study indicates that the
upper bound on the energy density may be further improved by another order of
magnitude from the Chang'e 3 mission, which will fill the gap in the frequency
band from 0.02 Hz to 0.1 Hz in the foreseeable future.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the complexity of range searching among curves | Modern tracking technology has made the collection of large numbers of
densely sampled trajectories of moving objects widely available. We consider a
fundamental problem encountered when analysing such data: Given $n$ polygonal
curves $S$ in $\mathbb{R}^d$, preprocess $S$ into a data structure that answers
queries with a query curve $q$ and radius $\rho$ for the curves of $S$ that
have Fréchet distance at most $\rho$ to $q$.
We initiate a comprehensive analysis of the space/query-time trade-off for
this data structuring problem. Our lower bounds imply that any data structure
in the pointer model that achieves $Q(n) + O(k)$ query time, where $k$ is
the output size, has to use roughly $\Omega\left((n/Q(n))^2\right)$ space in
the worst case, even if queries are mere points (for the discrete Fréchet
distance) or line segments (for the continuous Fréchet distance). More
importantly, we show that more complex queries and input curves lead to
additional logarithmic factors in the lower bound. Roughly speaking, the number
of logarithmic factors added is linear in the number of edges added to the
query and input curve complexity. This means that the space/query time
trade-off worsens by an exponential factor of input and query complexity. This
behaviour addresses an open question in the range searching literature: whether
it is possible to avoid the additional logarithmic factors in the space and
query time of a multilevel partition tree. We answer this question negatively.
On the positive side, we show that we can build data structures for the Fréchet
distance by using semialgebraic range searching. Our solution for the discrete
Fréchet distance is in line with the lower bound, as the number of levels in
the data structure is $O(t)$, where $t$ denotes the maximal number of vertices
of a curve. For the continuous Fréchet distance, the number of levels
increases to $O(t^2)$.
| 1 | 0 | 0 | 0 | 0 | 0 |
Topic Compositional Neural Language Model | We propose a Topic Compositional Neural Language Model (TCNLM), a novel
method designed to simultaneously capture both the global semantic meaning and
the local word ordering structure in a document. The TCNLM learns the global
semantic coherence of a document via a neural topic model, and the probability
of each learned latent topic is further used to build a Mixture-of-Experts
(MoE) language model, where each expert (corresponding to one topic) is a
recurrent neural network (RNN) that accounts for learning the local structure
of a word sequence. In order to train the MoE model efficiently, a matrix
factorization method is applied, by extending each weight matrix of the RNN to
be an ensemble of topic-dependent weight matrices. The degree to which each
member of the ensemble is used is tied to the document-dependent probability of
the corresponding topics. Experimental results on several corpora show that the
proposed approach outperforms both a pure RNN-based model and other
topic-guided language models. Further, our model yields sensible topics, and
also has the capacity to generate meaningful sentences conditioned on given
topics.
| 1 | 0 | 0 | 0 | 0 | 0 |
Transmission spectra and valley processing of graphene and carbon nanotube superlattices with inter-valley coupling | We numerically investigate the electronic transport properties of graphene
nanoribbons and carbon nanotubes with inter-valley coupling, e.g., in
$\sqrt{3}N \times \sqrt{3}N$ and $3N \times 3N$ superlattices. By taking the
$\sqrt{3} \times \sqrt{3}$ graphene superlattice as an example, we show that tailoring the bulk
graphene superlattice results in rich structural configurations of nanoribbons
and nanotubes. After studying the electronic characteristics of the
corresponding armchair and zigzag nanoribbon geometries, we find that the
linear bands of carbon nanotubes can lead to the Klein tunnelling-like
phenomenon, i.e., electrons propagate along tubes without backscattering even
in the presence of a barrier. Due to the coupling between K and K' valleys of
pristine graphene by $\sqrt{3} \times \sqrt{3}$ supercells, we propose a
valley-field-effect transistor based on the armchair carbon nanotube, where the
valley polarization of the current can be tuned by applying a gate voltage or
varying the length of the armchair carbon nanotubes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Parametric Adversarial Divergences are Good Task Losses for Generative Modeling | Generative modeling of high dimensional data like images is a notoriously
difficult and ill-defined problem. In particular, how to evaluate a learned
generative model is unclear. In this position paper, we argue that adversarial
learning, pioneered with generative adversarial networks (GANs), provides an
interesting framework to implicitly define more meaningful task losses for
generative modeling tasks, such as for generating "visually realistic" images.
We refer to those task losses as parametric adversarial divergences and we give
two main reasons why we think parametric divergences are good learning
objectives for generative modeling. Additionally, we unify the processes of
choosing a good structured loss (in structured prediction) and choosing a
discriminator architecture (in generative modeling) using statistical decision
theory; we are then able to formalize and quantify the intuition that "weaker"
losses are easier to learn from, in a specific setting. Finally, we propose two
new challenging tasks to evaluate parametric and nonparametric divergences: a
qualitative task of generating very high-resolution digits, and a quantitative
task of learning data that satisfies high-level algebraic constraints. We use
two common divergences to train a generator and show that the parametric
divergence outperforms the nonparametric divergence on both the qualitative and
the quantitative task.
| 1 | 0 | 0 | 1 | 0 | 0 |
Boundedness and homogeneous asymptotics for a fractional logistic Keller-Segel equations | In this paper we consider a $d$-dimensional ($d=1,2$) parabolic-elliptic
Keller-Segel equation with a logistic forcing and a fractional diffusion of
order $\alpha \in (0,2)$. We prove uniform in time boundedness of its solution
in the supercritical range $\alpha>d\left(1-c\right)$, where $c$ is an explicit
constant depending on parameters of our problem. Furthermore, we establish
sufficient conditions for $\|u(t)-u_\infty\|_{L^\infty}\rightarrow0$, where
$u_\infty\equiv 1$ is the only nontrivial homogeneous solution. Finally, we
provide a uniqueness result.
| 0 | 0 | 1 | 0 | 0 | 0 |
Explicit formulas, symmetry and symmetry breaking for Willmore surfaces of revolution | In this paper we prove explicit formulas for all Willmore surfaces of
revolution and demonstrate their use in the discussion of the associated
Dirichlet boundary value problems. It is shown by an explicit example that
symmetric Dirichlet boundary conditions do in general not entail the symmetry
of the surface. In addition we prove a symmetry result for a subclass of
Willmore surfaces satisfying symmetric Dirichlet boundary data.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dynamic Task Allocation for Crowdsourcing Settings | We consider the problem of optimal budget allocation for crowdsourcing
problems, allocating users to tasks to maximize our final confidence in the
crowdsourced answers. Such an optimized worker assignment method allows us to
boost the efficacy of any popular crowdsourcing estimation algorithm. We
consider a mutual information interpretation of the crowdsourcing problem,
which leads to a stochastic subset selection problem with a submodular
objective function. We present experimental simulation results which
demonstrate the effectiveness of our dynamic task allocation method for
achieving higher accuracy, possibly requiring fewer labels, as well as
improving upon a previous method which is sensitive to the proportion of users
to questions.
| 1 | 0 | 0 | 1 | 0 | 0 |
From optimal transport to generative modeling: the VEGAN cookbook | We study unsupervised generative modeling in terms of the optimal transport
(OT) problem between true (but unknown) data distribution $P_X$ and the latent
variable model distribution $P_G$. We show that the OT problem can be
equivalently written in terms of probabilistic encoders, which are constrained
to match the posterior and prior distributions over the latent space. When
relaxed, this constrained optimization problem leads to a penalized optimal
transport (POT) objective, which can be efficiently minimized using stochastic
gradient descent by sampling from $P_X$ and $P_G$. We show that POT for the
2-Wasserstein distance coincides with the objective heuristically employed in
adversarial auto-encoders (AAE) (Makhzani et al., 2016), which provides the
first theoretical justification for AAEs known to the authors. We also compare
POT to other popular techniques like variational auto-encoders (VAE) (Kingma
and Welling, 2014). Our theoretical results include (a) a better understanding
of the commonly observed blurriness of images generated by VAEs, and (b)
establishing duality between Wasserstein GAN (Arjovsky and Bottou, 2017) and
POT for the 1-Wasserstein distance.
| 0 | 0 | 0 | 1 | 0 | 0 |
The Gauss map of a free boundary minimal surface | In this paper, we study the Gauss map of a free boundary minimal surface. The
main theorem asserts that if components of the Gauss map are eigenfunctions of
the Jacobi-Steklov operator, then the surface must be rotationally symmetric.
| 0 | 0 | 1 | 0 | 0 | 0 |
Computational Experiments on $a^4+b^4+c^4+d^4=(a+b+c+d)^4$ | Computational approaches to finding non-trivial integer solutions of the
equation in the title are discussed. We summarize previous work and provide
several new solutions.
| 0 | 0 | 1 | 0 | 0 | 0 |
An information theoretic approach to the autoencoder | We present a variation of the Autoencoder (AE) that explicitly maximizes the
mutual information between the input data and the hidden representation. The
proposed model, the InfoMax Autoencoder (IMAE), by construction is able to
learn a robust representation and good prototypes of the data. IMAE is compared
both theoretically and computationally with state-of-the-art models:
the Denoising and Contractive Autoencoders in the one-hidden layer setting and
the Variational Autoencoder in the multi-layer case. Computational experiments
are performed with the MNIST and Fashion-MNIST datasets and demonstrate, in
particular, the strong clusterization performance of IMAE.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Certified-Complete Bimanual Manipulation Planner | Planning motions for two robot arms to move an object collaboratively is a
difficult problem, mainly because of the closed-chain constraint, which arises
whenever two robot hands simultaneously grasp a single rigid object. In this
paper, we propose a manipulation planning algorithm to bring an object from an
initial stable placement (position and orientation of the object on the support
surface) towards a goal stable placement. The key specificity of our algorithm
is that it is certified-complete: for a given object and a given environment,
we provide a certificate that the algorithm will find a solution to any
bimanual manipulation query in that environment whenever one exists. Moreover,
the certificate is constructive: at run-time, it can be used to quickly find a
solution to a given query. The algorithm is tested in software and hardware on
a number of large pieces of furniture.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Short Baseline Neutrino Oscillation Program at Fermilab | The Short-Baseline Neutrino (SBN) Program is a short-baseline neutrino
oscillation experiment in the Booster Neutrino Beam-line (BNB) at Fermilab. It
consists of three Liquid Argon Time Projection Chambers (LArTPCs) from the
Short-Baseline Near Detector (SBND), Micro Booster Neutrino Experiment
(MicroBooNE), and Imaging Cosmic And Rare Underground Signals (ICARUS)
experiments. The SBN Program will definitively search for short-baseline
neutrino oscillations in the 1 eV mass range, make precision neutrino-argon
interaction measurements, and further develop the LArTPC technology. The
physics program and the current status of the program and its constituent
experiments are presented.
| 0 | 1 | 0 | 0 | 0 | 0 |
Best Rank-One Tensor Approximation and Parallel Update Algorithm for CPD | A novel algorithm is proposed for CANDECOMP/PARAFAC tensor decomposition to
exploit best rank-1 tensor approximation. Different from the existing
algorithms, our algorithm updates rank-1 tensors simultaneously in parallel. In
order to achieve this, we develop new all-at-once algorithms for best rank-1
tensor approximation based on the Levenberg-Marquardt method and the rotational
update. We show that the LM algorithm has the same complexity of first-order
optimisation algorithms, while the rotational method leads to solving the best
rank-1 approximation of tensors of size $2 \times 2 \times \cdots \times 2$. We
derive a closed-form expression of the best rank-1 tensor of $2\times 2 \times
2$ tensors and present an ALS algorithm which updates 3 component at a time for
higher order tensors. The proposed algorithm is illustrated in decomposition of
difficult tensors which are associated with multiplication of two matrices.
| 1 | 0 | 0 | 0 | 0 | 0 |
Predictive Independence Testing, Predictive Conditional Independence Testing, and Predictive Graphical Modelling | Testing (conditional) independence of multivariate random variables is a task
central to statistical inference and modelling in general - though
unfortunately one for which to date there does not exist a practicable
workflow. State-of-the-art workflows suffer from the need for heuristic or
subjective manual choices, high computational complexity, or strong parametric
assumptions.
We address these problems by establishing a theoretical link between
multivariate/conditional independence testing, and model comparison in the
multivariate predictive modelling aka supervised learning task. This link
allows advances in the extensively studied supervised learning workflow to be
directly transferred to independence testing workflows - including automated
tuning of machine learning type which addresses the need for a heuristic
choice, the ability to quantitatively trade-off computational demand with
accuracy, and the modern black-box philosophy for checking and interfacing.
As a practical implementation of this link between the two workflows, we
present a python package 'pcit', which implements our novel multivariate and
conditional independence tests, interfacing the supervised learning API of the
scikit-learn package. Theory and package also allow for straightforward
independence test based learning of graphical model structure.
We empirically show that our proposed predictive independence tests outperform
or are on par with current practice, and the derived graphical model structure
learning algorithms asymptotically recover the 'true' graph. This paper, and
the 'pcit' package accompanying it, thus provide powerful, scalable,
generalizable, and easy-to-use methods for multivariate and conditional
independence testing, as well as for graphical model structure learning.
| 1 | 0 | 1 | 1 | 0 | 0 |
Sharp Minima Can Generalize For Deep Nets | Despite their overwhelming capacity to overfit, deep learning architectures
tend to generalize relatively well to unseen data, allowing them to be deployed
in practice. However, explaining why this is the case is still an open area of
research. One standing hypothesis that is gaining popularity, e.g. Hochreiter &
Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the
loss function found by stochastic gradient based methods results in good
generalization. This paper argues that most notions of flatness are problematic
for deep models and can not be directly applied to explain generalization.
Specifically, when focusing on deep networks with rectifier units, we can
exploit the particular geometry of parameter space induced by the inherent
symmetries that these architectures exhibit to build equivalent models
corresponding to arbitrarily sharper minima. Furthermore, if we allow a
function to be reparametrized, the geometry of its parameters can change drastically
without affecting its generalization properties.
| 1 | 0 | 0 | 0 | 0 | 0 |
Coarse-grained model of the J-integral of carbon nanotube reinforced polymer composites | The J-integral is recognized as a fundamental parameter in fracture mechanics
that characterizes the inherent resistance of materials to crack growth.
However, the conventional methods to calculate the J-integral, which require
knowledge of the exact position of a crack tip and the continuum fields around
it, are unable to precisely measure the J-integral of polymer composites at the
nanoscale. This work aims to propose an effective calculation method based on
coarse-grained (CG) simulations for predicting the J-integral of carbon
nanotube (CNT)/polymer composites. In the proposed approach, the J-integral is
determined from the load displacement curve of a single specimen. The
distinguishing feature of the method is the calculation of the J-integral
without the need for information about the crack tip, which makes it applicable to complex
polymer systems. The effects of the CNT weight fraction and covalent
cross-links between the polymer matrix and nanotubes, and polymer chains on the
fracture behavior of the composites are studied in detail. The dependence of
the J-integral on the crack length and the size of representative volume
element (RVE) is also explored.
| 0 | 1 | 0 | 0 | 0 | 0 |
On Estimating Multi-Attribute Choice Preferences using Private Signals and Matrix Factorization | Revealed preference theory studies the possibility of modeling an agent's
revealed preferences and the construction of a consistent utility function.
However, modeling an agent's choices over preference orderings is not always
practical and demands strong assumptions on human rationality and
data-acquisition abilities. Therefore, we propose a simple generative choice
model where agents are assumed to generate the choice probabilities based on
latent factor matrices that capture their choice evaluation across multiple
attributes. Since the multi-attribute evaluation is typically hidden within the
agent's psyche, we consider a signaling mechanism where agents are provided
with choice information through private signals, so that the agent's choices
provide more insight about his/her latent evaluation across multiple
attributes. We estimate the choice model via a novel multi-stage matrix
factorization algorithm that minimizes the average deviation of the factor
estimates from choice data. Simulation results are presented to validate the
estimation performance of our proposed algorithm.
| 0 | 0 | 0 | 1 | 0 | 0 |
Analysis of evolutionary origins of genomic loci harboring 59,732 candidate human-specific regulatory sequences identifies genetic divergence patterns during evolution of Great Apes | Our view of the universe of genomic regions harboring various types of
candidate human-specific regulatory sequences (HSRS) has been markedly expanded
in recent years. To infer the evolutionary origins of loci harboring HSRS,
analyses of conservation patterns of 59,732 loci in Modern Humans, Chimpanzee,
Bonobo, Gorilla, Orangutan, Gibbon, and Rhesus genomes have been performed. Two
major evolutionary pathways have been identified comprising thousands of
sequences that were either inherited from extinct common ancestors (ECAs) or
created de novo in humans after human/chimpanzee split. Thousands of HSRS
appear inherited from ECAs yet bypassed genomes of our closest evolutionary
relatives, presumably due to incomplete lineage sorting and/or
species-specific loss of regulatory DNA. The bypassing pattern is prominent for
HSRS associated with development and functions of human brain. Common genomic
loci that may have contributed to speciation during the evolution of Great Apes comprise
248 insertion sites of African Great Ape-specific retrovirus PtERV1 (45.9%; p
= 1.03E-44) intersecting regions harboring 442 HSRS, which are enriched for
HSRS associated with human-specific (HS) changes of gene expression in cerebral
organoids. Among non-human primates (NHP), most significant fractions of
candidate HSRS associated with HS expression changes in both excitatory neurons
(347 loci; 67%) and radial glia (683 loci; 72%) are highly conserved in Gorilla
genome. Modern Humans acquired unique combinations of regulatory sequences
highly conserved in distinct species of six NHP separated by 30 million years
of evolution. Concurrently, this unique mosaic of regulatory sequences
inherited from ECAs was supplemented with 12,486 created de novo HSRS. These
observations support the model of complex continuous speciation process during
evolution of Great Apes that is not likely to occur as an instantaneous event.
| 0 | 0 | 0 | 0 | 1 | 0 |
Verification of operational solar flare forecast: Case of Regional Warning Center Japan | In this article, we discuss a verification study of an operational solar
flare forecast in the Regional Warning Center (RWC) Japan. The RWC Japan has
been issuing four-categorical deterministic solar flare forecasts for a long
time. In this forecast verification study, we used solar flare forecast data
accumulated over 16 years (from 2000 to 2015). We compiled the forecast data
together with solar flare data obtained with the Geostationary Operational
Environmental Satellites (GOES). Using the compiled data sets, we estimated
some conventional scalar verification measures with 95% confidence intervals.
We also estimated a multi-categorical scalar verification measure. These scalar
verification measures were compared with those obtained by the persistence
method and recurrence method. As solar activity varied during the 16 years, we
also applied verification analyses to four subsets of forecast-observation pair
data with different solar activity levels. We cannot conclude definitively that
there is a significant performance difference between the forecasts of RWC Japan
and the persistence method, although a slightly significant difference is found
for some event definitions. We propose to use a scalar verification measure to
assess the judgment skill of the operational solar flare forecast. Finally, we
propose a verification strategy for deterministic operational solar flare
forecasting.
| 0 | 1 | 0 | 1 | 0 | 0 |
Conformational dynamics of a single protein monitored for 24 hours at video rate | We use plasmon rulers to follow the conformational dynamics of a single
protein for up to 24 h at a video rate. The plasmon ruler consists of two gold
nanospheres connected by a single protein linker. In our experiment, we follow
the dynamics of the molecular chaperone heat shock protein 90, which is known
to show open and closed conformations. Our measurements confirm the previously
known conformational dynamics with transition times in the second to minute
time scale and reveal new dynamics on the time scale of minutes to hours.
Plasmon rulers thus extend the observation bandwidth by three to four orders of magnitude
with respect to single-molecule fluorescence resonance energy transfer and
enable the study of molecular dynamics with unprecedented precision.
| 0 | 0 | 0 | 0 | 1 | 0 |
Dynamics of domain walls in weak ferromagnets | It is shown that the total set of equations, which determines the dynamics of
the domain boundaries (DB) in a weak ferromagnet, has the same type of specific
solution as the well-known Walker's solution for ferromagnets. We calculated
the functional dependence of the velocity of the DB on the magnetic field,
which is described by the obtained solution. This function has a maximum at a
finite field and a region of negative differential mobility of the DB.
According to the calculation, the maximum velocity $ c \approx 2 \times 10^6$
cm/sec in YFeO$_3$ is reached at $H_m \approx 4 \times 10^3$ Oe.
| 0 | 1 | 0 | 0 | 0 | 0 |
Generalized Short Circuit Ratio for Multi Power Electronic based Devices Infeed Systems: Definition and Theoretical Analysis | Short circuit ratio (SCR) is widely applied to analyze the strength of AC
system and the small signal stability of single power electronic based
devices infeed systems (SPEISs). However, there is still no theory of
short circuit ratio applicable to multi power electronic based devices infeed
systems (MPEIS), as the complex coupling among multiple power electronic
devices (PEDs) leads to difficulties in stability analysis. In this regard, this paper
first proposes a concept named generalized short circuit ratio (gSCR) to
measure the strength of the connected AC grid in a multi-infeed system from the
small signal stability point of view. Generally, the gSCR is physically and
mathematically extended from the conventional SCR by decomposing the multi-infeed
system into n independent single-infeed systems. Then the operation gSCR
(OgSCR) is proposed based on the gSCR in order to take the variation of the
operating point into consideration. The participation factors and sensitivity are
analyzed as well. Finally, simulations are conducted to demonstrate the
rationality and effectiveness of the defined gSCR and OgSCR.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Correction Method of a Binary Classifier Applied to Multi-label Pairwise Models | In this work, we addressed the issue of applying a stochastic classifier and
a local, fuzzy confusion matrix under the framework of multi-label
classification. We proposed a novel solution to the problem of correcting label
pairwise ensembles. The main step of the correction procedure is to compute
classifier-specific competence and cross-competence measures, which estimate
the error pattern of the underlying classifier. We considered two improvements of
the method of obtaining confusion matrices. The first aims to deal with
imbalanced labels. The other utilizes double-labelled instances which are
usually removed during the pairwise transformation. The proposed methods were
evaluated using 29 benchmark datasets. In order to assess the efficiency of the
introduced models, they were compared against one state-of-the-art approach and
the correction scheme based on the original method of confusion matrix
estimation. The comparison was performed using four different multi-label
evaluation measures: macro and micro-averaged F1 loss, zero-one loss and
Hamming loss. Additionally, we investigated relations between classification
quality, which is expressed in terms of different quality criteria, and
characteristics of multi-label datasets such as average imbalance ratio or
label density. The experimental study reveals that the correction approaches
significantly outperform the reference method only in terms of zero-one loss.
| 1 | 0 | 0 | 1 | 0 | 0 |
Augment your batch: better training with larger batches | Large-batch SGD is important for scaling training of deep neural networks.
However, without fine-tuning hyperparameter schedules, the generalization of
the model may be hampered. We propose to use batch augmentation: replicating
instances of samples within the same batch with different data augmentations.
Batch augmentation acts as a regularizer and an accelerator, increasing both
generalization and performance scaling. We analyze the effect of batch
augmentation on gradient variance and show that it empirically improves
convergence for a wide variety of deep neural networks and datasets. Our
results show that batch augmentation reduces the number of necessary SGD
updates to achieve the same accuracy as the state-of-the-art. Overall, this
simple yet effective method enables faster training and better generalization
by allowing more computational resources to be used concurrently.
| 1 | 0 | 0 | 1 | 0 | 0 |
Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion and Blind Deconvolution | Recent years have seen a flurry of activities in designing provably efficient
nonconvex procedures for solving statistical estimation problems. Due to the
highly nonconvex nature of the empirical loss, state-of-the-art procedures
often require proper regularization (e.g. trimming, regularized cost,
projection) in order to guarantee fast convergence. For vanilla procedures such
as gradient descent, however, prior theory either recommends highly
conservative learning rates to avoid overshooting, or completely lacks
performance guarantees.
This paper uncovers a striking phenomenon in nonconvex optimization: even in
the absence of explicit regularization, gradient descent enforces proper
regularization implicitly under various statistical models. In fact, gradient
descent follows a trajectory staying within a basin that enjoys nice geometry,
consisting of points incoherent with the sampling mechanism. This "implicit
regularization" feature allows gradient descent to proceed in a far more
aggressive fashion without overshooting, which in turn results in substantial
computational savings. Focusing on three fundamental statistical estimation
problems, i.e. phase retrieval, low-rank matrix completion, and blind
deconvolution, we establish that gradient descent achieves near-optimal
statistical and computational guarantees without explicit regularization. In
particular, by marrying statistical modeling with generic optimization theory,
we develop a general recipe for analyzing the trajectories of iterative
algorithms via a leave-one-out perturbation argument. As a byproduct, for noisy
matrix completion, we demonstrate that gradient descent achieves near-optimal
error control --- measured entrywise and by the spectral norm --- which might
be of independent interest.
| 1 | 0 | 1 | 1 | 0 | 0 |
The effect of surface tension on steadily translating bubbles in an unbounded Hele-Shaw cell | New numerical solutions to the so-called selection problem for one and two
steadily translating bubbles in an unbounded Hele-Shaw cell are presented. Our
approach relies on conformal mapping which, for the two-bubble problem,
involves the Schottky-Klein prime function associated with an annulus. We show
that a countably infinite number of solutions exist for each fixed value of
dimensionless surface tension, with the bubble shapes becoming more exotic as
the solution branch number increases. Our numerical results suggest that a
single solution is selected in the limit that surface tension vanishes, with
the scaling between the bubble velocity and surface tension being different
from that in the well-studied problems for a bubble or a finger propagating in a channel
geometry.
| 0 | 1 | 1 | 0 | 0 | 0 |
Core or cusps: The central dark matter profile of a redshift one strong lensing cluster with a bright central image | We report on SPT-CLJ2011-5228, a giant system of arcs created by a cluster at
$z=1.06$. The arc system is notable for the presence of a bright central image.
The source is a Lyman Break galaxy at $z_s=2.39$ and the mass enclosed within
the 14 arc second radius Einstein ring is $10^{14.2}$ solar masses. We perform
a full light profile reconstruction of the lensed images to precisely infer the
parameters of the mass distribution. The brightness of the central image
demands that the central total density profile of the lens be shallow. By
fitting the dark matter as a generalized Navarro-Frenk-White profile---with a
free parameter for the inner density slope---we find that the break radius is
$270^{+48}_{-76}$ kpc, and that the inner density falls with radius to the
power $-0.38\pm0.04$ at 68 percent confidence. Such a shallow profile is in
strong tension with our understanding of relaxed cold dark matter halos; dark
matter only simulations predict the inner density should fall as $r^{-1}$. The
tension can be alleviated if this cluster is in fact a merger; a two halo model
can also reconstruct the data, with both clumps (density going as $r^{-0.8}$
and $r^{-1.0}$) much more consistent with predictions from dark matter only
simulations. At the resolution of our Dark Energy Survey imaging, we are unable
to choose between these two models, but we make predictions for forthcoming
Hubble Space Telescope imaging that will decisively distinguish between them.
| 0 | 1 | 0 | 0 | 0 | 0 |
Bayesian Optimization with Automatic Prior Selection for Data-Efficient Direct Policy Search | One of the most interesting features of Bayesian optimization for direct
policy search is that it can leverage priors (e.g., from simulation or from
previous tasks) to accelerate learning on a robot. In this paper, we are
interested in situations for which several priors exist but we do not know in
advance which one best fits the current situation. We tackle this problem by
introducing a novel acquisition function, called Most Likely Expected
Improvement (MLEI), that combines the likelihood of the priors and the expected
improvement. We evaluate this new acquisition function on a transfer learning
task for a 5-DOF planar arm and on a possibly damaged, 6-legged robot that has
to learn to walk on flat ground and on stairs, with priors corresponding to
different stairs and different kinds of damages. Our results show that MLEI
effectively identifies and exploits the priors, even when there is no obvious
match between the current situations and the priors.
| 1 | 0 | 0 | 1 | 0 | 0 |
Coupled elliptic systems involving the square root of the Laplacian and Trudinger-Moser critical growth | In this paper we prove the existence of a nonnegative ground state solution
to the following class of coupled systems involving Schrödinger equations
with the square root of the Laplacian
$$
\left\{
\begin{array}{lr}
(-\Delta)^{1/2}u+V_{1}(x)u=f_{1}(u)+\lambda(x)v, & x\in\mathbb{R}, \\
(-\Delta)^{1/2}v+V_{2}(x)v=f_{2}(v)+\lambda(x)u, & x\in\mathbb{R},
\end{array}
\right.
$$
where the nonlinearities $f_{1}(s)$ and $f_{2}(s)$ have exponential critical
growth of the Trudinger-Moser type, the potentials $V_{1}(x)$ and $V_{2}(x)$
are nonnegative and periodic. Moreover, we assume that there exists $\delta\in
(0,1)$ such that $\lambda(x)\leq\delta\sqrt{V_{1}(x)V_{2}(x)}$. We are also
concerned with the existence of ground states when the potentials are
asymptotically periodic. Our approach is variational and based on a
minimization technique over the Nehari manifold.
| 0 | 0 | 1 | 0 | 0 | 0 |
Gravitational mass and energy gradient in the ultra-strong magnetic fields | The paper aims to apply the complex octonion to explore the influence of the
energy gradient on the Eotvos experiment, impacting the gravitational mass in
the ultra-strong magnetic fields. Until now, the Eotvos experiment has never
been validated under an ultra-strong magnetic field, which aggravates the
existing serious qualms about the experiment. According to the
electromagnetic and gravitational theory described with the complex octonions,
the ultra-strong magnetic field must result in a tiny variation of the
gravitational mass. The magnetic field with the gradient distribution will
generate the energy gradient. These factors will influence the state of
equilibrium in the Eotvos experiment. That is, the
gravitational mass will depart from the inertial mass to a certain extent, in
the ultra-strong magnetic fields. Only under exceptional circumstances,
especially in the case of the weak field strength, the gravitational mass may
be approximately equal to the inertial mass. The paper strongly appeals for
validation of the Eotvos experiment under ultra-strong electromagnetic field strengths.
It is predicted that the physical property of gravitational mass will be
distinct from that of inertial mass.
| 0 | 1 | 0 | 0 | 0 | 0 |
Retrofitting Distributional Embeddings to Knowledge Graphs with Functional Relations | Knowledge graphs are a versatile framework to encode richly structured data
relationships, but it can be challenging to combine these graphs with
unstructured data. Methods for retrofitting pre-trained entity representations
to the structure of a knowledge graph typically assume that entities are
embedded in a connected space and that relations imply similarity. However,
useful knowledge graphs often contain diverse entities and relations (with
potentially disjoint underlying corpora) which do not accord with these
assumptions. To overcome these limitations, we present Functional Retrofitting,
a framework that generalizes current retrofitting methods by explicitly
modeling pairwise relations. Our framework can directly incorporate a variety
of pairwise penalty functions previously developed for knowledge graph
completion. Further, it allows users to encode, learn, and extract information
about relation semantics. We present both linear and neural instantiations of
the framework. Functional Retrofitting significantly outperforms existing
retrofitting methods on complex knowledge graphs and loses no accuracy on
simpler graphs (in which relations do imply similarity). Finally, we
demonstrate the utility of the framework by predicting new drug--disease
treatment pairs in a large, complex health knowledge graph.
| 1 | 0 | 0 | 1 | 0 | 0 |
Privacy-Preserving Economic Dispatch in Competitive Electricity Market | With the emerging of smart grid techniques, cyber attackers may be able to
gain access to critical energy infrastructure data and strategic market
participants may be able to identify offer prices of their rivals. This paper
discusses a privacy-preserving economic dispatch approach in competitive
electricity market, in which individual generation companies (GENCOs) and load
serving entities (LSEs) can mask their actual bidding information and physical
data by multiplying with random numbers before submitting to Independent System
Operators (ISOs) and Regional Transmission Owners (RTOs). This would avoid
potential information leakage of critical energy infrastructure and financial
data of market participants. The optimal solution to the original ED problem,
including optimal dispatches of generators and loads and locational marginal
prices (LMPs), can be retrieved from the optimal solution of the proposed
privacy-preserving ED approach. Numerical case studies show the effectiveness
of the proposed approach for protecting private information of individual
market participants while guaranteeing the same optimal ED solution.
Computation and communication costs of the proposed privacy-preserving ED
approach and the original ED are also compared in case studies.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the estimation of the current density in space plasmas: multi versus single-point techniques | Thanks to multi-spacecraft missions, it has recently been possible to directly
estimate the current density in space plasmas by using magnetic field time
series from four satellites flying in a quasi-perfect tetrahedron
configuration. The technique developed, commonly called the 'curlometer', permits a
good estimation of the current density when the magnetic field time series vary
linearly in space. This approximation is generally valid for small spacecraft
separation. The recent space missions Cluster and Magnetospheric Multiscale
(MMS) have provided high resolution measurements with inter-spacecraft
separation up to 100 km and 10 km, respectively. The former scale corresponds
to the proton gyroradius/ion skin depth in 'typical' solar wind conditions,
while the latter to sub-proton scale. However, some works have highlighted an
underestimation of the current density via the curlometer technique with
respect to the current computed directly from the velocity distribution
functions, measured at sub-proton scales resolution with MMS. In this paper we
explore the limits of the curlometer technique by studying synthetic data sets
associated with a cluster of four artificial satellites allowed to fly in a
static turbulent field, spanning a wide range of relative separation. This
study tries to address the relative importance of measuring plasma moments at
very high resolution from a single spacecraft with respect to the
multi-spacecraft missions in the current density evaluation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Homological vanishing for the Steinberg representation | For a field $k$, we prove that the $i$th homology of the groups $GL_n(k)$,
$SL_n(k)$, $Sp_{2n}(k)$, $SO_{n,n}(k)$, and $SO_{n,n+1}(k)$ with coefficients
in their Steinberg representations vanishes for $n \geq 2i+2$.
| 0 | 0 | 1 | 0 | 0 | 0 |
A tale of seven narrow spikes and a long trough: constraining the timing of the percolation of HII bubbles at the tail-end of reionization with ULAS J1120+0641 | High signal-to-noise observations of the Ly$\alpha$ forest transmissivity in
the z = 7.085 QSO ULAS J1120+0641 show seven narrow transmission spikes
followed by a long 240 cMpc/h trough. Here we use radiative transfer
simulations of cosmic reionization previously calibrated to match a wider range
of Ly$\alpha$ forest data to show that the occurrence of seven transmission
spikes in the narrow redshift range z = 5.85 - 6.1 is very sensitive to the
exact timing of reionization. The occurrence of the spikes requires the most underdense regions of the IGM to be already fully ionized. The rapid onset of a long trough at z = 6.12 requires a strong decrease of the photo-ionization rate at z$\sim$6.1 along this line of sight, consistent with the end of percolation at
this redshift. The narrow range of reionization histories that we previously
found to be consistent with a wider range of Ly$\alpha$ forest data have a
reasonable probability of showing seven spikes and the mock absorption spectra
provide an excellent match to the spikes and the trough in the observed
spectrum of ULAS J1120+0641. Despite the large overall opacity of Ly$\alpha$ at
z > 5.8, larger samples of high signal-to-noise observations of rare
transmission spikes should therefore provide important further insights into
the exact timing of the percolation of HII bubbles at the tail-end of
reionization.
| 0 | 1 | 0 | 0 | 0 | 0 |
Quantum spin fluctuations in the bulk insulating state of pure and Fe-doped SmB6 | The intermediate-valence compound SmB6 is a well-known Kondo insulator, in
which hybridization of itinerant 5d electrons with localized 4f electrons leads
to a transition from metallic to insulating behavior at low temperatures.
Recent studies suggest that SmB6 is a topological insulator, with topological
metallic surface states emerging from a fully insulating hybridized bulk band
structure. Here we locally probe the bulk magnetic properties of pure and 0.5 %
Fe-doped SmB6 by muon spin rotation/relaxation methods. Below 6 K the Fe
impurity induces simultaneous changes in the bulk local magnetism and the
electrical conductivity. In the low-temperature insulating bulk state we
observe a temperature-independent dynamic relaxation rate indicative of
low-lying magnetic excitations driven primarily by quantum fluctuations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Device-Aware Routing and Scheduling in Multi-Hop Device-to-Device Networks | The dramatic increase in data and connectivity demand, in addition to
heterogeneous device capabilities, poses a challenge for future wireless
networks. One promising solution is Device-to-Device (D2D) networking, which connects two or more devices directly without traversing the core network. In this paper, we consider D2D networks, where
devices with heterogeneous capabilities including computing power, energy
limitations, and incentives participate in D2D activities heterogeneously. We
develop (i) a device-aware routing and scheduling algorithm (DARS) by taking
into account device capabilities, and (ii) a multi-hop D2D testbed using
Android-based smartphones and tablets by exploiting Wi-Fi Direct and legacy
Wi-Fi connections. We show that DARS significantly improves throughput in our
testbed compared to the state of the art.
| 1 | 0 | 0 | 0 | 0 | 0 |
Lazily Adapted Constant Kinky Inference for Nonparametric Regression and Model-Reference Adaptive Control | Techniques known as Nonlinear Set Membership prediction, Lipschitz
Interpolation or Kinky Inference are approaches to machine learning that
utilise presupposed Lipschitz properties to compute inferences over unobserved
function values. Provided a bound on the true best Lipschitz constant of the
target function is known a priori they offer convergence guarantees as well as
bounds around the predictions. Considering a more general setting that builds
on Hoelder continuity relative to pseudo-metrics, we propose a method for estimating the Hoelder constant online from function value observations that are possibly corrupted by bounded observational errors. Utilising this to
compute adaptive parameters within a kinky inference rule gives rise to a
nonparametric machine learning method, for which we establish strong universal
approximation guarantees. That is, we show that our prediction rule can learn
any continuous function in the limit of increasingly dense data to within a
worst-case error bound that depends on the level of observational uncertainty.
We apply our method in the context of nonparametric model-reference adaptive
control (MRAC). Across a range of simulated aircraft roll-dynamics and
performance metrics our approach outperforms recently proposed alternatives
based on Gaussian processes and RBF neural networks. For
discrete-time systems, we provide guarantees on the tracking success of our
learning-based controllers both for the batch and the online learning setting.
| 1 | 0 | 1 | 1 | 0 | 0 |
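The core prediction rule behind Lipschitz interpolation / kinky inference can be sketched in a few lines (an illustrative noise-free, Euclidean, Lipschitz special case, not the paper's general Hoelder/pseudo-metric method): the constant is estimated from the steepest observed slope, and the prediction is the midpoint of the tightest upper and lower Lipschitz envelopes.

```python
import numpy as np

def estimate_lipschitz(X, y):
    """Estimate the Lipschitz constant from data as the largest
    observed slope between sample pairs (noise-free case)."""
    L = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = np.linalg.norm(X[i] - X[j])
            if d > 0:
                L = max(L, abs(y[i] - y[j]) / d)
    return L

def kinky_predict(x, X, y, L):
    """Kinky inference / Lipschitz interpolation: the midpoint of the
    tightest upper and lower Lipschitz envelopes at the query x."""
    d = np.linalg.norm(X - x, axis=1)
    upper = np.min(y + L * d)   # ceiling envelope
    lower = np.max(y - L * d)   # floor envelope
    return 0.5 * (upper + lower)
```

With a valid constant, the true function value always lies between the two envelopes, so the envelope half-width at the query point gives the kind of worst-case error bound the abstract refers to.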
Information Pursuit: A Bayesian Framework for Sequential Scene Parsing | Despite enormous progress in object detection and classification, the problem
of incorporating expected contextual relationships among object instances into
modern recognition systems remains a key challenge. In this work we propose
Information Pursuit, a Bayesian framework for scene parsing that combines prior
models for the geometry of the scene and the spatial arrangement of object
instances with a data model for the output of high-level image classifiers
trained to answer specific questions about the scene. In the proposed
framework, the scene interpretation is progressively refined as evidence
accumulates from the answers to a sequence of questions. At each step, we
choose the question to maximize the mutual information between the new answer
and the full interpretation given the current evidence obtained from previous
inquiries. We also propose a method for learning the parameters of the model
from synthesized, annotated scenes obtained by top-down sampling from an
easy-to-learn generative scene model. Finally, we introduce a database of
annotated indoor scenes of dining room tables, which we use to evaluate the
proposed approach.
| 1 | 0 | 0 | 1 | 0 | 0 |
Decentralized Clustering based on Robust Estimation and Hypothesis Testing | This paper considers a network of sensors without fusion center that may be
difficult to set up in applications involving sensors embedded on autonomous
drones or robots. In this context, this paper considers that the sensors must
perform a given clustering task in a fully decentralized setup. Standard
clustering algorithms usually need to know the number of clusters and are very
sensitive to initialization, which makes them difficult to use in a fully
decentralized setup. In this respect, this paper proposes a decentralized
model-based clustering algorithm that overcomes these issues. The proposed
algorithm is based on a novel theoretical framework that relies on hypothesis
testing and robust M-estimation. More particularly, the problem of deciding
whether two data points belong to the same cluster can be optimally solved via Wald's
hypothesis test on the mean of a Gaussian random vector. The p-value of this
test makes it possible to define a new type of score function, particularly
suitable for devising an M-estimation of the centroids. The resulting
decentralized algorithm efficiently performs clustering without prior knowledge
of the number of clusters. It also turns out to be less sensitive to initialization than existing clustering algorithms, which makes it
appropriate for use in a network of sensors without fusion center.
| 0 | 0 | 1 | 1 | 0 | 0 |
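The pairwise decision at the heart of the method can be sketched as follows (a simplified illustration assuming a known common covariance; the decentralized protocol and the M-estimation of centroids described above are omitted):

```python
import numpy as np
from scipy.stats import chi2

def wald_same_cluster(x, y, cov, alpha=0.05):
    """Wald test for whether two Gaussian observations share a mean.

    Under H0 (same cluster), x - y ~ N(0, 2*cov), so the statistic
    (x - y)^T (2*cov)^{-1} (x - y) follows a chi-square distribution
    with d degrees of freedom. Returns (p_value, same_cluster).
    """
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    stat = diff @ np.linalg.solve(2.0 * np.asarray(cov, dtype=float), diff)
    p = chi2.sf(stat, df=len(diff))       # survival function = 1 - CDF
    return p, p > alpha
```

The p-value returned here is what can be turned into the score function mentioned in the abstract, rewarding pairs that are plausibly drawn from the same cluster.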
Modeling open nanophotonic systems using the Fourier modal method: Generalization to 3D Cartesian coordinates | Recently, an open geometry Fourier modal method based on a new combination of
an open boundary condition and a non-uniform $k$-space discretization was
introduced for rotationally symmetric structures, providing a more efficient
approach for modeling nanowires and micropillar cavities [J. Opt. Soc. Am. A
33, 1298 (2016)]. Here, we generalize the approach to three-dimensional (3D)
Cartesian coordinates allowing for the modeling of rectangular geometries in
open space. The open boundary condition is a consequence of having an infinite
computational domain described using basis functions that span the whole
space. The strength of the method lies in discretizing the Fourier integrals
using a non-uniform circular "dartboard" sampling of the Fourier $k$ space. We
show that our sampling technique leads to a more accurate description of the
continuum of the radiation modes that leak out from the structure. We also
compare our approach to conventional discretization with direct and inverse
factorization rules commonly used in established Fourier modal methods. We
apply our method to a variety of optical waveguide structures and demonstrate
that the method leads to a significantly improved convergence enabling more
accurate and efficient modeling of open 3D nanophotonic structures.
| 0 | 1 | 0 | 0 | 0 | 0 |
Microscopic Conductivity of Lattice Fermions at Equilibrium - Part II: Interacting Particles | We apply Lieb-Robinson bounds for multi-commutators we recently derived to
study the (possibly non-linear) response of interacting fermions at thermal
equilibrium to perturbations of the external electromagnetic field. This
analysis leads to an extension of the results for quasi-free fermions of
\cite{OhmI,OhmII} to fermion systems on the lattice with short-range
interactions. More precisely, we investigate entropy production and charge
transport properties of non-autonomous $C^{\ast }$-dynamical systems associated
with interacting lattice fermions in bounded static potentials and in the presence of a time- and space-dependent electric field. We verify the first law of thermodynamics for the heat production of the system under consideration. In linear response theory, the latter is related to Ohm's and Joule's laws. These laws are proven here to hold at the microscopic scale,
uniformly with respect to the size of the (microscopic) region where the
electric field is applied. An important outcome is the extension of the notion
of conductivity measures to interacting fermions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Revealing the Unseen: How to Expose Cloud Usage While Protecting User Privacy | Cloud users have little visibility into the performance characteristics and
utilization of the physical machines underpinning the virtualized cloud
resources they use. This uncertainty forces users and researchers to reverse
engineer the inner workings of cloud systems in order to understand and optimize the conditions under which their applications operate. At the Massachusetts Open Cloud (MOC), as a public cloud operator, we would like to expose the utilization of our physical infrastructure to stop this wasteful effort. Mindful that such
exposure can be used maliciously to gain insight into other users' workloads, in this position paper we argue for an approach that
balances openness of the cloud overall with privacy for each tenant inside of
it. We believe that this approach can be instantiated via a novel combination
of several security and privacy technologies. We discuss the potential benefits and implications of transparency for cloud systems and users, as well as the technical challenges and possibilities.
| 1 | 0 | 0 | 0 | 0 | 0 |
An hp-adaptive strategy for elliptic problems | In this paper a new hp-adaptive strategy for elliptic problems based on
refinement history is proposed, which chooses h-, p- or hp-refinement on
individual elements according to an a posteriori error estimate, as well as a smoothness estimate of the solution obtained by comparing the actual and expected error reduction rates. Numerical experiments show that exponential
convergence can be achieved with this strategy.
| 1 | 0 | 1 | 0 | 0 | 0 |
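The history-based decision can be caricatured as follows (a deliberately simplified sketch; the threshold and the exact smoothness estimate are illustrative assumptions, not the paper's algorithm): if the previous refinement reduced the element error at least as fast as expected for a smooth solution, the solution is estimated to be smooth and p-refinement is preferred; otherwise h-refinement is chosen.

```python
def choose_refinement(err_before, err_after, expected_reduction):
    """Pick p- or h-refinement for one element from its refinement
    history. `expected_reduction` is the error-reduction factor a
    smooth solution should achieve (illustrative placeholder for an
    a priori convergence rate). Returns 'p' where the solution looks
    smooth, so raising the polynomial degree can give exponential
    convergence, and 'h' otherwise.
    """
    achieved_reduction = err_after / err_before
    return "p" if achieved_reduction <= expected_reduction else "h"
```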
High Radiation Pressure on Interstellar Dust Computed by Light-Scattering Simulation on Fluffy Agglomerates of Magnesium-silicate Grains with Metallic-iron Inclusions | Recent space missions have provided information on the physical and chemical
properties of interstellar grains such as the ratio $\beta$ of radiation
pressure to gravity acting on the grains in addition to the composition,
structure, and size distribution of the grains. Numerical simulation on the
trajectories of interstellar grains captured by Stardust and returned to Earth
constrained the $\beta$ ratio for the Stardust samples of interstellar origin.
However, recent accurate calculations of radiation pressure cross sections for
model dust grains have given conflicting results for the $\beta$ ratio of
interstellar grains. The $\beta$ ratio for model dust grains of so-called
"astronomical silicate" in the femto-kilogram range lies below unity, in
conflict with $\beta \sim 1$ for the Stardust interstellar grains. Here, I
tackle this conundrum by re-evaluating the $\beta$ ratio of interstellar grains
on the assumption that the grains are aggregated particles grown by coagulation
and composed of amorphous MgSiO$_{3}$ with the inclusion of metallic iron. My
model is entirely consistent with the depletion and the correlation of major
rock-forming elements in the Local Interstellar Cloud surrounding the Sun and
the mineralogical identification of interstellar grains in the Stardust and
Cassini missions. I find that my model dust particles fulfill the constraints
on the $\beta$ ratio derived from not only the Stardust mission but also the
Ulysses and Cassini missions. My results suggest that iron is not incorporated
into silicates but exists as metal, contrary to the majority of interstellar
dust models available to date.
| 0 | 1 | 0 | 0 | 0 | 0 |
Unexpected Robustness of the Band Gaps of TiO2 under High Pressures | Titanium dioxide (TiO2) is a wide band gap semiconducting material which is
promising for photocatalysis. Here we present first-principles calculations to
study the pressure dependence of structural and electronic properties of two
TiO2 phases: the cotunnite-type and the Fe2P-type structure. The band gaps are
calculated using density functional theory (DFT) with the generalized gradient
approximation (GGA), as well as the many-body perturbation theory with the GW
approximation. The band gaps of both phases are found to be unexpectedly robust
across a broad range of pressures. The corresponding pressure coefficients are
significantly smaller than that of diamond and silicon carbide (SiC), whose
pressure coefficient is the smallest value ever measured by experiment. The
robustness originates from the synchronous change of valence band maximum (VBM)
and conduction band minimum (CBM) with nearly identical rates of change. A
step-like jump of band gaps around the phase transition pressure point is
expected and understood in light of the difference in crystal structures.
| 0 | 1 | 0 | 0 | 0 | 0 |
Geometric GAN | Generative Adversarial Nets (GANs) represent an important milestone for
effective generative models, which has inspired numerous variants seemingly
different from each other. One of the main contributions of this paper is to
reveal a unified geometric structure in GAN and its variants. Specifically, we
show that the adversarial generative model training can be decomposed into
three geometric steps: separating hyperplane search, discriminator parameter
update away from the separating hyperplane, and the generator update along the
normal vector direction of the separating hyperplane. This geometric intuition
reveals the limitations of the existing approaches and leads us to propose a
new formulation called geometric GAN using SVM separating hyperplane that
maximizes the margin. Our theoretical analysis shows that the geometric GAN
converges to a Nash equilibrium between the discriminator and generator. In
addition, extensive numerical results demonstrate the superior performance of geometric GAN.
| 1 | 1 | 0 | 1 | 0 | 0 |
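The three geometric steps map naturally onto SVM-style hinge losses; a minimal sketch (the margin-maximizing objectives only, omitting the hyperplane search and the actual parameter updates):

```python
import numpy as np

def discriminator_hinge_loss(scores_real, scores_fake):
    """Soft-margin objective: push real scores above +1 and fake
    scores below -1, i.e. update the discriminator away from the
    separating hyperplane on the correct side of the margin."""
    return (np.mean(np.maximum(0.0, 1.0 - scores_real))
            + np.mean(np.maximum(0.0, 1.0 + scores_fake)))

def generator_hinge_loss(scores_fake):
    """Move generated samples along the normal direction of the
    separating hyperplane, i.e. raise their discriminator scores."""
    return -np.mean(scores_fake)
```

The loss vanishes for the discriminator once all real and fake scores sit outside the unit margin, which is exactly the geometric picture of a maximum-margin separating hyperplane.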
A Searchable Symmetric Encryption Scheme using BlockChain | At present, the cloud storage used in searchable symmetric encryption schemes
(SSE) is provided in a private way, which cannot be seen as a true cloud.
Moreover, the cloud server is assumed to be credible in that it always returns search results to the user, even if they are not correct. In order to resist this malicious adversary and accelerate the usage of the data, it is
necessary to store the data on a public chain, which can be seen as a
decentralized system. As the amount of data increases, the search problem becomes more and more intractable, since no effective solution exists at present.
In this paper, we begin by pointing out the importance of storing the data in
a public chain. We then innovatively construct a model of SSE using
blockchain (SSE-using-BC) and give its security definition to ensure the privacy
of the data and improve the search efficiency. According to the size of data,
we consider two different cases and propose two corresponding schemes. Lastly,
the security and performance analyses show that our scheme is feasible and
secure.
| 1 | 0 | 0 | 0 | 0 | 0 |
Operando imaging of all-electric spin texture manipulation in ferroelectric and multiferroic Rashba semiconductors | The control of the electron spin by external means is a key issue for
spintronic devices. Using spin- and angle-resolved photoemission spectroscopy
(SARPES) with three-dimensional spin detection, we demonstrate operando
electrostatic spin manipulation in ferroelectric GeTe and multiferroic
Ge1-xMnxTe. We not only demonstrate for the first time electrostatic spin
manipulation in Rashba semiconductors due to ferroelectric polarization
reversal, but are also able to follow the switching pathway in detail, and show
a gain of the Rashba-splitting strength under external fields. In multiferroic
Ge1-xMnxTe operando SARPES reveals switching of the perpendicular spin
component due to electric field induced magnetization reversal. This provides
firm evidence of effective multiferroic coupling which opens up magnetoelectric
functionality with a multitude of spin-switching paths in which the magnetic
and electric order parameters are coupled through ferroelastic relaxation
paths. This work thus provides a new type of magnetoelectric switching
entangled with Rashba-Zeeman splitting in a multiferroic system.
| 0 | 1 | 0 | 0 | 0 | 0 |
Experimental Evidence for Selection Rules in Multiphoton Double Ionization of Helium | We report on the observation of phase space modulations in the correlated
electron emission after strong field double ionization of helium using laser
pulses with a wavelength of 394~nm and an intensity of $3\cdot10^{14}$W/cm$^2$.
Those modulations are identified as direct results of quantum mechanical
selection rules predicted by many theoretical calculations. They only occur for
an odd number of absorbed photons. We therefore attribute this effect to the parity of the continuum wave function.
| 0 | 1 | 0 | 0 | 0 | 0 |