title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
What is the definition of two meromorphic functions sharing a small function? | The sharing of a small function $\alpha(z)$ by two meromorphic functions $f(z)$ and $g(z)$ is usually defined in terms of the vanishing of the functions $f-\alpha$ and $g-\alpha$. We argue that it would be better to modify this definition at the points where $\alpha$ has poles. Related to this issue, we also point out some possible gaps in proofs in the published literature.
| 0 | 0 | 1 | 0 | 0 | 0 |
Nb3Sn wire shape and cross sectional area inhomogeneity in Rutherford cables | During Rutherford cable production the wires are plastically deformed and
their initially round shape is distorted. Using X-ray absorption tomography we
have determined the 3D shape of an unreacted Nb3Sn 11 T dipole Rutherford
cable, and of a reacted and impregnated Nb3Sn cable double stack.
State-of-the-art image processing was applied to correct for tomographic
artefacts caused by the large cable aspect ratio, for the segmentation of the
individual wires and subelement bundles inside the wires, and for the
calculation of the wire cross sectional area and shape variations. The 11 T
dipole cable cross section oscillates by 2% with a period of 1.24 mm (1/80
of the transposition pitch length of the 40 wire cable). A comparatively
stronger cross sectional area variation is observed in the individual wires at
the thin edge of the keystoned cable where the wire aspect ratio is largest.
| 0 | 1 | 0 | 0 | 0 | 0 |
Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard $\Lambda\mathrm{CDM}$ + Halo Model | Interpreting the small-scale clustering of galaxies with halo models can
elucidate the connection between galaxies and dark matter halos. Unfortunately,
the modelling is typically not sufficiently accurate for ruling out models
statistically. It is thus difficult to use the information encoded in small
scales to test cosmological models or probe subtle features of the galaxy-halo
connection. In this paper, we attempt to push halo modelling into the
"accurate" regime with a fully numerical mock-based methodology and careful
treatment of statistical and systematic errors. With our forward-modelling
approach, we can incorporate clustering statistics beyond the traditional
two-point statistics. We use this modelling methodology to test the standard
$\Lambda\mathrm{CDM}$ + halo model against the clustering of SDSS DR7 galaxies.
Specifically, we use the projected correlation function, group multiplicity
function and galaxy number density as constraints. We find that while the model
fits each statistic separately, it struggles to fit them simultaneously. Adding
group statistics leads to a more stringent test of the model and significantly
tighter constraints on model parameters. We explore the impact of varying the
adopted halo definition and cosmological model and find that changing the
cosmology makes a significant difference. The most successful model we tried
(Planck cosmology with Mvir halos) matches the clustering of low luminosity
galaxies, but exhibits a 2.3$\sigma$ tension with the clustering of luminous
galaxies, thus providing evidence that the "standard" halo model needs to be
extended. This work opens the door to adding interesting freedom to the halo
model and including additional clustering statistics as constraints.
| 0 | 1 | 0 | 0 | 0 | 0 |
Graphene nanoplatelets induced tailoring in photocatalytic activity and antibacterial characteristics of MgO/graphene nanoplatelets nanocomposites | The synthesis, physical, photocatalytic, and antibacterial properties of MgO
and graphene nanoplatelets (GNPs) nanocomposites are reported. The
crystallinity, phase, morphology, chemical bonding, and vibrational modes of
prepared nanomaterials are studied. The conductive nature of GNPs is exploited to tailor the photocatalytic activity and to enhance the antibacterial activity. It is interestingly observed that the MgO/GNPs nanocomposite with optimized GNPs content shows significant photocatalytic activity (97.23% degradation) compared to bare MgO (43%), which makes it a potential photocatalyst for the purification of industrial waste water. In addition, the effect of an increased amount of GNPs on the antibacterial performance of the nanocomposites against pathogenic micro-organisms is investigated, suggesting that they are toxic to these organisms. The MgO/GNPs 25% nanocomposite may have potential applications in waste water treatment and nanomedicine due to its multifunctionality.
| 0 | 1 | 0 | 0 | 0 | 0 |
Lower Bounds for Searching Robots, some Faulty | Suppose we are sending out $k$ robots from $0$ to search the real line at
constant speed (with turns) to find a target at an unknown location; $f$ of the
robots are faulty, meaning that they fail to report the target although
visiting its location (called crash type). The goal is to find the target in
time at most $\lambda |d|$, if the target is located at $d$, $|d| \ge 1$, for
$\lambda$ as small as possible. We show that this cannot be achieved for
$$\lambda < 2\frac{\rho^\rho}{(\rho-1)^{\rho-1}}+1,~~ \rho :=
\frac{2(f+1)}{k}~, $$ which is tight due to earlier work (see J. Czyzowicz, E.
Kranakis, D. Krizanc, L. Narayanan, J. Opatrny, PODC'16, where this problem was
introduced). This also gives some better than previously known lower bounds for
so-called Byzantine-type faulty robots that may actually wrongly report a
target.
In the second part of the paper, we deal with the $m$-rays generalization of
the problem, where the hidden target is to be detected on $m$ rays all
emanating at the same point. Using a generalization of our methods, along with
a useful relaxation of the original problem, we establish a tight lower bound for
this setting as well (as above, with $\rho := m(f+1)/k$). When specialized to
the case $f=0$, this resolves the question on parallel search on $m$ rays,
posed by three groups of scientists some 15 to 30 years ago: by Baeza-Yates,
Culberson, and Rawlins; by Kao, Ma, Sipser, and Yin; and by Bernstein,
Finkelstein, and Zilberstein. The $m$-rays generalization is known to have
connections to other, seemingly unrelated, problems, including hybrid
algorithms for on-line problems, and so-called contract algorithms.
| 1 | 0 | 0 | 0 | 0 | 0 |
An efficient relativistic density-matrix renormalization group implementation in a matrix-product formulation | We present an implementation of the relativistic quantum-chemical density
matrix renormalization group (DMRG) approach based on a matrix-product
formalism. Our approach allows us to optimize matrix product state (MPS) wave
functions including a variational description of scalar-relativistic effects
and spin-orbit coupling from which we can calculate, for example, first-order
electric and magnetic properties in a relativistic framework. While
complementing our pilot implementation (S. Knecht et al., J. Chem. Phys., 140,
041101 (2014)) this work exploits all features provided by its underlying
non-relativistic DMRG implementation based on a matrix product state and
operator formalism. We illustrate the capabilities of our relativistic DMRG
approach by studying the ground-state magnetization as well as current density
of a paramagnetic $f^9$ dysprosium complex as a function of the active orbital
space employed in the MPS wave function optimization.
| 0 | 1 | 0 | 0 | 0 | 0 |
Cyclic Hypergraph Degree Sequences | The problem of efficiently characterizing degree sequences of simple
hypergraphs is a fundamental long-standing open problem in Graph Theory.
Several results are known for restricted versions of this problem. This paper
adds to the list of sufficient conditions for a degree sequence to be {\em
hypergraphic}. This paper proves a combinatorial lemma about cyclically
permuting the columns of a binary table with length $n$ binary sequences as
rows. We prove that for any set of cyclic permutations acting on its columns,
the resulting table has all of its $2^n$ rows distinct. Using this property, we
first define a subset of hypergraphic sequences, the {\em cyclic hyper degrees}, and
show that they admit a polynomial time recognition algorithm. Next, we prove
that there are at least $2^{\frac{(n-1)(n-2)}{2}}$ {\em cyclic hyper degrees},
which also serves as a lower bound on the number of {\em hypergraphic}
sequences. The {\em cyclic hyper degrees} also enjoy a structural characterization: they are the integral points contained in the union of some $n$-dimensional rectangles.
| 1 | 0 | 0 | 0 | 0 | 0 |
First and Second Order Methods for Online Convolutional Dictionary Learning | Convolutional sparse representations are a form of sparse representation with
a structured, translation invariant dictionary. Most convolutional dictionary
learning algorithms to date operate in batch mode, requiring simultaneous
access to all training images during the learning process, which results in
very high memory usage and severely limits the training data that can be used.
Very recently, however, a number of authors have considered the design of
online convolutional dictionary learning algorithms that offer far better
scaling of memory and computational cost with training set size than batch
methods. This paper extends our prior work, improving a number of aspects of our previous algorithm; proposing an entirely new algorithm, with better performance, that supports the inclusion of a spatial mask for learning from incomplete data; and providing a rigorous theoretical analysis of these methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
DPCA: Dimensionality Reduction for Discriminative Analytics of Multiple Large-Scale Datasets | Principal component analysis (PCA) has well-documented merits for data
extraction and dimensionality reduction. PCA deals with a single dataset at a
time, and it is challenged when it comes to analyzing multiple datasets. Yet in
certain setups, one wishes to extract the most significant information of one
dataset relative to other datasets. Specifically, the interest may be in identifying and extracting features that are specific to a single target dataset but not present in the others. This paper develops a novel approach for such
so-termed discriminative data analysis, and establishes its optimality in the
least-squares (LS) sense under suitable data modeling assumptions. The
criterion reveals linear combinations of variables by maximizing the ratio of
the variance of the target data to that of the remainders. The novel approach
solves a generalized eigenvalue problem by performing SVD just once. Numerical
tests using synthetic and real datasets showcase the merits of the proposed
approach relative to its competing alternatives.
| 1 | 0 | 0 | 1 | 0 | 0 |
Modeling Magnetic Anisotropy of Single Chain Magnets in $|d/J| \geq 1$ Regime | Single molecule magnets (SMMs) with single-ion anisotropies $\mathbf d$,
comparable to the exchange interactions $J$ between spins, have recently been
synthesized. In this paper, we provide theoretical insights into the magnetism
of such systems. We study spin chains with site spins, s=1, 3/2 and 2 and
on-site anisotropy $\mathbf d$ comparable to the exchange constants between the
spins. We find that large $\mathbf d$ leads to crossing of the states with
different $M_S$ values in the same spin manifold of the $\mathbf d = 0$ limit.
For very large $\mathbf d$'s we also find that the $M_S$ states of the higher
energy spin states descend below the $M_S$ states of the ground state spin
manifold. Total spin in this limit is no longer conserved and describing the
molecular anisotropy by the constants $D_M$ and $E_M$ is not possible. However,
the total spin of the low-lying large $M_S$ states is very nearly an integer
and using this spin value it is possible to construct an effective spin
Hamiltonian and compute the molecular magnetic anisotropy constants $D_M$ and
$E_M$. We report the effects of finite size, rotations of site anisotropies and
chain dimerization on the effective anisotropy of the spin chains.
| 0 | 1 | 0 | 0 | 0 | 0 |
Compressive optical interferometry | Compressive sensing (CS) combines data acquisition with compression coding to
reduce the number of measurements required to reconstruct a sparse signal. In
optics, this usually takes the form of projecting the field onto sequences of
random spatial patterns that are selected from an appropriate random ensemble.
We show here that CS can be exploited in `native' optics hardware without
introducing added components. Specifically, we show that random sub-Nyquist
sampling of an interferogram helps reconstruct the field modal structure. The
distribution of reduced sensing matrices corresponding to random measurements
is provably incoherent and isotropic, which helps us carry out CS successfully.
| 0 | 1 | 0 | 0 | 0 | 0 |
Automated labeling of bugs and tickets using attention-based mechanisms in recurrent neural networks | We explore solutions for automated labeling of content in bug trackers and
customer support systems. In order to do that, we classify content in terms of
several criteria, such as priority or product area. In the first part of the
paper, we provide an overview of existing methods used for text classification.
These methods fall into two categories - the ones that rely on neural networks
and the ones that don't. We evaluate results of several solutions of both
kinds. In the second part of the paper we present our own recurrent neural
network solution based on the hierarchical attention paradigm. It consists of
several Hierarchical Attention network blocks with varying Gated Recurrent Unit
cell sizes and a complementary shallow network that goes alongside. Lastly, we
evaluate the above-mentioned methods when predicting fields from two datasets -
Arch Linux bug tracker and Chromium bug tracker. Our contributions include a
comprehensive benchmark between a variety of methods on relevant datasets; a
novel solution that outperforms previous generation methods; and two new
datasets that are made public for further research.
| 0 | 0 | 0 | 1 | 0 | 0 |
Robot gains Social Intelligence through Multimodal Deep Reinforcement Learning | For robots to coexist with humans in a social world like ours, it is crucial
that they possess human-like social interaction skills. Programming a robot to
possess such skills is a challenging task. In this paper, we propose a
Multimodal Deep Q-Network (MDQN) to enable a robot to learn human-like
interaction skills through a trial and error method. This paper aims to develop
a robot that gathers data during its interaction with a human and learns human
interaction behaviour from the high-dimensional sensory information using
end-to-end reinforcement learning. This paper demonstrates that the robot was
able to learn basic interaction skills successfully, after 14 days of
interacting with people.
| 1 | 0 | 0 | 1 | 0 | 0 |
Preprint Déjà Vu: an FAQ | I give a brief overview of arXiv history, and describe the current state of
arXiv practice, both technical and sociological. This commentary originally
appeared in the EMBO Journal, 19 Oct 2016. It was intended as an update on
comments from the late 1990s regarding use of preprints by biologists (or lack
thereof), but may be of interest to practitioners of other disciplines. It is
based largely on a keynote presentation I gave to the ASAPbio inaugural meeting
in Feb 2016, and responds as well to some follow-up questions.
| 1 | 1 | 0 | 0 | 0 | 0 |
Implications of hydrodynamical simulations for the interpretation of direct dark matter searches | In recent years, realistic hydrodynamical simulations of galaxies like the
Milky Way have become available, enabling a reliable estimate of the dark
matter density and velocity distribution in the Solar neighborhood. We review
here the status of hydrodynamical simulations and their implications for the
interpretation of direct dark matter searches. We focus in particular on: the
criteria to identify Milky Way-like galaxies; the impact of baryonic physics on
the dark matter velocity distribution; the possible presence of substructures
like clumps, streams, or dark disks; and on the implications for the direct
detection of dark matter with standard and non-standard interactions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Stable Charged Antiparallel Domain Walls in Hyperferroelectrics | Charge-neutral 180$^\circ$ domain walls that separate domains of antiparallel
polarization directions are common structural topological defects in
ferroelectrics. In normal ferroelectrics, charged 180$^\circ$ domain walls
running perpendicular to the polarization directions are highly energetically
unfavorable because of the depolarization field and are difficult to stabilize.
We explore both neutral and charged 180$^\circ$ domain walls in
hyperferroelectrics, a class of proper ferroelectrics with persistent
polarization in the presence of a depolarization field, using density
functional theory. We obtain zero temperature equilibrium structures of
head-to-head and tail-to-tail walls in recently discovered $ABC$-type hexagonal
hyperferroelectrics. Charged domain walls can also be stabilized in canonical
ferroelectrics represented by LiNbO$_3$ without any dopants, defects or
mechanical clamping. First-principles electronic structure calculations show
that charged domain walls can reduce and even close the band gap of host
materials and support a quasi-two-dimensional electron (hole) gas with enhanced
electrical conductivity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Healing Data Loss Problems in Android Apps | Android apps should be designed to cope with stop-start events, which are the
events that require stopping and restoring the execution of an app while
leaving its state unaltered. These events can be caused by run-time
configuration changes, such as a screen rotation, and by context-switches, such
as a switch from one app to another. When a stop-start event occurs, Android
saves the state of the app, handles the event, and finally restores the saved
state. To let Android save and restore the state correctly, apps must provide
the appropriate support. Unfortunately, Android developers often implement this
support incorrectly, or do not implement it at all. This bad practice makes apps react incorrectly to stop-start events, thus generating what we call data loss problems, that is, Android apps that lose user data, behave unexpectedly, or crash due to program variables that have lost their values. Data
loss problems are difficult to detect because they might be observed only when
apps are in specific states and with specific inputs. Covering all the possible
cases with testing may require a large number of test cases whose execution
must be checked manually to discover whether the app under test has been
correctly restored after each stop-start event. It is thus important to
complement traditional in-house testing activities with mechanisms that can
protect apps as soon as a data loss problem occurs in the field. In this paper
we present DataLossHealer, a technique for automatically identifying and
healing data loss problems in the field as soon as they occur. DataLossHealer
is a technique that checks at run-time whether states are recovered correctly,
and heals the app when needed. DataLossHealer can learn from experience, incrementally reducing the overhead it introduces by avoiding the monitoring of interactions that the app has handled correctly in the past.
| 1 | 0 | 0 | 0 | 0 | 0 |
Emergent SU(N) symmetry in disordered SO(N) spin chains | Strongly disordered spin chains invariant under the SO(N) group are shown to
display antiferromagnetic phases with emergent SU(N) symmetry without fine
tuning. The phases with emergent SU(N) symmetry are of two kinds: one has a
ground state formed of randomly distributed singlets of strongly bound pairs of
SO(N) spins (the `mesonic' phase), while the other has a ground state composed
of singlets made out of strongly bound integer multiples of N SO(N) spins (the
`baryonic' phase). Although the mechanism is general, we argue that the cases
N=2,3,4 and 6 can in principle be realized with the usual spin and orbital
degrees of freedom.
| 0 | 1 | 0 | 0 | 0 | 0 |
Imaging structural transitions in organometallic molecules on Ag(100) for solar thermal energy storage | The use of opto-thermal molecular energy storage at the nanoscale creates new
opportunities for powering future microdevices with flexible synthetic
tailorability. Practical application of these molecular materials, however,
requires a deeper microscopic understanding of how their behavior is altered by
the presence of different types of substrates. Here we present single-molecule
resolved scanning tunneling microscopy imaging of thermally- and
optically-induced structural transitions in (fulvalene)tetracarbonyldiruthenium
molecules adsorbed onto a Ag(100) surface as a prototype system. Both the
parent complex and the photoisomer display distinct thermally-driven phase
transformations when they are in contact with a Ag(100) surface. This behavior
is consistent with the loss of carbonyl ligands due to strong molecule-surface
coupling. Ultraviolet radiation induces marked structural changes only in the
intact parent complex, thus indicating a photoisomerization reaction. These
results demonstrate how stimuli-induced structural transitions in this class of
molecule depend on the nature of the underlying substrate.
| 0 | 1 | 0 | 0 | 0 | 0 |
Spontaneous antiferromagnetic order and strain effect on electronic properties of $\alpha$-graphyne | Using a hybrid exchange-correlation functional in ab initio density functional
theory calculations, we study magnetic properties and strain effect on the
electronic properties of $\alpha$-graphyne monolayer. We find that a
spontaneous antiferromagnetic (AF) ordering occurs with an energy band gap ($\sim$
0.5 eV) in the equilibrated $\alpha$-graphyne. Bi-axial tensile strain enhances
the stability of the AF state as well as the staggered spin moment and the value of the energy gap. The antiferromagnetic semiconductor phase is quite robust against moderate carrier filling, with a threshold carrier density of up to 1.7$\times$10$^{14}$ electrons/cm$^2$ required to destabilize the phase. The spontaneous
AF ordering and strain effect in $\alpha$-graphyne can be well described by the
framework of the Hubbard model. Our study shows that it is essential to
consider the electronic correlation effect properly in $\alpha$-graphyne and
may pave an avenue for exploring magnetic ordering in other carbon allotropes
with mixed hybridization of s and p orbitals.
| 0 | 1 | 0 | 0 | 0 | 0 |
Big Data Technology Accelerates Genomics Precision Medicine | In genomics and life science research, data volumes are growing ever larger, now measured in terabytes (TB), petabytes (PB) or even exabytes (EB). The key problem is how to store and analyze these data in an optimized way. This paper demonstrates how Intel big data technology and architecture help to facilitate and accelerate genomics life science research in data storage and utilization. Intel provides the high-performance GenomicsDB for variant call data queries and the Lustre filesystem with Hierarchical Storage Management for genomics data storage. Based on these technologies, Intel defines a genomics knowledge sharing and exchange architecture, which has been deployed and validated at BGI China and Shanghai Children Hospital with very positive feedback. This big data technology can be scaled to many more genomics life science partners in the world.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dirichlet-to-Neumann or Poincaré-Steklov operator on fractals described by d -sets | In the framework of the Laplacian transport, described by a Robin boundary
value problem in an exterior domain in $\mathbb{R}^n$, we generalize the
definition of the Poincaré-Steklov operator to $d$-set boundaries, $n-2 < d < n$, and give its spectral properties, which we compare to the spectra of the interior domain and of a truncated domain, considered as an approximation of the exterior case. The well-posedness of the Robin boundary value problems
for the truncated and exterior domains is given in the general framework of
$n$-sets. The results are obtained thanks to a generalization of the continuity
and compactness properties of the trace and extension operators in Sobolev,
Lebesgue and Besov spaces, in particular, by a generalization of the classical
Rellich-Kondrachov Theorem of compact embeddings for $n$ and $d$-sets.
| 0 | 0 | 1 | 0 | 0 | 0 |
Grammar Variational Autoencoder | Deep generative models have been wildly successful at learning coherent
latent representations for continuous data such as video and audio. However,
generative modeling of discrete data such as arithmetic expressions and
molecular structures still poses significant challenges. Crucially,
state-of-the-art methods often produce outputs that are not valid. We make the
key observation that frequently, discrete data can be represented as a parse
tree from a context-free grammar. We propose a variational autoencoder which
encodes and decodes directly to and from these parse trees, ensuring the
generated outputs are always valid. Surprisingly, we show that not only does
our model more often generate valid outputs, it also learns a more coherent
latent space in which nearby points decode to similar discrete outputs. We
demonstrate the effectiveness of our learned models by showing their improved
performance in Bayesian optimization for symbolic regression and molecular
synthesis.
| 0 | 0 | 0 | 1 | 0 | 0 |
Optimal Bayesian Minimax Rates for Unconstrained Large Covariance Matrices | We obtain the optimal Bayesian minimax rate for the unconstrained large
covariance matrix of a multivariate normal sample with mean zero, when both the
sample size, n, and the dimension, p, of the covariance matrix tend to
infinity. Traditionally the posterior convergence rate is used to compare the
frequentist asymptotic performance of priors, but defining the optimality with
it is elusive. We propose a new decision theoretic framework for prior
selection and define Bayesian minimax rate. Under the proposed framework, we
obtain the optimal Bayesian minimax rate for the spectral norm for all rates of
p. We also consider the Frobenius norm, Bregman divergence and squared log-determinant loss, and obtain the optimal Bayesian minimax rate under certain
rate conditions on p. A simulation study is conducted to support the
theoretical results.
| 0 | 0 | 1 | 1 | 0 | 0 |
Retirement spending and biological age | We solve a lifecycle model in which the consumer's chronological age does not
move in lockstep with calendar time. Instead, biological age increases at a
stochastic non-linear rate in time like a broken clock that might occasionally
move backwards. In other words, biological age could actually decline. Our
paper is inspired by the growing body of medical literature that has identified
biomarkers which indicate how people age at different rates. This offers better
estimates of expected remaining lifetime and future mortality rates. It isn't
farfetched to argue that in the not-too-distant future personal age will be
more closely associated with biological vs. calendar age. Thus, after
introducing our stochastic mortality model we derive optimal consumption rates
in a classic Yaari (1965) framework adjusted to our proper clock time. In
addition to the normative implications of having access to biological age, our
positive objective is to partially explain the cross-sectional heterogeneity in
retirement spending rates at any given chronological age. In sum, we argue that
neither biological nor chronological age alone is a sufficient statistic for
making economic decisions. Rather, both ages are required to behave rationally.
| 0 | 0 | 0 | 0 | 0 | 1 |
Quantum X Waves with Orbital Angular Momentum in Nonlinear Dispersive Media | We present a complete and consistent quantum theory of generalised X waves
with orbital angular momentum (OAM) in dispersive media. We show that the
resulting quantised light pulses are affected by neither dispersion nor
diffraction and are therefore resilient against external perturbations. The
nonlinear interaction of quantised X waves in quadratic and Kerr nonlinear
media is also presented and studied in detail.
| 0 | 1 | 0 | 0 | 0 | 0 |
Role of Skin Friction Drag during Flow-Induced Reconfiguration of a Flexible Thin Plate | We investigate drag reduction due to the flow-induced reconfiguration of a
flexible thin plate in the presence of skin friction drag at low Reynolds number.
The plate is subjected to a uniform free stream and is tethered at one end. We
extend existing models in the literature to account for the skin friction drag.
The total drag on the plate, with respect to a rigid upright plate, decreases due to flow-induced reconfiguration, while further reconfiguration increases the total drag due to the increase in skin friction drag. A critical value of the Cauchy number
($Ca$) exists at which the total drag on the plate with respect to a rigid
upright plate is minimum at a given Reynolds number. The reconfigured shape of
the plate for this condition is unique, beyond which the total drag increases
on the plate even with reconfiguration. The ratio of the form drag coefficient
for an upright rigid plate and skin drag coefficient for a horizontal rigid
plate ($\lambda$) determines the critical Cauchy number ($Ca_{cr}$). We propose a modification of the drag scaling with free stream velocity ($F_{x} \propto U^{n}$) in the presence of skin friction drag. The following expressions for
$n$ are found for $0.01 \leq Re \leq 1$, $n = 4/5 + {\lambda}/5$ for 1 $\leq$
$Ca$ $<$ $Ca_{cr}$ and $n = 1 + {\lambda}/5$ for $Ca_{cr} \leq Ca \leq 300$,
where $Re$ is Reynolds number. We briefly discuss the combined effect of the
skin friction drag and buoyancy on the drag reduction. An assessment of the
feasibility of experiments is presented in order to translate the present model
to physical systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Generalization Error Bounds with Probabilistic Guarantee for SGD in Nonconvex Optimization | The success of deep learning has led to a rising interest in the
generalization property of the stochastic gradient descent (SGD) method, and
stability is one popular approach to study it. Existing works based on
stability have studied nonconvex loss functions, but only considered the
generalization error of the SGD in expectation. In this paper, we establish
various generalization error bounds with probabilistic guarantee for the SGD.
Specifically, for both general nonconvex loss functions and gradient dominant
loss functions, we characterize the on-average stability of the iterates
generated by SGD in terms of the on-average variance of the stochastic
gradients. Such characterization leads to improved bounds for the
generalization error for SGD. We then study the regularized risk minimization
problem with strongly convex regularizers, and obtain improved generalization
error bounds for proximal SGD. With strongly convex regularizers, we further
establish the generalization error bounds for nonconvex loss functions under
proximal SGD with high-probability guarantee, i.e., exponential concentration
in probability.
| 0 | 0 | 0 | 1 | 0 | 0 |
Distributed algorithm for empty vehicles management in personal rapid transit (PRT) network | In this paper, an original heuristic algorithm of empty vehicles management
in personal rapid transit network is presented. The algorithm is used for the
delivery of empty vehicles for waiting passengers, for balancing the
distribution of empty vehicles within the network, and for providing an empty
space for vehicles approaching a station. Each of these tasks involves a
decision on the trip that has to be done by a selected empty vehicle from its
actual location to some determined destination. The decisions are based on a
multi-parameter function involving a set of factors and thresholds. An
important feature of the algorithm is that it does not use any central database
of passenger input (demand) and locations of free vehicles. Instead, it is
based on the local exchange of data between stations: on their states and on
the vehicles they expect. Therefore, it seems well-tailored for a distributed
implementation. The algorithm is uniform, meaning that the same basic procedure
is used for multiple tasks using a task-specific set of parameters.
| 1 | 0 | 0 | 0 | 0 | 0 |
Controllability of the 1D Schrödinger equation using flatness | We derive in a direct way the exact controllability of the 1D free
Schrödinger equation with Dirichlet boundary control. We use the so-called
flatness approach, which consists in parametrizing the solution and the control
by the derivatives of a "flat output". This provides an explicit and very
regular control achieving the exact controllability in the energy space.
| 0 | 0 | 1 | 0 | 0 | 0 |
ICT Green Governance: new generation model based on Corporate Social Responsibility and Green IT | The strategy of sustainable development in the governance of information and
communication technology (ICT) is an advanced research area that addresses the
rising challenges posed by social and environmental requirements in the
implementation and establishment of a governance strategy. This paper offers a
new-generation governance model that we call "ICT Green Governance". The
proposed framework provides an original model based on the Corporate Social
Responsibility (CSR) concept and Green IT strategy. Facing increasing pressure
from stakeholders, the model offers a new vision of ICT governance to ensure
effective and efficient use of ICT in enabling an enterprise to achieve its
goals. We present here the relevance of our model, on the basis of a literature
review, and provide guidelines and principles for effective ICT governance
oriented towards sustainable development, in order to improve the economic, social
and environmental performance of companies.
| 1 | 0 | 0 | 0 | 0 | 0 |
Speaker Selective Beamformer with Keyword Mask Estimation | This paper addresses the problem of automatic speech recognition (ASR) of a
target speaker in background speech. The novelty of our approach is that we
focus on a wakeup keyword, which is usually used for activating ASR systems
like smart speakers. The proposed method firstly utilizes a DNN-based mask
estimator to separate the mixture signal into the keyword signal uttered by the
target speaker and the remaining background speech. Then the separated signals
are used for calculating a beamforming filter to enhance the subsequent
utterances from the target speaker. Experimental evaluations show that the
trained DNN-based mask can selectively separate the keyword and background
speech from the mixture signal. The effectiveness of the proposed method is
also verified with Japanese ASR experiments, and we confirm that the character
error rates are significantly improved by the proposed method for both
simulated and real recorded test sets.
| 1 | 0 | 0 | 0 | 0 | 0 |
Proof of an entropy conjecture of Leighton and Moitra | We prove the following conjecture of Leighton and Moitra. Let $T$ be a
tournament on $[n]$ and $S_n$ the set of permutations of $[n]$. For an arc $uv$
of $T$, let $A_{uv}=\{\sigma \in S_n \, : \, \sigma(u)<\sigma(v) \}$.
$\textbf{Theorem.}$ For a fixed $\varepsilon>0$, if $\mathbb{P}$ is a
probability distribution on $S_n$ such that
$\mathbb{P}(A_{uv})>1/2+\varepsilon$ for every arc $uv$ of $T$, then the binary
entropy of $\mathbb{P}$ is at most $(1-\vartheta_{\varepsilon})\log_2 n!$ for
some (fixed) positive $\vartheta_\varepsilon$.
When $T$ is transitive the theorem is due to Leighton and Moitra; for this
case we give a short proof with a better $\vartheta_\varepsilon$.
| 0 | 0 | 1 | 0 | 0 | 0 |
What drives transient behaviour in complex systems? | We study transient behaviour in the dynamics of complex systems described by
a set of non-linear ODEs. The destabilizing nature of transient trajectories is
discussed, together with its connection to the eigenvalue-based linearization procedure.
The complexity is realized as a random matrix drawn from a modified May-Wigner
model. Based on the initial response of the system, we identify a novel
stable-transient regime. We calculate exact abundances of typical and extreme
transient trajectories finding both Gaussian and Tracy-Widom distributions
known in extreme value statistics. We identify degrees of freedom driving
transient behaviour as connected to the eigenvectors and encoded in a
non-orthogonality matrix $T_0$. We accordingly extend the May-Wigner model to
contain a phase with typical transient trajectories present. An exact norm of
the trajectory is obtained in the vanishing $T_0$ limit where it describes a
normal matrix.
| 0 | 1 | 0 | 0 | 0 | 0 |
Complexity of strong approximation on the sphere | By assuming some widely-believed arithmetic conjectures, we show that the
task of accepting a number that is representable as a sum of $d\geq2$ squares
subjected to given congruence conditions is NP-complete. On the other hand, we
develop and implement a deterministic polynomial-time algorithm that represents
a number as a sum of 4 squares with some restricted congruence conditions, by
assuming a polynomial-time algorithm for factoring integers and
Conjecture~\ref{cc}. As an application, we develop and implement a
deterministic polynomial-time algorithm for navigating LPS Ramanujan graphs,
under the same assumptions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Correlation between Foam-Bubble Size and Drag Coefficient in Hurricane Conditions | A recently proposed model of the foam impact on the air-sea drag coefficient Cd
has been employed to estimate the variation of the effective foam-bubble radius
Rb with wind speed U10 in hurricane conditions. The model relates Cd (U10) to
the effective roughness length Zeff (U10), represented as a sum of the
aerodynamic roughness lengths of the foam-free and foam-covered sea surfaces,
Zw (U10) and Zf (U10), weighted with the foam coverage coefficient. For the
known phenomenological distributions Cd (U10) and Zw (U10) at strong wind
speeds, this relation is treated as an inverse problem for the effective
roughness parameter of the foam-covered sea surface, Zf (U10).
| 0 | 1 | 0 | 0 | 0 | 0 |
On the Structure of Superconducting Order Parameter in High-Temperature Fe-Based Superconductors | This paper discusses the synthesis, characterization, and comprehensive study
of Ba-122 single crystals with various substitutions and various $T_c$. The
paper uses five complementary techniques to obtain a self-consistent set of
data on the superconducting properties of Ba-122. A major conclusion of the
authors' work is the coexistence of two superconducting condensates differing in
the electron-boson coupling strength. The two gaps that develop in distinct
Fermi surface sheets are nodeless in the $k_xk_y$-plane and exhibit s-wave
symmetry; a two-band model suffices to describe the data. A
moderate interband coupling and a considerable Coulomb repulsion in the
description of the two-gap superconducting state of barium pnictides favor the
$s^{++}$-model.
| 0 | 1 | 0 | 0 | 0 | 0 |
Rigorous statistical analysis of HTTPS reachability | The use of secure connections using HTTPS as the default means, or even the
only means, to connect to web servers is increasing. It is being pushed from
both sides: from the bottom up by client distributions and plugins, and from
the top down by organisations such as Google. However, there are potential
technical hurdles that might lock some clients out of the modern web. This
paper seeks to measure and precisely quantify those hurdles in the wild. More
than three million measurements provide statistically significant evidence of
degradation. We show this through a variety of statistical techniques. Various
factors are shown to influence the problem, ranging from the client's browser,
to the locale from which they connect.
| 1 | 0 | 0 | 1 | 0 | 0 |
Exploiting Negative Curvature in Deterministic and Stochastic Optimization | This paper addresses the question of whether it can be beneficial for an
optimization algorithm to follow directions of negative curvature. Although
prior work has established convergence results for algorithms that integrate
both descent and negative curvature steps, there has not yet been extensive
numerical evidence showing that such methods offer consistent performance
improvements. In this paper, we present new frameworks for combining descent
and negative curvature directions: alternating two-step approaches and dynamic
step approaches. The aspect that distinguishes our approaches from ones
previously proposed is that they make algorithmic decisions based on
(estimated) upper-bounding models of the objective function. A consequence of
this aspect is that our frameworks can, in theory, employ fixed stepsizes,
which makes the methods readily translatable from deterministic to stochastic
settings. For deterministic problems, we show that instances of our dynamic
framework yield gains in performance compared to related methods that only
follow descent steps. We also show that gains can be made in a stochastic
setting in cases when a standard stochastic-gradient-type method might make
slow progress.
| 0 | 0 | 1 | 0 | 0 | 0 |
Outer automorphism groups of right-angled Coxeter groups are either large or virtually abelian | We generalise the notion of a separating intersection of links (SIL) to give
necessary and sufficient criteria on the defining graph $\Gamma$ of a
right-angled Coxeter group $W_\Gamma$ so that its outer automorphism group is
large: that is, it contains a finite index subgroup that admits the free group
$F_2$ as a quotient. When $Out(W_\Gamma)$ is not large, we show it is virtually
abelian. We also show that the same dichotomy holds for the outer automorphism
groups of graph products of finite abelian groups. As a consequence, these
groups have property (T) if and only if they are finite, or equivalently
$\Gamma$ contains no SIL.
| 0 | 0 | 1 | 0 | 0 | 0 |
Retrieving Instantaneous Field of View and Geophysical Information for Atmospheric Limb Sounding with USGNC Near Real-Time Orbit Data | The Limb-imaging Ionospheric and Thermospheric Extreme-ultraviolet
Spectrograph (LITES) experiment is one of thirteen instruments aboard the Space
Test Program Houston 5 (STP-H5) payload on the International Space Station.
Along with the complementary GPS Radio Occultation and Ultraviolet Photometry
-- Colocated (GROUP-C) experiment, LITES will investigate ionospheric
structures and variability relevant to the global ionosphere. The ISS has an
orbital inclination of 51.6°, which, combined with its altitude of about 410
km enables middle- and low-latitude measurements from slightly above the peak
region of the ionosphere. The LITES instrument features a 10° by 10°
field of view which is collapsed horizontally, combining all information from a
given altitude. The instrument is installed such that it looks in the wake of the
ISS and about 14.5° downwards in order to image altitudes ranging from
about 350 km to 150 km. The actual viewing altitude and geometry is directly
dependent on the pitch of the ISS, affecting the geophysical information
captured by the instrument.
| 0 | 1 | 0 | 0 | 0 | 0 |
Better than counting: Density profiles from force sampling | Calculating one-body density profiles in equilibrium via particle-based
simulation methods involves counting particle occurrences at
(histogram-resolved) space points. Here we investigate an alternative method
based on a histogram of the local force density. Via an exact sum rule the
density profile is obtained with a simple spatial integration. The method
circumvents the inherent ideal gas fluctuations. We have tested the method in
Monte Carlo, Brownian Dynamics and Molecular Dynamics simulations. The results
carry a statistical uncertainty smaller than that of the standard, counting,
method, thereby reducing the computation time.
| 0 | 1 | 0 | 0 | 0 | 0 |
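The force-sampling idea in the abstract above can be illustrated in one dimension, where the exact sum rule reads kT dρ/dx = f(x), with f the one-body force density: histogram the sampled forces, then integrate. The following is a minimal single-particle sketch, not the authors' code; the harmonic potential, the Metropolis settings and the bin layout are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0                        # inverse temperature, kT = 1
U = lambda x: 0.5 * x**2          # illustrative harmonic potential
F = lambda x: -x                  # force F(x) = -dU/dx

# Metropolis sampling of the Boltzmann density rho(x) ~ exp(-beta * U(x))
n, x = 200_000, 0.0
xs = np.empty(n)
for i in range(n):
    prop = x + rng.normal()
    if rng.random() < np.exp(-beta * (U(prop) - U(x))):
        x = prop
    xs[i] = x

edges = np.linspace(-4.0, 4.0, 81)
dx = edges[1] - edges[0]

# Standard estimator: count occurrences per bin
rho_count, _ = np.histogram(xs, bins=edges, density=True)

# Force-sampling estimator: histogram the local force density f(x),
# then integrate the sum rule  kT * drho/dx = f(x)
f_hist, _ = np.histogram(xs, bins=edges, weights=F(xs))
f_density = f_hist / (n * dx)
rho_force = beta * np.cumsum(f_density) * dx   # rho assumed ~0 at the left edge
```

For this Gaussian target, both estimators converge to exp(-x²/2)/√(2π); the integrated force estimate is visibly smoother than the raw counts.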
Integrating self-efficacy into a gamified approach to thwart phishing attacks | Security exploits can include cyber threats such as computer programs that
can disturb the normal behavior of computer systems (viruses), unsolicited
e-mail (spam), malicious software (malware), monitoring software (spyware),
attempting to make computer resources unavailable to their intended users
(Distributed Denial-of-Service or DDoS attacks), social engineering, and
online identity theft (phishing). One such cyber threat, which is particularly
dangerous to computer users, is phishing. Phishing is well known as online
identity theft, which aims to steal victims' sensitive information such as
usernames, passwords and online banking details. This paper focuses on
designing an innovative and gamified approach to educate individuals about
phishing attacks. The study asks how self-efficacy, which correlates with the
user's knowledge, can be integrated into an anti-phishing educational game to
thwart phishing attacks. One of the main reasons phishing succeeds would
appear to be a lack of user knowledge of how to prevent phishing attacks.
Therefore, this research investigates the elements that influence phishing
prevention (in this case, conceptual knowledge, procedural knowledge, or their
interaction effect) and then integrates them into an anti-phishing educational
game to enhance people's phishing prevention behaviour through their
motivation.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quantum dynamics of bosons in a two-ring ladder: dynamical algebra, vortex-like excitations and currents | We study the quantum dynamics of the Bose-Hubbard model on a ladder formed by
two rings coupled by tunneling effect. By implementing the Bogoliubov
approximation scheme, we prove that, despite the presence of the inter-ring
coupling term, the Hamiltonian decouples into many independent sub-Hamiltonians
$\hat{H}_k$ associated to momentum-mode pairs $\pm k$. Each sub-Hamiltonian
$\hat{H}_k$ is then shown to be part of a specific dynamical algebra. The
properties of the latter allow us to perform the diagonalization process, to
find the energy spectrum and the conserved quantities of the model, and to derive the
time evolution of important physical observables. We then apply this solution
scheme to the simplest possible closed ladder, the double trimer. After
observing that the excitations of the system are weakly-populated vortices, we
explore the corresponding dynamics by varying the initial conditions and the
model parameters. Finally, we show that the inter-ring tunneling determines a
spectral collapse when approaching the border of the dynamical-stability
region.
| 0 | 1 | 0 | 0 | 0 | 0 |
A New Convolutional Network-in-Network Structure and Its Applications in Skin Detection, Semantic Segmentation, and Artifact Reduction | The inception network has been shown to provide good performance on image
classification problems, but there is little evidence that it is also
effective for image restoration or pixel-wise labeling problems. For image
restoration problems, pooling is generally not used because the decimated
features are not helpful for reconstructing an image as the output. Moreover,
most deep learning architectures for restoration problems do not use dense
prediction, which needs many training parameters. From these observations, to
bring the performance of inception-like structures to such image-based
problems, we propose a new convolutional network-in-network structure. The
proposed network can be considered a modification of the inception structure
in which the pool projection and pooling layer are removed to maintain the
full feature-map size, and a larger kernel filter is added instead. The
proposed network greatly reduces the number of parameters owing to the removal
of dense prediction and pooling, which is an advantage, but this may also
reduce the receptive field in each layer. Hence, we add a kernel larger than
in the original inception structure so as not to increase the depth of the
layers. The proposed structure is applied to typical image-to-image learning
problems, i.e., problems where the input and output have the same size, such
as skin detection,
semantic segmentation, and compression artifacts reduction. Extensive
experiments show that the proposed network brings comparable or better results
than the state-of-the-art convolutional neural networks for these problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Rapid rotators revisited: absolute dimensions of KOI-13 | We analyse Kepler light-curves of the exoplanet KOI-13b transiting its
moderately rapidly rotating (gravity-darkened) parent star. A physical model,
with minimal ad hoc free parameters, reproduces the time-averaged light-curve
at the ca. 10 parts per million level. We demonstrate that this Roche-model
solution allows the absolute dimensions of the system to be determined from the
star's projected equatorial rotation speed, v(e)sin(i), without any additional
assumptions; we find a planetary radius 1.33+/-0.05 R(Jup), stellar polar
radius 1.55+/-0.06 R(sun), combined mass M(*) + M(P) (\simeq M*) = 1.47 +/-
0.17 M(sun), and distance d \simeq 370+/-25 pc, where the errors are dominated
by uncertainties in the relative flux contribution of the visual-binary companion
KOI-13B. The implied stellar rotation period is within ca. 5% of the
non-orbital, 25.43-hr signal found in the Kepler photometry. We show that the
model accurately reproduces independent tomographic observations, and yields an
offset between orbital and stellar-rotation angular-momentum vectors of
60.25+/-0.05 degrees.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning Linear Dynamical Systems via Spectral Filtering | We present an efficient and practical algorithm for the online prediction of
discrete-time linear dynamical systems with a symmetric transition matrix. We
circumvent the non-convex optimization problem using improper learning:
carefully overparameterize the class of LDSs by a polylogarithmic factor, in
exchange for convexity of the loss functions. From this arises a
polynomial-time algorithm with a near-optimal regret guarantee, with an
analogous sample complexity bound for agnostic learning. Our algorithm is based
on a novel filtering technique, which may be of independent interest: we
convolve the time series with the eigenvectors of a certain Hankel matrix.
| 1 | 0 | 0 | 1 | 0 | 0 |
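The filtering step mentioned in the abstract above can be sketched concretely. As we recall the construction, the filters are the top eigenvectors of a specific T×T Hankel matrix with entries 2/((i+j)³ − (i+j)) (1-indexed), and the resulting features feed a convex (linear) predictor; treat the exact matrix and the regression stage as assumptions to verify against the paper.

```python
import numpy as np

def spectral_filters(T, k):
    """Top-k eigenvectors of the Hankel matrix with entries
    Z[i, j] = 2 / ((i + j)**3 - (i + j)), indices starting at 1."""
    idx = np.arange(1, T + 1)
    s = idx[:, None] + idx[None, :]
    Z = 2.0 / (s**3 - s)
    vals, vecs = np.linalg.eigh(Z)                 # ascending eigenvalues
    return vals[::-1][:k], vecs[:, ::-1][:, :k]    # largest first

def filter_features(x, filters):
    """Convolve the recent history of a scalar series with each filter;
    these features would feed a linear predictor of the next output."""
    T, k = filters.shape
    feats = np.zeros((len(x), k))
    for t in range(len(x)):
        hist = x[max(0, t - T + 1): t + 1][::-1]   # most recent sample first
        feats[t] = filters[:len(hist)].T @ hist
    return feats
```

The eigenvalues of this Hankel matrix decay very quickly, which is why a small number k of filters suffices.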
Time pressure and honesty in a deception game | Previous experiments have found mixed results on whether honesty is intuitive
or requires deliberation. Here we add to this literature by building on prior
work of Capraro (2017). We report a large study (N=1,389) manipulating time
pressure vs time delay in a deception game. We find that, in this setting,
people are more honest under time pressure, and that this result is not driven
by confounds present in earlier work.
| 0 | 0 | 0 | 0 | 1 | 0 |
A deep learning approach to real-time parking occupancy prediction in spatio-temporal networks incorporating multiple spatio-temporal data sources | A deep learning model is proposed for predicting block-level parking
occupancy in real time. The model leverages Graph-Convolutional Neural Networks
(GCNN) to extract the spatial relations of traffic flow in large-scale
networks, and utilizes Recurrent Neural Networks (RNN) with Long-Short Term
Memory (LSTM) to capture the temporal features. In addition, the model is
capable of taking multiple heterogeneously structured traffic data sources as
input, such as parking meter transactions, traffic speed, and weather
conditions. The model performance is evaluated through a case study in
Pittsburgh downtown area. The proposed model outperforms other baseline methods
including multi-layer LSTM and Lasso with an average testing MAPE of 12.0\%
when predicting block-level parking occupancies 30 minutes in advance. The case
study also shows that, in general, the prediction model works better for
business areas than for recreational locations. We found that incorporating
traffic speed and weather information can significantly improve the prediction
performance. Weather data is particularly useful for improving predicting
accuracy in recreational areas.
| 1 | 0 | 0 | 1 | 0 | 0 |
Time series experiments and causal estimands: exact randomization tests and trading | We define causal estimands for experiments on single time series, extending
the potential outcome framework to deal with temporal data. Our approach
allows the estimation of some of these estimands and exact randomization based
p-values for testing causal effects, without imposing stringent assumptions. We
test our methodology on simulated "potential autoregressions," which have a
causal interpretation. Our methodology is partially inspired by data from a
large number of experiments carried out by a financial company who compared the
impact of two different ways of trading equity futures contracts. We use our
methodology to make causal statements about their trading methods.
| 0 | 0 | 1 | 1 | 0 | 0 |
Sparse phase retrieval of one-dimensional signals by Prony's method | In this paper, we show that sparse signals f representable as a linear
combination of a finite number N of spikes at arbitrary real locations or as a
finite linear combination of B-splines of order m with arbitrary real knots can
be almost surely recovered from O(N^2) Fourier intensity measurements up to
trivial ambiguities. The constructive proof consists of two steps, where in the
first step the Prony method is applied to recover all parameters of the
autocorrelation function and in the second step the parameters of f are
derived. Moreover, we present an algorithm to evaluate f from its Fourier
intensities and illustrate it with different numerical examples.
| 1 | 0 | 1 | 0 | 0 | 0 |
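The paper's first step applies Prony's method to the autocorrelation function. The classical method itself, for exact samples of a finite exponential sum, can be sketched as follows; the recovery from Fourier intensity measurements and the handling of trivial ambiguities are not reproduced here.

```python
import numpy as np

def prony(samples, N):
    """Recover nodes z_j and coefficients c_j of f(t) = sum_j c_j * z_j**t
    from the 2N exact samples f(0), ..., f(2N-1)."""
    f = np.asarray(samples, dtype=complex)
    # f satisfies a linear recurrence whose characteristic polynomial
    # p(z) = z**N + q_{N-1} z**(N-1) + ... + q_0 has the nodes as roots
    H = np.array([[f[i + j] for j in range(N)] for i in range(N)])  # Hankel
    q = np.linalg.solve(H, -f[N:2 * N])
    z = np.roots(np.concatenate(([1.0], q[::-1])))
    # coefficients from the Vandermonde system f(t) = sum_j c_j * z_j**t
    V = np.vander(z, 2 * N, increasing=True).T
    c, *_ = np.linalg.lstsq(V, f, rcond=None)
    return z, c
```

With noise-free data and distinct nodes, 2N samples determine the N spikes exactly, which mirrors the O(N²)-measurement count in the abstract.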
Atomic and electronic structure of a copper/graphene interface as prepared and 1.5 years after | We report the results of X-ray spectroscopy and Raman measurements of
as-prepared graphene on a high quality copper surface and the same materials
after 1.5 years under different conditions (ambient and low humidity). The
obtained results were compared with density functional theory calculations of
the formation energies and electronic structures of various structural defects
in graphene/Cu interfaces. For evaluation of the stability of the carbon cover,
we propose a two-step model. The first step is oxidation of the graphene, and
the second is perforation of graphene with the removal of carbon atoms as part
of the carbon dioxide molecule. Results of the modeling and experimental
measurements provide evidence that graphene grown on high-quality copper
substrate is robust and stable over time (1.5 years). However, the stability
of this interface depends on the quality of the graphene and the number of
native defects in the graphene and substrate. The effect of the presence of a
metallic substrate with defects on the stability and electronic structure of
graphene is also discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Modeling and Control of Humanoid Robots in Dynamic Environments: iCub Balancing on a Seesaw | Forthcoming applications concerning humanoid robots may involve physical
interaction between the robot and a dynamic environment. In such scenario,
classical balancing and walking controllers that neglect the environment
dynamics may not be sufficient for achieving a stable robot behavior. This
paper presents a modeling and control framework for balancing humanoid robots
in contact with a dynamic environment. We first model the robot and environment
dynamics, together with the contact constraints. Then, a control strategy for
stabilizing the full system is proposed. Theoretical results are verified in
simulation with robot iCub balancing on a seesaw.
| 1 | 0 | 0 | 0 | 0 | 0 |
Forward-Backward Selection with Early Dropping | Forward-backward selection is one of the most basic and commonly-used feature
selection algorithms available. It is also general and conceptually applicable
to many different types of data. In this paper, we propose a heuristic that
significantly improves its running time, while preserving predictive accuracy.
The idea is to temporarily discard the variables that are conditionally
independent of the outcome given the selected variable set. Depending on how
those variables are reconsidered and reintroduced, this heuristic gives rise to
a family of algorithms with increasingly stronger theoretical guarantees. In
distributions that can be faithfully represented by Bayesian networks or
maximal ancestral graphs, members of this algorithmic family are able to
correctly identify the Markov blanket in the sample limit. In experiments we
show that the proposed heuristic increases computational efficiency by about
two orders of magnitude in high-dimensional problems, while selecting fewer
variables and retaining predictive performance. Furthermore, we show that the
proposed algorithm and feature selection with LASSO perform similarly when
restricted to select the same number of variables, making the proposed
algorithm an attractive alternative for problems where no (efficient) algorithm
for LASSO exists.
| 1 | 0 | 0 | 1 | 0 | 0 |
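A minimal sketch of the Early Dropping heuristic on top of forward selection. This is a simplification of the paper's algorithm: the association measure here is a sample partial correlation with a fixed threshold rather than a proper conditional-independence test, and the backward phase and multiple-run variants are omitted.

```python
import numpy as np

def _residual(y, X):
    """Residual of least-squares regression of y on the columns of X (+ intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ coef

def forward_early_drop(X, y, thresh=0.05):
    """Forward selection with Early Dropping: candidates whose partial
    association with y given the selected set falls below `thresh` are
    removed from the remaining pool instead of being retested every step."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        ry = _residual(y, X[:, selected])
        score = {}
        for j in remaining:
            rj = _residual(X[:, j], X[:, selected])
            denom = np.linalg.norm(ry) * np.linalg.norm(rj)
            score[j] = abs(ry @ rj) / denom if denom > 0 else 0.0
        remaining = [j for j in remaining if score[j] >= thresh]  # early dropping
        if not remaining:
            break
        best = max(remaining, key=score.get)
        selected.append(best)
        remaining.remove(best)
    return selected
```

The speedup comes from the shrinking candidate pool: dropped variables are never re-scored within a run.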
Ultracold bosonic scattering dynamics off a repulsive barrier: coherence loss at the dimensional crossover | We explore the impact of dimensionality on the scattering of a small bosonic
ensemble in an elongated harmonic trap off a centered repulsive barrier,
thereby taking particle correlations into account. The loss of coherence as
well as the oscillation of the center of mass are studied and we analyze the
influence of both particle and spatial correlations. Two different mechanisms
of coherence loss are found, depending on the aspect ratio. For small
aspect ratios, loss of coherence between the region close to the barrier and
outer regions occurs, due to spatial correlations, and for large aspect ratios,
incoherence between the two density fragments of the left and right side of the
barrier arises, due to particle correlations. Apart from the decay of the
center of mass motion induced by the reflection and transmission, further
effects due to the particle and spatial correlations are explored. For tight
transversal traps, the amplitude of the center of mass oscillation experiences
a weaker damping, which can be traced back to the population of a second
natural orbital, and for a weaker transversal confinement, we detect a strong
decay, due to the possibility of transferring energy to transversal excited
modes. These effects are enhanced if the aspect ratio is integer valued.
| 0 | 1 | 0 | 0 | 0 | 0 |
On Lebesgue Integral Quadrature | A new type of quadrature is developed. The Gauss quadrature, for a given
measure, finds optimal values of a function's argument (nodes) and the
corresponding weights. In contrast, the Lebesgue quadrature developed in this
paper, finds optimal values of the function (value-nodes) and the corresponding
weights. The Gauss quadrature groups sums by function argument; it can be
viewed as an $n$-point discrete measure, producing the Riemann integral. The
Lebesgue quadrature groups sums by function value; it can be viewed as an
$n$-point discrete distribution, producing the Lebesgue integral.
Mathematically, the problem is reduced to a generalized eigenvalue problem:
Lebesgue quadrature value-nodes are the eigenvalues and the corresponding
weights are the square of the averaged eigenvectors. A numerical estimation of
an integral as the Lebesgue integral is especially advantageous when analyzing
irregular and stochastic processes. The approach separates the outcome
(value-nodes) and the probability of the outcome (weight). For this reason, it
is especially well-suited for the study of non-Gaussian processes. The
software implementing the theory is available from the authors.
| 0 | 0 | 0 | 1 | 0 | 0 |
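The generalized eigenvalue construction in the abstract above can be sketched for a discrete measure. This is a sketch under our reading of the construction, using a plain monomial basis and a Cholesky reduction; the authors' software presumably uses orthogonal polynomials and is numerically more careful.

```python
import numpy as np

def lebesgue_quadrature(x, f, mu, n):
    """n-point Lebesgue quadrature of f with respect to the discrete
    measure (x, mu): solve M psi = lam G psi with G[j,k] = <x^j x^k> and
    M[j,k] = <f x^j x^k>.  Eigenvalues lam are the value-nodes; the weight
    of each node is the square of the 'averaged' eigenvector."""
    P = x[None, :] ** np.arange(n)[:, None]     # monomials x^0 .. x^(n-1)
    G = (P * mu) @ P.T                          # Gram matrix
    M = (P * (mu * f)) @ P.T                    # f-weighted moment matrix
    # reduce the generalized problem with a Cholesky factor of G
    Linv = np.linalg.inv(np.linalg.cholesky(G))
    lam, u = np.linalg.eigh(Linv @ M @ Linv.T)
    psi = Linv.T @ u                            # psi.T @ G @ psi = identity
    weights = (psi.T @ (P @ mu)) ** 2           # squared averaged eigenvectors
    return lam, weights
```

By construction the weights sum to the total mass of the measure, and the weighted value-nodes reproduce the integral of f, which can be checked directly.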
Protection Number in Plane Trees | The protection number of a plane tree is the minimal distance of the root to
a leaf; this definition carries over to an arbitrary node in a plane tree by
considering the maximal subtree having this node as a root. We study the
protection number of a uniformly chosen random tree of size $n$ and also the
protection number of a uniformly chosen node in a uniformly chosen random tree
of size $n$. The method is to apply singularity analysis to appropriate
generating functions. Additional results are provided as well.
| 0 | 0 | 1 | 0 | 0 | 0 |
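For concreteness, the quantity studied above can be computed directly on a nested-list representation of plane trees; this is a small illustrative sketch, not taken from the paper.

```python
def protection(tree):
    """Protection number of the root: minimal distance from the root to a
    leaf.  A plane tree is a list of its subtrees; [] is a leaf."""
    if not tree:
        return 0
    return 1 + min(protection(child) for child in tree)

def all_protections(tree):
    """Protection number of every node, each node rooting its maximal
    subtree (the quantity behind the 'uniformly chosen node' statistic)."""
    out = [protection(tree)]
    for child in tree:
        out.extend(all_protections(child))
    return out
```

For example, a root with one leaf child has protection number 1, while a root all of whose children carry at least one further level has protection number at least 2.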
Improved bounds for restricted families of projections to planes in $\mathbb{R}^{3}$ | For $e \in S^{2}$, the unit sphere in $\mathbb{R}^3$, let $\pi_{e}$ be the
orthogonal projection to $e^{\perp} \subset \mathbb{R}^{3}$, and let $W \subset
\mathbb{R}^{3}$ be any $2$-plane, which is not a subspace. We prove that if $K
\subset \mathbb{R}^{3}$ is a Borel set with $\dim_{\mathrm{H}} K \leq
\tfrac{3}{2}$, then $\dim_{\mathrm{H}} \pi_{e}(K) = \dim_{\mathrm{H}} K$ for
$\mathcal{H}^{1}$ almost every $e \in S^{2} \cap W$, where $\mathcal{H}^{1}$
denotes the $1$-dimensional Hausdorff measure and $\dim_{\mathrm{H}}$ the
Hausdorff dimension. This was known earlier, due to Järvenpää,
Järvenpää, Ledrappier and Leikas, for Borel sets $K \subset
\mathbb{R}^{3}$ with $\dim_{\mathrm{H}} K \leq 1$. We also prove a partial
result for sets with dimension exceeding $3/2$, improving earlier bounds by D.
Oberlin and R. Oberlin.
| 0 | 0 | 1 | 0 | 0 | 0 |
Differential galois theory and mechanics | The classical Galois theory deals with certain finite algebraic extensions
and establishes a bijective order reversing correspondence between the
intermediate fields and the subgroups of a group of permutations called the
Galois group of the extension. It has been the dream of many mathematicians at
the end of the nineteenth century to generalize these results to systems of
algebraic partial differential (PD) equations and the corresponding finitely
generated differential extensions, in order to be able to add the word
differential in front of any classical statement. The achievement of the
Picard-Vessiot theory by E. Kolchin between 1950 and 1970 is now well known.
The purpose of this paper is to sketch the general theory for such differential
extensions and algebraic pseudogroups by means of new methods mixing
differential algebra, differential geometry and algebraic geometry. As already
discovered by E. Vessiot in 1904 through the use of automorphic systems, a
concept never acknowledged, the main point is to notice that the Galois theory
(old and new) is mainly a study of principal homogeneous spaces (PHS) for
algebraic groups or pseudogroups. Hence, all the formal theory of PD equations
developed by D.C. Spencer around 1970 must be used together with modern
algebraic geometry, in particular tensor products of rings and fields. However,
the combination of these new tools is not sufficient and we have to create the
analogue for Lie pseudogroups of the so-called invariant derivations introduced
by A. Bialynicki-Birula after 1960 in the study of algebraic groups and fields
with derivations. We shall finally prove the usefulness of the resulting
differential Galois theory through striking applications to mechanics,
revisiting shell theory, chain theory, the Frenet-Serret formulas and the
integration of Hamilton-Jacobi equations.
| 0 | 1 | 1 | 0 | 0 | 0 |
Note on "Average resistance of toroidal graphs" by Rossi, Frasca and Fagnani | In our recent paper W.S. Rossi, P. Frasca and F. Fagnani, "Average resistance
of toroidal graphs", SIAM Journal on Control and Optimization,
53(4):2541--2557, 2015, we studied how the average resistances of
$d$-dimensional toroidal grids depend on the graph topology and on the
dimension of the graph. Our results were based on the connection between
resistance and Laplacian eigenvalues. In this note, we contextualize our work
in the body of literature about random walks on graphs. Indeed, the average
effective resistance of the $d$-dimensional toroidal grid is proportional to
the mean hitting time of the simple random walk on that grid. If $d \geq 3$,
then the average resistance can be bounded uniformly in the number of nodes and
its value is of order $1/d$ for large $d$.
| 1 | 0 | 1 | 0 | 0 | 0 |
A Survey of Runtime Monitoring Instrumentation Techniques | Runtime Monitoring is a lightweight and dynamic verification technique that
involves observing the internal operations of a software system and/or its
interactions with other external entities, with the aim of determining whether
the system satisfies or violates a correctness specification. Compilation
techniques employed in Runtime Monitoring tools allow monitors to be
automatically derived from high-level correctness specifications (aka.
properties). This allows the same property to be converted into different types
of monitors, which may apply different instrumentation techniques for checking
whether the property was satisfied or not. In this paper we compare and
contrast the various types of monitoring methodologies found in the current
literature, and classify them into a spectrum of monitoring instrumentation
techniques, ranging from completely asynchronous monitoring on the one end to
completely synchronous monitoring on the other.
| 1 | 0 | 0 | 0 | 0 | 0 |
Interpretable Counting for Visual Question Answering | Questions that require counting a variety of objects in images remain a major
challenge in visual question answering (VQA). The most common approaches to VQA
involve either classifying answers based on fixed length representations of
both the image and question or summing fractional counts estimated from each
section of the image. In contrast, we treat counting as a sequential decision
process and force our model to make discrete choices of what to count.
Specifically, the model sequentially selects from detected objects and learns
interactions between objects that influence subsequent selections. A
distinction of our approach is its intuitive and interpretable output, as
discrete counts are automatically grounded in the image. Furthermore, our
method outperforms the state of the art architecture for VQA on multiple
metrics that evaluate counting.
| 1 | 0 | 0 | 0 | 0 | 0 |
How consistent is my model with the data? Information-Theoretic Model Check | The choice of model class is fundamental in statistical learning and system
identification, no matter whether the class is derived from physical principles
or is a generic black-box. We develop a method to evaluate the specified model
class by assessing its capability of reproducing data that is similar to the
observed data record. This model check is based on the information-theoretic
properties of models viewed as data generators and is applicable to e.g.
sequential data and nonlinear dynamical models. The method can be understood as
a specific two-sided posterior predictive test. We apply the
information-theoretic model check to both synthetic and real data and compare
it with a classical whiteness test.
| 1 | 0 | 0 | 1 | 0 | 0 |
Stable recovery of deep linear networks under sparsity constraints | We study a deep linear network expressed under the form of a matrix
factorization problem. It takes as input a matrix $X$ obtained by multiplying
$K$ matrices (called factors and corresponding to the action of a layer). Each
factor is obtained by applying a fixed linear operator to a vector of
parameters satisfying a sparsity constraint. In machine learning, the error
between the product of the estimated factors and $X$ (i.e. the reconstruction
error) relates to the statistical risk. The stable recovery of the parameters
defining the factors is required in order to interpret the factors and the
intermediate layers of the network. In this paper, we provide sharp conditions
on the network topology under which the error on the parameters defining the
factors (i.e. the stability of the recovered parameters) scales linearly with
the reconstruction error (i.e. the risk). Therefore, under these conditions on
the network topology, any successful learning task leads to robust and
therefore interpretable layers. The analysis is based on the recently proposed
Tensorial Lifting. The particularity of this paper is to consider a sparse
prior. As an illustration, we detail the analysis and provide sharp guarantees
for the stable recovery of convolutional linear network under sparsity prior.
As expected, the conditions are rather strong.
| 1 | 0 | 1 | 1 | 0 | 0 |
Revisiting Parametricity: Inductives and Uniformity of Propositions | Reynold's parametricity theory captures the property that parametrically
polymorphic functions behave uniformly: they produce related results on related
instantiations. In dependently-typed programming languages, such relations and
uniformity proofs can be expressed internally, and generated as a program
translation.
We present a new parametricity translation for a significant fragment of Coq.
Previous translations of parametrically polymorphic propositions allowed
non-uniformity. For example, on related instantiations, a function may return
propositions that are logically inequivalent (e.g. True and False). We show
that uniformity of polymorphic propositions is not achievable in general.
Nevertheless, our translation produces proofs that the two propositions are
logically equivalent and also that any two proofs of those propositions are
related. This is achieved at the cost of potentially requiring more assumptions
on the instantiations, requiring them to be isomorphic in the worst case.
Our translation augments the previous one for Coq by carrying and
compositionally building extra proofs about parametricity relations. It is made
easier by a new method for translating inductive types and pattern matching.
The new method builds upon and generalizes previous such translations for
dependently-typed programming languages.
Using reification and reflection, we have implemented our translation as Coq
programs. We obtain several stronger free theorems applicable to an ongoing
compiler-correctness project. Previously, proofs of some of these theorems took
several hours to finish.
| 1 | 0 | 0 | 0 | 0 | 0 |
Characterizing Minimal Semantics-preserving Slices of predicate-linear, Free, Liberal Program Schemas | A program schema defines a class of programs, all of which have identical
statement structure, but whose functions and predicates may differ. A schema
thus defines an entire class of programs according to how its symbols are
interpreted. A subschema of a schema is obtained from a schema by deleting some
of its statements. We prove that given a schema $S$ which is predicate-linear,
free and liberal, such that the true and false parts of every if predicate
satisfy a simple additional condition, and a slicing criterion defined by the
final value of a given variable after execution of any program defined by $S$,
the minimal subschema of $S$ which respects this slicing criterion contains all
the function and predicate symbols `needed' by the variable according to the
data dependence and control dependence relations used in program slicing, which
is the symbol set given by Weiser's static slicing algorithm. Thus this
algorithm gives predicate-minimal slices for classes of programs represented by
schemas satisfying our set of conditions. We also give an example to show that
the corresponding result with respect to the slicing criterion defined by
termination behaviour is incorrect. This complements a result by the authors in
which $S$ was required to be function-linear, instead of predicate-linear.
| 1 | 0 | 0 | 0 | 0 | 0 |
Exposing Twitter Users to Contrarian News | Polarized topics often spark discussion and debate on social media. Recent
studies have shown that polarized debates have a specific clustered structure
in the endorsement network, which indicates that users direct their
endorsements mostly to ideas they already agree with. Understanding these
polarized discussions and exposing social media users to content that broadens
their views is of paramount importance.
The contribution of this demonstration is two-fold. (i) A tool to visualize
retweet networks about controversial issues on Twitter. By using our
visualization, users can understand how polarized discussions are shaped on
Twitter, and explore the positions of the various actors. (ii) A solution to
reduce polarization of such discussions. We do so by exposing users to
information which presents a contrarian point of view. Users can visually
inspect our recommendations and understand why and how these would play out in
terms of the retweet network.
Our demo (this https URL homepage)
provides one of the first steps in developing automated tools that help users
explore, and possibly escape, their echo chambers. The ideas in the demo can
also help content providers design tools to broaden their reach to people with
different political and ideological backgrounds.
| 1 | 0 | 0 | 0 | 0 | 0 |
Value-Decomposition Networks For Cooperative Multi-Agent Learning | We study the problem of cooperative multi-agent reinforcement learning with a
single joint reward signal. This class of learning problems is difficult
because of the often large combined action and observation spaces. In the fully
centralized and decentralized approaches, we find the problem of spurious
rewards and a phenomenon we call the "lazy agent" problem, which arises due to
partial observability. We address these problems by training individual agents
with a novel value decomposition network architecture, which learns to
decompose the team value function into agent-wise value functions. We perform
an experimental evaluation across a range of partially-observable multi-agent
domains and show that learning such value-decompositions leads to superior
results, in particular when combined with weight sharing, role information and
information channels.
| 1 | 0 | 0 | 0 | 0 | 0 |
Application of a Shallow Neural Network to Short-Term Stock Trading | Machine learning is increasingly prevalent in stock market trading. Though
neural networks have seen success in computer vision and natural language
processing, they have not been as useful in stock market trading. To
demonstrate the applicability of a neural network in stock trading, we made a
single-layer neural network that recommends buying or selling shares of a stock
by comparing the highest high of 10 consecutive days with that of the next 10
days, a process repeated for the stock's year-long historical data. A
chi-squared analysis found that the neural network can accurately and
appropriately decide whether to buy or sell shares for a given stock, showing
that a neural network can make simple decisions about the stock market.
| 1 | 0 | 0 | 0 | 0 | 0 |
A characterization of round spheres in space forms | Let $\mathbb Q^{n+1}_c$ be the complete simply-connected $(n+1)$-dimensional
space form of curvature $c$. In this paper we obtain a new characterization of
geodesic spheres in $\mathbb Q^{n+1}_c$ in terms of the higher order mean
curvatures. In particular, we prove that the geodesic sphere is the only
complete bounded immersed hypersurface in $\mathbb Q^{n+1}_c,\;c\leq 0,$ with
constant mean curvature and constant scalar curvature. The proof relies on the
well known Omori-Yau maximum principle, a formula of Walter for the Laplacian
of the $r$-th mean curvature of a hypersurface in a space form, and a classical
inequality of Gårding for hyperbolic polynomials.
| 0 | 0 | 1 | 0 | 0 | 0 |
Oxygen - Dislocation interaction in zirconium from first principles | Plasticity in zirconium alloys is mainly controlled by the interaction of 1/3
1210 screw dislocations with oxygen atoms in interstitial octahedral sites of
the hexagonal close-packed lattice. This process is studied here using ab
initio calculations based on the density functional theory. The atomic
simulations show that a strong repulsion exists only when the O atoms lie in
the dislocation core and belong to the prismatic dislocation habit plane. This
is a consequence of the destruction of the octahedral sites by the stacking
fault arising from the dislocation dissociation. Because of the repulsion, the
dislocation partially cross-slips to an adjacent prismatic plane, in agreement
with experiments where the lattice friction on screw dislocations in Zr-O
alloys has been attributed to the presence of jogs on the dislocations due to
local cross-slip.
| 0 | 1 | 0 | 0 | 0 | 0 |
Generating Spatial Spectrum with Metasurfaces | Fourier optics, the principle of using Fourier Transformation to understand
the functionalities of optical elements, lies at the heart of modern optics,
and has been widely applied to optical information processing, imaging,
holography etc. While a simple thin lens is capable of resolving Fourier
components of an arbitrary optical wavefront, its operation is limited to near
normal light incidence, i.e. the paraxial approximation, which puts a severe
constraint on the resolvable Fourier domain. As a result, high-order Fourier
components are lost, resulting in extinction of high-resolution information of
an image. Here, we experimentally demonstrate a dielectric metasurface
consisting of a high-aspect-ratio silicon waveguide array, which is capable of
performing Fourier transform for a large incident angle range and a broad
operating bandwidth. Thus our device significantly expands the operational
Fourier space, benefitting from the large numerical aperture (NA), and
negligible angular dispersion at large incident angles. Our Fourier metasurface
will not only facilitate efficient manipulation of spatial spectrum of
free-space optical wavefront, but also be readily integrated into micro-optical
platforms due to its compact size.
| 0 | 1 | 0 | 0 | 0 | 0 |
Modal clustering asymptotics with applications to bandwidth selection | Density-based clustering relies on the idea of linking groups to some
specific features of the probability distribution underlying the data. The
reference to a true, yet unknown, population structure allows to frame the
clustering problem in a standard inferential setting, where the concept of
ideal population clustering is defined as the partition induced by the true
density function. The nonparametric formulation of this approach, known as
modal clustering, draws a correspondence between the groups and the domains of
attraction of the density modes. Operationally, a nonparametric density
estimate is required and a proper selection of the amount of smoothing,
governing the shape of the density and hence possibly the modal structure, is
crucial to identify the final partition. In this work, we address the issue of
density estimation for modal clustering from an asymptotic perspective. A
natural and easy to interpret metric to measure the distance between
density-based partitions is discussed, its asymptotic approximation explored,
and employed to study the problem of bandwidth selection for nonparametric
modal clustering.
| 0 | 0 | 0 | 1 | 0 | 0 |
Neural Networks for Beginners. A fast implementation in Matlab, Torch, TensorFlow | This report provides an introduction to some Machine Learning tools within
the most common development environments. It mainly focuses on practical
problems, skipping any theoretical introduction. It is oriented to both
students trying to approach Machine Learning and experts looking for new
frameworks.
| 1 | 0 | 0 | 1 | 0 | 0 |
Spectral determination of semi-regular polygons | Let us say that an $n$-sided polygon is semi-regular if it is
circumscriptible and its angles are all equal but possibly one, which is then
larger than the rest. Regular polygons, in particular, are semi-regular. We
prove that semi-regular polygons are spectrally determined in the class of
convex piecewise smooth domains. Specifically, we show that if $\Omega$ is a
convex piecewise smooth planar domain, possibly with straight corners, whose
Dirichlet or Neumann spectrum coincides with that of an $n$-sided semi-regular
polygon $P_n$, then $\Omega$ is congruent to $P_n$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Accuracy and validity of posterior distributions using the Cressie-Read empirical likelihoods | The class of Cressie-Read empirical likelihoods are constructed with weights
derived at a minimum distance from the empirical distribution in the
Cressie-Read family of divergences indexed by $\gamma$ under the constraint of
an unbiased set of $M$-estimating equations. At first order, they provide valid
posterior probability statements for any given prior, but the bias in coverage
of the resulting empirical quantile is inversely proportional to the asymptotic
efficiency of the corresponding $M$-estimator. The Cressie-Read empirical
likelihoods based on the maximum likelihood estimating equations bring about
quantiles covering with $O(n^{-1})$ accuracy at the underlying posterior
distribution. The choice of $\gamma$ has an impact on the small-sample variance
of the posterior quantile function. Examples are given for the $M$-type
estimating equations of location and for the quasi-likelihood functions in the
generalized linear models.
| 0 | 0 | 1 | 1 | 0 | 0 |
A Quillen's Theorem A for strict $\infty$-categories I: the simplicial proof | The aim of this paper is to prove a generalization of the famous Theorem A of
Quillen for strict $\infty$-categories. This result is central to the homotopy
theory of strict $\infty$-categories developed by the authors. The proof
presented here is of a simplicial nature and uses Steiner's theory of augmented
directed complexes. In a subsequent paper, we will prove the same result by
purely $\infty$-categorical methods.
| 0 | 0 | 1 | 0 | 0 | 0 |
Assessing Excited State Energy Gaps with Time-Dependent Density Functional Theory on Ru(II) Complexes | A set of density functionals coming from different rungs on Jacob's ladder
are employed to evaluate the electronic excited states of three Ru(II)
complexes. While most studies on the performance of density functionals compare
the vertical excitation energies, in this work we focus on the energy gaps
between the electronic excited states, of the same and different multiplicity.
Excited state energy gaps are important for example to determine radiationless
transition probabilities. Besides energies, a functional should deliver the
correct state character and state ordering. Therefore, wavefunction overlaps
are introduced to systematically evaluate the effect of different functionals
on the character of the excited states. As a reference, the energies and state
characters from multi-state complete active space second-order perturbation
theory (MS-CASPT2) are used. In comparison to MS-CASPT2, it is found that while
hybrid functionals provide better vertical excitation energies, pure
functionals typically give more accurate excited state energy gaps. Pure
functionals are also found to reproduce the state character and ordering in
closer agreement to MS-CASPT2 than the hybrid functionals.
| 0 | 1 | 0 | 0 | 0 | 0 |
Description of the evolution of inhomogeneities on a Dark Matter halo with the Vlasov equation | We use a direct numerical integration of the Vlasov equation in spherical
symmetry with a background gravitational potential to determine the evolution
of a collection of particles in different models of a galactic halo. Such a
collection is assumed to represent a dark matter inhomogeneity which reaches a
stationary state determined by the virialization of the system. We describe
some features of the stationary states and, by using several halo models,
obtain distinctive signatures for the evolution of the inhomogeneities in each
of the models.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fast generation of isotropic Gaussian random fields on the sphere | The efficient simulation of isotropic Gaussian random fields on the unit
sphere is a task encountered frequently in numerical applications. A fast
algorithm based on Markov properties and fast Fourier Transforms in 1d is
presented that generates samples on an n x n grid in O(n^2 log n). Furthermore,
an efficient method to set up the necessary conditional covariance matrices is
derived and simulations demonstrate the performance of the algorithm. An open
source implementation of the code has been made available at
this https URL .
| 0 | 0 | 1 | 1 | 0 | 0 |
Initial-boundary value problem to 2D Boussinesq equations for MHD convection with stratification effects | This paper is concerned with the initial-boundary value problem to 2D
magnetohydrodynamics-Boussinesq system with the temperature-dependent
viscosity, thermal diffusivity and electrical conductivity. First, we establish
the global weak solutions under the minimal initial assumption. Then by
imposing higher regularity assumption on the initial data, we obtain the global
strong solution with uniqueness. Moreover, the exponential decay estimate of
the solution is obtained.
| 0 | 0 | 1 | 0 | 0 | 0 |
Profile of a coherent vortex in two-dimensional turbulence at static pumping | We examine the velocity profile of coherent vortices appearing as a
consequence of the inverse cascade of two-dimensional turbulence in a finite
box in the case of static pumping. We demonstrate that in the passive regime
the flat velocity profile is realized, as in the case of pumping short
correlated in time. However, in the static case the energy flux to large scales
is dependent on the system parameters. We demonstrate that it is proportional
to $f^{4/3}$ where $f$ is the characteristic force exciting turbulence.
| 0 | 1 | 0 | 0 | 0 | 0 |
Computation of life expectancy from incomplete data | Estimating the human longevity and computing of life expectancy are central
to the population dynamics. These aspects were studied seriously by scientists
since fifteenth century, including renowned astronomer Edmund Halley. From
basic principles of population dynamics, we propose a method to compute life
expectancy from incomplete data.
| 0 | 0 | 0 | 0 | 1 | 0 |
Residual Squeeze VGG16 | Deep learning has given way to a new era of machine learning, apart from
computer vision. Convolutional neural networks have been implemented in image
classification, segmentation and object detection. Despite recent advancements,
we are still in the very early stages and have yet to settle on best practices
for network architecture in terms of deep design, small in size and a short
training time. In this work, we propose a very deep neural network comprised of
16 convolutional layers compressed with the Fire Module adapted from the
SqueezeNet model. We also call for the addition of residual connections to help
suppress degradation. This model can be implemented on almost every neural
network model with fully incorporated residual learning. This proposed model
Residual-Squeeze-VGG16 (ResSquVGG16) was trained from scratch on the
large-scale MIT Places365-Standard scene dataset. In our tests, the model
performed with accuracy similar to the pre-trained VGG16 model in Top-1 and
Top-5 validation accuracy, while also enjoying a 23.86% reduction in training
time and an 88.4% reduction in size.
| 1 | 0 | 0 | 0 | 0 | 0 |
Generic partiality for $\frac{3}{2}$-institutions | $\frac{3}{2}$-institutions have been introduced as an extension of
institution theory that accommodates implicitly partiality of the signature
morphisms together with its syntactic and semantic effects. In this paper we
show that ordinary institutions that are equipped with an inclusion system for
their categories of signatures naturally generate $\frac{3}{2}$-institutions
with explicit partiality for their signature morphisms. This provides a general
uniform way to build $\frac{3}{2}$-institutions for the foundations of
conceptual blending and software evolution. Moreover, our general construction allows for a uniform
derivation of some useful technical properties.
| 0 | 0 | 1 | 0 | 0 | 0 |
Independently Controllable Factors | It has been postulated that a good representation is one that disentangles
the underlying explanatory factors of variation. However, it remains an open
question what kind of training framework could potentially achieve that.
Whereas most previous work focuses on the static setting (e.g., with images),
we postulate that some of the causal factors could be discovered if the learner
is allowed to interact with its environment. The agent can experiment with
different actions and observe their effects. More specifically, we hypothesize
that some of these factors correspond to aspects of the environment which are
independently controllable, i.e., that there exists a policy and a learnable
feature for each such aspect of the environment, such that this policy can
yield changes in that feature with minimal changes to other features that
explain the statistical variations in the observed data. We propose a specific
objective function to find such factors and verify experimentally that it can
indeed disentangle independently controllable aspects of the environment
without any extrinsic reward signal.
| 1 | 0 | 0 | 1 | 0 | 0 |
Determination and biological application of a time dependent thermal parameter and sensitivity analysis for a conduction problem with superficial evaporation | A boundary value problem, which could represent a transcendent temperature
conduction problem with evaporation in a part of the boundary, was studied to
determine unknown thermophysical parameters, which can be constants or time
dependent functions. The goal of this paper was to elucidate which parameters may
be determined using only the measured superficial temperature in part of the
boundary of the domain. We formulated a nonlinear inverse problem to determine
the unknown parameters and a sensitivity analysis was also performed. In
particular, we introduced a new way of computing a sensitivity analysis of a
parameter which is variable in time. We applied the proposed method to model
tissue temperature changes under transient conditions in a biological problem:
the hamster cheek pouch. In this case, the time dependent unknown parameter can
be associated to the loss of heat due to water evaporation at the superficial
layer of the pouch. Finally, we performed the sensitivity analysis to determine
the parameters most sensitive to variations of the superficial experimental data
in the hamster cheek pouch.
| 0 | 1 | 0 | 0 | 0 | 0 |
Setting Boundaries with Memory: Generation of Topological Boundary States in Floquet-Induced Synthetic Crystals | When a d-dimensional quantum system is subjected to a periodic drive, it may
be treated as a (d+1)-dimensional system, where the extra dimension is a
synthetic one. In this work, we take these ideas to the next level by showing
that non-uniform potentials, and particularly edges, in the synthetic dimension
are created whenever the dynamics of the system has a memory component. We
demonstrate that topological states appear on the edges of these synthetic
dimensions and can be used as a basis for a wave packet construction. Such
systems may act as an optical isolator which allows transmission of light in a
directional way. We supplement our ideas by an example of a physical system
that shows this type of physics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Collaborative Nested Sampling: Big Data vs. complex physical models | The data torrent unleashed by current and upcoming astronomical surveys
demands scalable analysis methods. Many machine learning approaches scale well,
but separating the instrument measurement from the physical effects of
interest, dealing with variable errors, and deriving parameter uncertainties is
often an afterthought. Classic forward-folding analyses with Markov Chain
Monte Carlo or Nested Sampling enable parameter estimation and model
comparison, even for complex and slow-to-evaluate physical models. However,
these approaches require independent runs for each data set, implying an
unfeasible number of model evaluations in the Big Data regime. Here I present a
new algorithm, collaborative nested sampling, for deriving parameter
probability distributions for each observation. Importantly, the number of
physical model evaluations scales sub-linearly with the number of data sets,
and no assumptions about homogeneous errors, Gaussianity, the form of the model
or heterogeneity/completeness of the observations need to be made.
Collaborative nested sampling has immediate application in speeding up analyses
of large surveys, integral-field-unit observations, and Monte Carlo
simulations.
| 0 | 1 | 0 | 1 | 0 | 0 |
Latent Room-Temperature T$_c$ in Cuprate Superconductors | The ancient phrase, "All roads lead to Rome" applies to Chemistry and
Physics. Both are highly evolved sciences, with their own history, traditions,
language, and approaches to problems. Despite all these differences, these two
roads generally lead to the same place. For high temperature cuprate
superconductors however, the Chemistry and Physics roads do not meet or even
come close to each other. In this paper, we analyze the physics and chemistry
approaches to the doped electronic structure of cuprates and find the chemistry
doped hole (out-of-the-CuO$\mathrm{_2}$-planes) leads to explanations of a vast
array of normal state cuprate phenomenology using simple counting arguments.
The chemistry picture suggests that phonons are responsible for
superconductivity in cuprates. We identify the important phonon modes, and show
that the observed T$\mathrm{_c} \sim 100$ K, the T$\mathrm{_c}$-dome as a
function of hole doping, the change in T$\mathrm{_c}$ as a function of the
number of CuO$\mathrm{_2}$ layers per unit cell, the lack of an isotope effect
at optimal T$\mathrm{_c}$ doping, and the D-wave symmetry of the
superconducting Cooper pair wavefunction are all explained by the chemistry
picture. Finally, we show that "crowding" the dopants in cuprates leads to a
pair wavefunction with S-wave symmetry and T$\mathrm{_c}\approx280-390$ K.
Hence, we believe there is enormous "latent" T$\mathrm{_c}$ remaining in the
cuprate class of superconductors.
| 0 | 1 | 0 | 0 | 0 | 0 |
Characterization of minimizers of an anisotropic variant of the Rudin-Osher-Fatemi functional with $L^1$ fidelity term | In this paper we study an anisotropic variant of the Rudin-Osher-Fatemi
functional with $L^1$ fidelity term of the form \[ E(u) = \int_{\mathbb{R}^n}
\phi(\nabla u) + \lambda \| u -f \|_{L^1(\mathbb{R}^n)}. \] We will
characterize the minimizers of $E$ in terms of the Wulff shape of $\phi$ and
the dual anisotropy. In particular we will calculate the subdifferential of
$E$. We will apply this characterization to the special case $\phi = |\cdot|_1$
and $n=2$, which has been used in the denoising of 2D bar codes. In this case,
we determine the shape of a minimizer $u$ when $f$ is the characteristic
function of a circle.
| 0 | 0 | 1 | 0 | 0 | 0 |
Partial regularity of weak solutions and life-span of smooth solutions to a biological network formulation model | In this paper we first study partial regularity of weak solutions to the
initial boundary value problem for the system
$-\mbox{div}\left[(I+\mathbf{m}\otimes \mathbf{m})\nabla p\right]=S(x),\ \
\partial_t\mathbf{m}-D^2\Delta \mathbf{m}-E^2(\mathbf{m}\cdot\nabla p)\nabla
p+|\mathbf{m}|^{2(\gamma-1)}\mathbf{m}=0$, where $S(x)$ is a given function and
$D, E, \gamma$ are given numbers. This problem has been proposed as a PDE model
for biological transportation networks. Mathematically, it seems to have a
connection to a conjecture by De Giorgi \cite{DE}. Then we investigate the
life-span of classical solutions. Our results show that local existence of a
classical solution can always be obtained and the life-span of such a solution
can be extended as far away as one wishes as long as the term $\|{\bf
m}(x,0)\|_{\infty, \Omega}+\|S(x)\|_{\frac{2N}{3}, \Omega}$ is made suitably
small, where $N$ is the space dimension and $\|\cdot\|_{q,\Omega}$ denotes the
norm in $L^q(\Omega)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Distributed control of vehicle strings under finite-time and safety specifications | This paper studies an optimal control problem for a string of vehicles with
safety requirements and finite-time specifications on the approach time to a
target region. Our problem formulation is motivated by scenarios involving
autonomous vehicles circulating on arterial roads with intelligent management
at traffic intersections. We propose a provably correct distributed control
algorithm that ensures that the vehicles satisfy the finite-time specifications
under speed limits, acceleration saturation, and safety requirements. The
safety specifications are such that collisions can be avoided even in cases of
communication failure. We also discuss how the proposed distributed algorithm
can be integrated with an intelligent intersection manager to provide
information about the feasible approach times of the vehicle string and a
guaranteed bound of its time of occupancy of the intersection. Our simulation
study illustrates the algorithm and its properties regarding approach time,
occupancy time, and fuel and time cost.
| 1 | 0 | 1 | 0 | 0 | 0 |
Random Feature-based Online Multi-kernel Learning in Environments with Unknown Dynamics | Kernel-based methods exhibit well-documented performance in various nonlinear
learning tasks. Most of them rely on a preselected kernel, whose prudent choice
presumes task-specific prior information. Especially when the latter is not
available, multi-kernel learning has gained popularity thanks to its
flexibility in choosing kernels from a prescribed kernel dictionary. Leveraging
the random feature approximation and its recent orthogonality-promoting
variant, the present contribution develops a scalable multi-kernel learning
scheme (termed Raker) to obtain the sought nonlinear learning function `on the
fly,' first for static environments. To further boost performance in dynamic
environments, an adaptive multi-kernel learning scheme (termed AdaRaker) is
developed. AdaRaker accounts not only for data-driven learning of kernel
combination, but also for the unknown dynamics. Performance is analyzed in
terms of both static and dynamic regrets. AdaRaker is uniquely capable of
tracking nonlinear learning functions in environments with unknown dynamics,
and with analytic performance guarantees. Tests with synthetic and real
datasets are carried out to showcase the effectiveness of the novel algorithms.
| 1 | 0 | 0 | 1 | 0 | 0 |
Faster Convergence & Generalization in DNNs | Deep neural networks have gained tremendous popularity in the last few years.
They have been applied for the task of classification in almost every domain.
Despite their success, deep networks can be incredibly slow to train for even
moderate-sized models on sufficiently large datasets. Additionally, these
networks require large amounts of data to be able to generalize. The importance
of speeding up convergence and generalization in deep networks cannot be
overstated. In this work, we develop an optimization algorithm based on
generalized-optimal updates derived from minibatches that lead to faster
convergence. Towards the end, we demonstrate on two benchmark datasets that the
proposed method achieves two orders of magnitude speed up over traditional
back-propagation, and is more robust to noise/over-fitting.
| 0 | 0 | 0 | 1 | 0 | 0 |
Regular Sequences from Determinantal Conditions | In this paper we construct some regular sequences which arise naturally from
determinantal conditions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Nonlinear Zeeman effect, line shapes and optical pumping in electromagnetically induced transparency | We perform Zeeman spectroscopy on a Rydberg electromagnetically induced
transparency (EIT) system in a room-temperature Cs vapor cell, in magnetic
fields up to 50~Gauss and for several polarization configurations. The magnetic
interactions of the $\vert 6S_{1/2}, F_g=4 \rangle$ ground, $\vert 6P_{3/2},
F_e=5 \rangle$ intermediate, and $\vert 33S_{1/2} \rangle$ Rydberg states that
form the ladder-type EIT system are in the linear Zeeman, quadratic Zeeman, and
the deep hyperfine Paschen-Back regimes, respectively. Starting in magnetic
fields of about 5~Gauss, the spectra develop an asymmetry that becomes
paramount in fields $\gtrsim40$~Gauss. We use a quantum Monte Carlo
wave-function approach to quantitatively model the spectra. Simulated spectra
are in good agreement with experimental data. The asymmetry in the spectra is,
in part, due to level shifts caused by the quadratic Zeeman effect, but it also
reflects the complicated interplay between optical pumping and EIT in the
magnetic field. Relevance to measurement applications is discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
On gradient regularizers for MMD GANs | We propose a principled method for gradient-based regularization of the
critic of GAN-like models trained by adversarially optimizing the kernel of a
Maximum Mean Discrepancy (MMD). We show that controlling the gradient of the
critic is vital to having a sensible loss function, and devise a method to
enforce exact, analytical gradient constraints at no additional cost compared
to existing approximate techniques based on additive regularizers. The new loss
function is provably continuous, and experiments show that it stabilizes and
accelerates training, giving image generation models that outperform
state-of-the-art methods on $160 \times 160$ CelebA and $64 \times 64$
unconditional ImageNet.
| 0 | 0 | 0 | 1 | 0 | 0 |
Backprop with Approximate Activations for Memory-efficient Network Training | Larger and deeper neural network architectures deliver improved accuracy on a
variety of tasks, but also require a large amount of memory for training to
store intermediate activations for back-propagation. We introduce an
approximation strategy to significantly reduce this memory footprint, with
minimal effect on training performance and negligible computational cost. Our
method replaces intermediate activations with lower-precision approximations to
free up memory, after the full-precision versions have been used for
computation in subsequent layers in the forward pass. Only these approximate
activations are retained for use in the backward pass. Compared to naive
low-precision computation, our approach limits the accumulation of errors
across layers and allows the use of much lower-precision approximations without
affecting training accuracy. Experiments on CIFAR and ImageNet show that our
method yields performance comparable to full-precision training, while storing
activations at a fraction of the memory cost with 8- and even 4-bit fixed-point
precision.
| 1 | 0 | 0 | 1 | 0 | 0 |
Testing SPARUS II AUV, an open platform for industrial, scientific and academic applications | This paper describes the experience of preparing and testing the SPARUS II
AUV in different applications. The AUV was designed as a lightweight vehicle
combining the classical torpedo-shape features with the hovering capability.
The robot has a payload area to allow the integration of different equipment
depending on the application. The software architecture is based on ROS, an
open framework that allows an easy integration of many devices and systems. Its
flexibility, easy operation and openness makes the SPARUS II AUV a multipurpose
platform that can adapt to industrial, scientific and academic applications.
Five units were developed in 2014, and different teams used and adapted the
platform for different applications. The paper describes some of the
experiences in preparing and testing this open platform for these different
applications.
| 1 | 0 | 0 | 0 | 0 | 0 |