title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0-1) | phy (int64, 0-1) | math (int64, 0-1) | stat (int64, 0-1) | quantitative biology (int64, 0-1) | quantitative finance (int64, 0-1) |
---|---|---|---|---|---|---|---|
Properties of Quasi-Assouad dimension | It is shown that for controlled Moran constructions in $\mathbb{R}$,
including the (sub) self-similar and more generally, (sub) self-conformal sets,
the quasi-Assouad dimension coincides with the upper box dimension. This can be
extended to some special classes of self-similar sets in higher dimensions. The
connections between quasi-Assouad dimension and tangents are studied. We show
that sets with decreasing gaps have quasi-Assouad dimension $0$ or $1$ and we
exhibit an example of a set in the plane whose quasi-Assouad dimension is
smaller than that of its projection onto the $x$-axis, showing that
quasi-Assouad dimension may increase under Lipschitz mappings.
| 0 | 0 | 1 | 0 | 0 | 0 |
PFAx: Predictable Feature Analysis to Perform Control | Predictable Feature Analysis (PFA) (Richthofer, Wiskott, ICMLA 2015) is an
algorithm that performs dimensionality reduction on a high-dimensional input
signal. It extracts those subsignals that are most predictable according to a
certain prediction model. We refer to these extracted signals as predictable
features.
In this work we extend PFA to take supplementary information into account for
improving its predictions. Such information can be a multidimensional signal
like the main input to PFA, but it is regarded as external: it does not
participate in the feature extraction, i.e., no features are extracted from or
composed of it. Features are extracted exclusively from the main input such
that they are most predictable based on themselves and the supplementary
information. We refer to this enhanced PFA as PFAx (PFA extended).
Even more important than improving prediction quality is observing the effect
of the supplementary information on feature selection. PFAx transparently
provides insight into how the supplementary information adds to prediction
quality and whether it is valuable at all. Finally, we show how to invert that
relation and generate supplementary information that would yield a certain
desired outcome of the main signal.
We apply this to a setting inspired by reinforcement learning and let the
algorithm learn how to control an agent in an environment. With this method it
is feasible to locally optimize the agent's state, i.e., to reach a goal that
is near enough. We are preparing a follow-up paper that extends this method so
that global optimization also becomes feasible.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multiplex decomposition of non-Markovian dynamics and the hidden layer reconstruction problem | Elements composing complex systems usually interact in several different ways
and as such the interaction architecture is well modelled by a multiplex
network. However, this architecture is often hidden, as one usually only has
experimental access to an aggregated projection. A fundamental challenge is
thus to determine whether the hidden underlying architecture of complex systems
is better modelled as a single interaction layer or results from the
aggregation and interplay of multiple layers. Here we show that using local
information provided by a random walker navigating the aggregated network one
can robustly decide whether the underlying structure is a multiplex and, in
the former case, determine the most probable number of hidden
layers. As a byproduct, we show that the mathematical formalism also provides a
principled solution for the optimal decomposition and projection of complex,
non-Markovian dynamics into a Markov switching combination of diffusive modes.
We validate the proposed methodology with numerical simulations of both (i)
random walks navigating hidden multiplex networks (thereby reconstructing the
true hidden architecture) and (ii) Markovian and non-Markovian continuous
stochastic processes (thereby reconstructing an effective multiplex
decomposition where each layer accounts for a different diffusive mode). We
also state and prove two existence theorems guaranteeing that an exact
reconstruction of the dynamics in terms of these hidden jump-Markov models is
always possible for arbitrary finite-order Markovian and fully non-Markovian
processes. Finally, we showcase the applicability of the method to experimental
recordings from (i) the mobility dynamics of human players in an online
multiplayer game and (ii) the dynamics of RNA polymerases at the
single-molecule level.
| 0 | 1 | 0 | 1 | 0 | 0 |
Model of knowledge transfer within an organisation | Many studies show that the acquisition of knowledge is the key to building a
competitive advantage for companies. We propose a simple model of knowledge
transfer within the organisation and implement it using the cellular automata
technique. In this paper the organisation is considered in the context of
complex systems; in this perspective, the main role in the organisation is
played by the network of informal contacts and distributed leadership. The
goal of this paper is to determine which factors influence the efficiency and
effectiveness of knowledge transfer. Our studies indicate a significant role
of the initial concentration of chunks of knowledge in the knowledge transfer
process, and the results suggest taking action in the organisation to shorten
the (social) distance between people with different levels of knowledge, or
working out incentives to share knowledge.
| 1 | 1 | 0 | 0 | 0 | 0 |
Efficient Propagation of Uncertainties in Manufacturing Supply Chains: Time Buckets, L-leap and Multilevel Monte Carlo | Uncertainty propagation of large scale discrete supply chains can be
prohibitive when a large number of events occur during the simulated period and
discrete event simulations (DES) are costly. We present a time bucket method to
approximate and accelerate the DES of supply chains. Its stochastic version,
which we call the L(logistic)-leap method, can be viewed as an extension of the
leap methods, e.g., tau-leap, D-leap, developed in the chemical engineering
community for the acceleration of stochastic DES of chemical reactions. The
L-leap method instantaneously updates the system state vector at discrete time
points and the production rates and policies of a supply chain are assumed to
be stationary during each time bucket. We propose to use Multilevel Monte Carlo
(MLMC) to efficiently propagate the uncertainties in a supply chain network,
where the levels are naturally defined by the sizes of the time buckets of the
simulations. We demonstrate the efficiency and accuracy of our methods using
four numerical examples derived from a real world manufacturing material flow.
In these examples, our multilevel L-leap approach can be faster than the
standard Monte Carlo (MC) method by one or two orders of magnitude without
compromising accuracy.
| 0 | 0 | 0 | 1 | 0 | 0 |
Semi-automated Signal Surveying Using Smartphones and Floorplans | Location fingerprinting locates devices by matching signal observations
against a pre-defined signal map. This paper introduces a technique to
enable fast signal map creation given a dedicated surveyor with a smartphone
and floorplan. Our technique (PFSurvey) uses accelerometer, gyroscope and
magnetometer data to estimate the surveyor's trajectory post-hoc using
Simultaneous Localisation and Mapping and particle filtering to incorporate a
building floorplan. We demonstrate that conventional methods can fail to
recover the survey path robustly and determine the room unambiguously. To counter this we
use a novel loop closure detection method based on magnetic field signals and
propose to incorporate the magnetic loop closures and straight-line constraints
into the filtering process to ensure robust trajectory recovery. We show this
allows room ambiguities to be resolved.
An entire building can be surveyed by the proposed system in minutes rather
than days. We evaluate in a large office space and compare to state-of-the-art
approaches. We achieve trajectories within 1.1 m of the ground truth 90% of the
time. The output signal maps closely approximate those built from a
conventional, laborious manual survey. We also demonstrate that the signal maps built by
PFSurvey provide similar or even better positioning performance than the manual
signal maps.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the n-th row of the graded Betti table of an n-dimensional toric variety | We prove an explicit formula for the first non-zero entry in the n-th row of
the graded Betti table of an n-dimensional projective toric variety associated
to a normal polytope with at least one interior lattice point. This applies to
Veronese embeddings of projective space where we prove a special case of a
conjecture of Ein and Lazarsfeld. We also prove an explicit formula for the
entire n-th row when the interior of the polytope is one-dimensional. All
results are valid over an arbitrary field k.
| 0 | 0 | 1 | 0 | 0 | 0 |
Topological conjugacy of topological Markov shifts and Ruelle algebras | We will characterize topologically conjugate two-sided topological Markov
shifts $(\bar{X}_A,\bar{\sigma}_A)$ in terms of the associated asymptotic
Ruelle $C^*$-algebras ${\mathcal{R}}_A$ together with their commutative $C^*$-subalgebras
$C(\bar{X}_A)$ and the canonical circle actions. We will also show that
extended Ruelle algebras ${\widetilde{\mathcal{R}}}_A$, which are purely
infinite versions of the asymptotic Ruelle algebras, together with their commutative
$C^*$-subalgebras $C(\bar{X}_A)$ and the canonical torus actions $\gamma^A$ are
complete invariants for topological conjugacy of two-sided topological Markov
shifts. We then have a computable topological conjugacy invariant, written in
terms of the underlying matrix, of a two-sided topological Markov shift by
using K-theory of the extended Ruelle algebra. The diagonal action of
$\gamma^A$ has a unique KMS-state on ${\widetilde{\mathcal{R}}}_A$, which is an
extension of the Parry measure on $\bar{X}_A$.
| 0 | 0 | 1 | 0 | 0 | 0 |
StealthDB: a Scalable Encrypted Database with Full SQL Query Support | Encrypted database systems provide a great method for protecting sensitive
data in untrusted infrastructures. These systems are built using either
special-purpose cryptographic algorithms that support operations over encrypted
data, or by leveraging trusted computing co-processors. Strong cryptographic
algorithms usually result in high performance overheads (e.g., public-key
encryptions, garbled circuits), while weaker algorithms (e.g., order-preserving
encryption) result in large leakage profiles. On the other hand, some encrypted
database systems (e.g., Cipherbase, TrustedDB) leverage non-standard trusted
computing devices, and are designed to work around their specific architectural
limitations.
In this work we build StealthDB -- an encrypted database system from Intel
SGX. Our system can run on any newer generation Intel CPU. StealthDB has a very
small trusted computing base, scales to large datasets, requires no DBMS
changes, and provides strong security guarantees at steady state and during
query execution.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the conformal duality between constant mean curvature surfaces in $\mathbb{E}(κ,τ)$ and $\mathbb{L}(κ,τ)$ | The main aim of this survey paper is to gather together some results
concerning the Calabi type duality discovered by Hojoo Lee between certain
families of (spacelike) graphs with constant mean curvature in Riemannian and
Lorentzian homogeneous 3-manifolds with isometry group of dimension 4. The
duality is conformal and swaps mean curvature and bundle curvature, and we will
revisit it by giving a more general statement in terms of conformal immersions.
This will show that some features in the theory of surfaces with mean curvature
$\frac{1}{2}$ in $\mathbb{H}^2\times\mathbb{R}$ or minimal surfaces in the
Heisenberg space have nice geometric interpretations in terms of their dual
Lorentzian counterparts. We will briefly discuss some applications such as
gradient estimates for entire minimal graphs in Heisenberg space or the
existence of complete spacelike surfaces, and we will also give a uniform
treatment to the behavior of the duality with respect to ambient isometries.
Finally, some open questions are posed in the last section.
| 0 | 0 | 1 | 0 | 0 | 0 |
Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks | Contact-rich manipulation tasks in unstructured environments often require
both haptic and visual feedback. However, it is non-trivial to manually design
a robot controller that combines modalities with very different
characteristics. While deep reinforcement learning has shown success in
learning control policies for high-dimensional inputs, these algorithms are
generally intractable to deploy on real robots due to sample complexity. We use
self-supervision to learn a compact and multimodal representation of our
sensory inputs, which can then be used to improve the sample efficiency of our
policy learning. We evaluate our method on a peg insertion task, generalizing
over different geometries, configurations, and clearances, while being robust to
external perturbations. Results for simulated and real robot experiments are
presented.
| 1 | 0 | 0 | 0 | 0 | 0 |
Positive solutions for nonlinear problems involving the one-dimensional ϕ-Laplacian | Let $\Omega:=\left( a,b\right) \subset\mathbb{R}$, $m\in L^{1}\left(
\Omega\right) $ and $\lambda>0$ be a real parameter. Let $\mathcal{L}$ be the
differential operator given by $\mathcal{L}u:=-\phi\left( u^{\prime}\right)
^{\prime}+r\left( x\right) \phi\left( u\right) $, where $\phi
:\mathbb{R\rightarrow R}$ is an odd increasing homeomorphism and $0\leq r\in
L^{1}\left( \Omega\right) $. We study the existence of positive solutions for
problems of the form $\mathcal{L}u=\lambda m\left( x\right) f\left( u\right)$
in $\Omega,$ $u=0$ on $\partial\Omega$, where $f:\left[ 0,\infty\right)
\rightarrow\left[ 0,\infty\right) $ is a continuous function which is, roughly
speaking, sublinear with respect to $\phi$. Our approach combines the sub and
supersolution method with some estimates on related nonlinear problems. We
point out that our results are new even in the cases $r\equiv0$ and/or
$m\geq0$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Vortex lattices in binary Bose-Einstein condensates with dipole-dipole interactions | We study the structure and stability of vortex lattices in two-component
rotating Bose-Einstein condensates with intrinsic dipole-dipole interactions
(DDIs) and contact interactions. To address experimentally accessible coupled
systems, we consider $^{164}$Dy-$^{162}$Dy and $^{168}$Er-$^{164}$Dy mixtures,
which feature different miscibilities. The corresponding dipole moments are
$\mu_{\mathrm{Dy}}=10\mu_{\mathrm{B}}$ and $\mu_{\mathrm{Er}}=
7\mu_{\mathrm{B}}$, where $\mu_{\mathrm{B}}$ is the Bohr magneton. For
comparison, we also discuss a case where one of the species is non-dipolar.
Under a large aspect ratio of the trap, we consider mixtures in the
pancake-shaped format, which are modeled by effective two-dimensional coupled
Gross-Pitaevskii equations, with a fixed polarization of the magnetic dipoles.
Then, the miscibility and vortex-lattice structures are studied, by varying the
coefficients of the contact interactions (assuming the use of the
Feshbach-resonance mechanism) and the rotation frequency. We present phase
diagrams for several types of lattices in the parameter plane of the rotation
frequency and ratio of inter- and intra-species scattering lengths. The vortex
structures are found to be diverse for the more miscible $^{164}$Dy-$^{162}$Dy
mixture, with a variety of shapes, whereas, for the less miscible case of
$^{168}$Er-$^{164}$Dy, the lattice patterns mainly feature circular or square
formats.
| 0 | 1 | 0 | 0 | 0 | 0 |
Regression estimator for the tail index | Estimating the tail index parameter is one of the primal objectives in
extreme value theory. For heavy-tailed distributions the Hill estimator is the
most popular way to estimate the tail index parameter. Recent works have aimed
to improve the Hill estimator with different methods, for example by using the
bootstrap or the Kolmogorov-Smirnov metric. These methods are asymptotically
consistent, but for tail index $\xi >1$ the estimates fail to approach the
theoretical value for realistic sample sizes. In this
paper, we introduce a new empirical method, which can estimate high tail index
parameters well and might also be useful for relatively small sample sizes.
| 0 | 0 | 1 | 1 | 0 | 0 |
Active modulation of electromagnetically induced transparency analogue in terahertz hybrid metal-graphene metamaterials | Metamaterial analogues of electromagnetically induced transparency (EIT) have
been intensively studied and widely employed for slow light and enhanced
nonlinear effects. In particular, the active modulation of the EIT analogue and
well-controlled group delay in metamaterials have shown great prospects in
optical communication networks. Previous studies have focused on the optical
control of the EIT analogue by integrating the photoactive materials into the
unit cell; however, the response time is limited by the recovery time of the
excited carriers in these bulk materials. Graphene has recently emerged as an
exceptional optoelectronic material. It shows an ultrafast relaxation time on
the order of picosecond and its conductivity can be tuned via manipulating the
Fermi energy. Here we integrate a monolayer graphene into metal-based terahertz
(THz) metamaterials, and realize a complete modulation in the resonance
strength of the EIT analogue at the accessible Fermi energy. The physical
mechanism lies in actively tuning the damping rate of the dark-mode resonator
through the recombination effect of the conductive graphene. Note that the
monolayer morphology in our work is easier to fabricate and manipulate than
isolated patterns. This work presents a novel modulation strategy for the EIT
analogue in hybrid metamaterials, and paves the way towards designing very
compact slow-light devices to meet the future demand for ultrafast optical signal
processing.
| 0 | 1 | 0 | 0 | 0 | 0 |
The continuity equation with cusp singularities | In this paper we study a special case of the completion of cusp
Kähler-Einstein metric on the regular part of varieties by using the
continuity method proposed by La Nave and Tian. The differential geometric and
algebro-geometric properties of the noncollapsing limit in the continuity
method with cusp singularities will be investigated.
| 0 | 0 | 1 | 0 | 0 | 0 |
Simplicial Structures for Higher Order Hochschild Homology over the $d$-Sphere | For $d\geq1$, we study the simplicial structure of the chain complex
associated to the higher order Hochschild homology over the $d$-sphere. We
discuss $H_\bullet^{S^d}(A,M)$ by way of a bar-like resolution
$\mathcal{B}^d(A)$ in the context of simplicial modules. Besides the general
case, we give explicit detail corresponding to $S^3$. We also present a
description of what can replace these bar-like resolutions in order to aid with
computation. The cohomology version can be done following a similar
construction, of which we make mention.
| 0 | 0 | 1 | 0 | 0 | 0 |
Destination-Directed Trajectory Modeling and Prediction Using Conditionally Markov Sequences | In some problems there is information about the destination of a moving
object. An example is an airliner flying from an origin to a destination. Such
problems have three main components: an origin, a destination, and motion in
between. To emphasize that the motion trajectories end up at the destination,
we call them \textit{destination-directed trajectories}. The Markov sequence is
not flexible enough to model such trajectories. Given an initial density and an
evolution law, the future of a Markov sequence is determined probabilistically.
One class of conditionally Markov (CM) sequences, called the $CM_L$ sequence
(including the Markov sequence as a special case), has the following main
components: a joint endpoint density (i.e., an initial density and a final
density conditioned on the initial) and a Markov-like evolution law. This paper
proposes using the $CM_L$ sequence for modeling destination-directed
trajectories. It is demonstrated how the $CM_L$ sequence enjoys several
desirable properties for destination-directed trajectory modeling. Some
simulations of trajectory modeling and prediction are presented for
illustration.
| 1 | 0 | 0 | 0 | 0 | 0 |
Monadic Second Order Logic with Measure and Category Quantifiers | We investigate the extension of Monadic Second Order logic, interpreted over
infinite words and trees, with generalized "for almost all" quantifiers
interpreted using the notions of Baire category and Lebesgue measure.
| 1 | 0 | 1 | 0 | 0 | 0 |
Two-point correlation in wall turbulence according to the attached-eddy hypothesis | For the constant-stress layer of wall turbulence, two-point correlations of
velocity fluctuations are studied theoretically by using the attached-eddy
hypothesis, i.e., a phenomenological model of a random superposition of
energy-containing eddies that are attached to the wall. While the previous
studies had invoked additional assumptions, we focus on the minimum assumptions
of the hypothesis to derive its most general forms of the correlation
functions. They would allow us to use or assess the hypothesis without any
effect of those additional assumptions. We also study the energy spectra and
the two-point correlations of the rate of momentum transfer and of the rate of
energy dissipation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Variational problems with long-range interaction | We consider a class of variational problems for densities that repel each
other at a distance. Typical examples are given by the Dirichlet functional and
the Rayleigh functional \[ D(\mathbf{u}) = \sum_{i=1}^k \int_{\Omega} |\nabla u_i|^2 \quad \text{or} \quad
R(\mathbf{u}) = \sum_{i=1}^k \frac{\int_{\Omega} |\nabla u_i|^2}{\int_{\Omega} u_i^2} \] minimized in the class of
$H^1(\Omega,\mathbb{R}^k)$ functions attaining some boundary conditions on
$\partial \Omega$, and subjected to the constraint \[
\mathrm{dist} (\{u_i > 0\}, \{u_j > 0\}) \ge 1 \qquad \forall i \neq j. \]
For these problems, we investigate the optimal regularity of the solutions,
prove a free-boundary condition, and derive some preliminary results
characterizing the free boundary $\partial \{\sum_{i=1}^k u_i > 0\}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Breast Cancer Detection: An Application of Machine Learning Algorithms on the Wisconsin Diagnostic Dataset | This paper presents a comparison of six machine learning (ML) algorithms:
GRU-SVM (Agarap, 2017), Linear Regression, Multilayer Perceptron (MLP), Nearest
Neighbor (NN) search, Softmax Regression, and Support Vector Machine (SVM) on
the Wisconsin Diagnostic Breast Cancer (WDBC) dataset (Wolberg, Street, &
Mangasarian, 1992) by measuring their classification test accuracy and their
sensitivity and specificity values. The said dataset consists of features which
were computed from digitized images of FNA tests on a breast mass (Wolberg,
Street, & Mangasarian, 1992). For the implementation of the ML algorithms, the
dataset was partitioned in the following fashion: 70% for the training phase and
30% for the testing phase. The hyper-parameters used for all the classifiers
were manually assigned. Results show that all the presented ML algorithms
performed well (all exceeded 90% test accuracy) on the classification task. The
MLP algorithm stands out among the implemented algorithms with a test accuracy
of ~99.04%.
| 1 | 0 | 0 | 1 | 0 | 0 |
Loss Landscapes of Regularized Linear Autoencoders | Autoencoders are a deep learning model for representation learning. When
trained to minimize the Euclidean distance between the data and its
reconstruction, linear autoencoders (LAEs) learn the subspace spanned by the
top principal directions but cannot learn the principal directions themselves.
In this paper, we prove that $L_2$-regularized LAEs learn the principal
directions as the left singular vectors of the decoder, providing an extremely
simple and scalable algorithm for rank-$k$ SVD. More generally, we consider
LAEs with (i) no regularization, (ii) regularization of the composition of the
encoder and decoder, and (iii) regularization of the encoder and decoder
separately. We relate the minimum of (iii) to the MAP estimate of probabilistic
PCA and show that for all critical points the encoder and decoder are
transposes. Building on topological intuition, we smoothly parameterize the
critical manifolds for all three losses via a novel unified framework and
illustrate these results empirically. Overall, this work clarifies the
relationship between autoencoders and Bayesian models and between
regularization and orthogonality.
| 1 | 0 | 0 | 1 | 0 | 0 |
Enduring Lagrangian coherence of a Loop Current ring assessed using independent observations | Ocean flows are routinely inferred from low-resolution satellite altimetry
measurements of sea surface height assuming a geostrophic balance. Recent
nonlinear dynamical systems techniques have revealed that surface currents
derived from altimetry can support mesoscale eddies with material boundaries
that do not filament for many months, thereby representing effective transport
mechanisms. However, the long-range Lagrangian coherence assessed for mesoscale
eddy boundaries detected from altimetry is constrained by the inability of
current altimeters to resolve ageostrophic submesoscale motions. These may act
to prevent Lagrangian coherence from manifesting in the rigorous form described
by the nonlinear dynamical systems theories. Here we use a combination of
satellite ocean color and surface drifter trajectory data, rarely available
simultaneously over an extended period of time, to provide observational
evidence for the enduring Lagrangian coherence of a Loop Current ring detected
from altimetry. We also seek indications of this behavior in the flow produced
by a data-assimilative system which has demonstrated the ability to reproduce observed
relative dispersion statistics down into the marginally submesoscale range.
However, the simulated flow, total surface and subsurface or subsampled
emulating altimetry, is not found to support the long-lasting Lagrangian
coherence that characterizes the observed ring. This highlights the importance
of the Lagrangian metrics produced by the nonlinear dynamical systems tools
employed here in assessing model performance.
| 0 | 1 | 0 | 0 | 0 | 0 |
Nonparametric inference for continuous-time event counting and link-based dynamic network models | A flexible approach for modeling both dynamic event counting and dynamic
link-based networks based on counting processes is proposed, and estimation in
these models is studied. We consider nonparametric likelihood based estimation
of parameter functions via kernel smoothing. The asymptotic behavior of these
estimators is rigorously analyzed by allowing the number of nodes to tend to
infinity. The finite sample performance of the estimators is illustrated
through an empirical analysis of bike share data.
| 0 | 0 | 1 | 1 | 0 | 0 |
Optimal Pricing-Based Edge Computing Resource Management in Mobile Blockchain | Mining, the core operation of blockchain, requires solving a proof-of-work
puzzle, which is resource-expensive to implement on mobile devices due to the
high computing power needed. This restricts the development of blockchain in
mobile applications. In this paper, we consider edge computing as
the network enabler for mobile blockchain. In particular, we study optimal
pricing-based edge computing resource management to support mobile blockchain
applications where the mining process can be offloaded to an Edge computing
Service Provider (ESP). We adopt a two-stage Stackelberg game to jointly
maximize the profit of the ESP and the individual utilities of different
miners. In Stage I, the ESP sets the price of edge computing services. In Stage
II, the miners decide on the service demand to purchase based on the observed
prices. We apply the backward induction to analyze the sub-game perfect
equilibrium in each stage for uniform and discriminatory pricing schemes.
Further, the existence and uniqueness of the Stackelberg equilibrium are
validated for both pricing schemes. Finally, the performance evaluation shows that the ESP
intends to set the maximum possible value as the optimal price for profit
maximization under uniform pricing. In addition, the discriminatory pricing
helps the ESP encourage higher total service demand from miners and achieve
greater profit correspondingly.
| 1 | 0 | 0 | 0 | 0 | 0 |
Automorphic vector bundles with global sections on $G$-${\tt Zip}^{\mathcal Z}$-schemes | A general conjecture is stated on the cone of automorphic vector bundles
admitting nonzero global sections on schemes endowed with a smooth, surjective
morphism to a stack of $G$-zips of connected-Hodge-type; such schemes should
include all Hodge-type Shimura varieties with hyperspecial level. We prove our
conjecture for groups of type $A_1^n$, $C_2$ and $\mathbf F_p$-split groups of
type $A_2$ (this includes all Hilbert-Blumenthal varieties and should also
apply to Siegel modular threefolds and Picard modular surfaces). An example is
given to show that our conjecture can fail for zip data not of
connected-Hodge-type.
| 0 | 0 | 1 | 0 | 0 | 0 |
Composition and decomposition of GANs | In this work, we propose a composition/decomposition framework for
adversarially training generative models on composed data - data where each
sample can be thought of as being constructed from a fixed number of
components. In our framework, samples are generated by sampling components from
component generators and feeding these components to a composition function
which combines them into a "composed sample". This compositional training
approach improves the modularity, extensibility and interpretability of
Generative Adversarial Networks (GANs) - providing a principled way to
incrementally construct complex models out of simpler component models, and
allowing for explicit "division of responsibility" between these components.
Using this framework, we define a family of learning tasks and evaluate their
feasibility on two datasets in two different data modalities (image and text).
Lastly, we derive sufficient conditions such that these compositional
generative models are identifiable. Our work provides a principled approach to
building on pre-trained generative models or for exploiting the compositional
nature of data distributions to train extensible and interpretable models.
| 1 | 0 | 0 | 1 | 0 | 0 |
Quasiconformal mappings and Hölder continuity | We establish that every $K$-quasiconformal mapping $w$ of the unit ball $\IB$
onto a $C^2$-Jordan domain $\Omega$ is Hölder continuous with exponent
$\alpha= 2-\frac{n}{p}$, provided that its weak Laplacean $\Delta w$ is in $
L^p(\IB)$ for some $n/2<p<n$. In particular it is Hölder continuous for every
$0<\alpha<1$ provided that $\Delta w\in L^n(\IB)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Boundedness of averaging operators on geometrically doubling metric spaces | We prove that averaging operators are uniformly bounded on $L^1$ for all
geometrically doubling metric measure spaces, with bounds independent of the
measure. From this result, the $L^1$ convergence of averages as $r \to 0$
immediately follows.
| 0 | 0 | 1 | 0 | 0 | 0 |
Efficient Low-Order Approximation of First-Passage Time Distributions | We consider the problem of computing first-passage time distributions for
reaction processes modelled by master equations. We show that this generally
intractable class of problems is equivalent to a sequential Bayesian inference
problem for an auxiliary observation process. The solution can be approximated
efficiently by solving a closed set of coupled ordinary differential equations
(for the low-order moments of the process) whose size scales with the number of
species. We apply it to an epidemic model and a trimerisation process, and show
good agreement with stochastic simulations.
| 0 | 1 | 0 | 1 | 0 | 0 |
Delayed avalanches in Multi-Pixel Photon Counters | Hamamatsu Photonics introduced a new generation of their Multi-Pixel Photon
Counters in 2013 with a significantly reduced after-pulsing rate. In this paper,
we investigate the causes of after-pulsing by testing pre-2013 and post-2013
devices using laser light ranging from 405 to 820 nm. In doing so, we investigate
the possibility that after-pulsing is also due to optical photons produced in
the avalanche rather than to impurities trapping charge carriers produced in
the avalanches and releasing them at a later time. For pre-2013 devices, we
observe avalanches delayed by a few ns to several hundred ns at 637, 777 and 820 nm,
demonstrating that holes created in the zero-field region of the silicon bulk
can diffuse back to the high-field region, triggering delayed avalanches. On the
other hand, post-2013 devices exhibit no delayed avalanches beyond 100 ns at 777 nm. We
also confirm that post-2013 devices exhibit about 25 times lower after-pulsing.
Taken together, our measurements show that the absorption of photons from the
avalanche in the bulk of the silicon and the subsequent hole diffusion back to
the junction was a significant source of after-pulsing for the pre-2013 devices.
Hamamatsu appears to have fixed this problem in 2013 following the preliminary
release of our results. We also show that even at short wavelengths the timing
distributions exhibit tails in the sub-nanosecond range that may impair the MPPC
timing performance.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
deep neural network that allows selective execution. Given an input, only a
subset of D2NN neurons are executed, and the particular subset is determined by
the D2NN itself. By pruning unnecessary computation depending on input, D2NNs
provide a way to improve computational efficiency. To achieve dynamic selective
execution, a D2NN augments a feed-forward deep neural network (directed acyclic
graph of differentiable modules) with controller modules. Each controller
module is a sub-network whose output is a decision that controls whether other
modules can execute. A D2NN is trained end to end. Both regular and controller
modules in a D2NN are learnable and are jointly trained to optimize both
accuracy and efficiency. Such training is achieved by integrating
backpropagation with reinforcement learning. With extensive experiments of
various D2NN architectures on image classification tasks, we demonstrate that
D2NNs are general and flexible, and can effectively optimize
accuracy-efficiency trade-offs.
| 1 | 0 | 0 | 1 | 0 | 0 |
Linear-Cost Covariance Functions for Gaussian Random Fields | Gaussian random fields (GRF) are a fundamental stochastic model for
spatiotemporal data analysis. An essential ingredient of GRF is the covariance
function that characterizes the joint Gaussian distribution of the field.
Commonly used covariance functions give rise to fully dense and unstructured
covariance matrices, for which required calculations are notoriously expensive
to carry out for large data. In this work, we propose a construction of
covariance functions that result in matrices with a hierarchical structure.
Empowered by matrix algorithms that scale linearly with the matrix dimension,
the hierarchical structure is proved to be efficient for a variety of random
field computations, including sampling, kriging, and likelihood evaluation.
Specifically, with $n$ scattered sites, sampling and likelihood evaluation have
an $O(n)$ cost and kriging has an $O(\log n)$ cost after preprocessing,
particularly favorable for the kriging of an extremely large number of sites
(e.g., predicting on more sites than observed). We present comprehensive
numerical experiments to show the use of the constructed covariance functions
and their appealing computation time. Numerical examples on a laptop include
simulated data of size up to one million, as well as a climate data product
with over two million observations.
| 0 | 0 | 0 | 1 | 0 | 0 |
Models of fault-tolerant distributed computation via dynamic epistemic logic | The computability power of a distributed computing model is determined by the
communication media available to the processes, the timing assumptions about
processes and communication, and the nature of failures that processes can
suffer. In a companion paper we showed how dynamic epistemic logic can be used
to give a formal semantics to a given distributed computing model, to capture
precisely the knowledge needed to solve a distributed task, such as consensus.
Furthermore, by moving to a dual model of epistemic logic defined by simplicial
complexes, topological invariants are exposed, which determine task
solvability. In this paper we show how to extend the setting above to include,
in the knowledge of the processes, knowledge about the model of computation
itself. The extension describes the knowledge processes gain about the current
execution, in problems where processes have no input values at all.
| 1 | 0 | 1 | 0 | 0 | 0 |
Erratum: Higher Order Elicitability and Osband's Principle | This note corrects conditions in Proposition 3.4 and Theorem 5.2(ii) and
comments on imprecisions in Propositions 4.2 and 4.4 in Fissler and Ziegel
(2016).
| 0 | 0 | 1 | 1 | 0 | 1 |
Staging Human-computer Dialogs: An Application of the Futamura Projections | We demonstrate an application of the Futamura Projections to human-computer
interaction, and particularly to staging human-computer dialogs. Specifically,
by providing staging analogs to the classical Futamura Projections, we
demonstrate that the Futamura Projections can be applied to the staging of
human-computer dialogs in addition to the execution of programs.
| 1 | 0 | 0 | 0 | 0 | 0 |
Enhancing synchronization in chaotic oscillators by induced heterogeneity | We report the enhancement of complete synchronization in identical chaotic
oscillators when their interaction is mediated by a mismatched oscillator. The
identical oscillators now interact indirectly through the intermediate relay
oscillator. The induced heterogeneity in the intermediate oscillator plays a
constructive role in reducing the critical coupling for a transition to
complete synchronization. A common lag synchronization emerges between the
mismatched relay oscillator and its neighboring identical oscillators that
leads to this enhancing effect. We present examples of a one-dimensional open
array, a ring, a star network, and a two-dimensional lattice of dynamical
systems to demonstrate how this enhancing effect occurs. The paradigmatic
Rössler oscillator is used as a dynamical unit, in our numerical experiment,
for different networks to reveal the enhancing phenomenon.
| 0 | 1 | 0 | 0 | 0 | 0 |
Compressed Sensing with Deep Image Prior and Learned Regularization | We propose a novel method for compressed sensing recovery using untrained
deep generative models. Our method is based on the recently proposed Deep Image
Prior (DIP), wherein the convolutional weights of the network are optimized to
match the observed measurements. We show that this approach can be applied to
solve any differentiable inverse problem. We also introduce a novel learned
regularization technique which incorporates a small amount of prior
information, further reducing the number of measurements required for a given
reconstruction error. Our algorithm requires approximately 4-6x fewer
measurements than classical Lasso methods. Unlike previous approaches based on
generative models, our method does not require the model to be pre-trained. As
such, we can apply our method to various medical imaging datasets for which
data acquisition is expensive and no known generative models exist.
| 0 | 0 | 0 | 1 | 0 | 0 |
Beamforming and Power Splitting Designs for AN-aided Secure Multi-user MIMO SWIPT Systems | In this paper, an energy harvesting scheme for a multi-user
multiple-input-multiple-output (MIMO) secrecy channel with artificial noise
(AN) transmission is investigated. Joint optimization of the transmit
beamforming matrix, the AN covariance matrix, and the power splitting ratio is
conducted to minimize the transmit power under the target secrecy rate, the
total transmit power, and the harvested energy constraints. The original
problem is shown to be non-convex, which is tackled by a two-layer
decomposition approach. The inner layer problem is solved through semi-definite
relaxation, and the outer problem is shown to be a single-variable
optimization that can be solved by a one-dimensional (1-D) line search.
To reduce computational complexity, a sequential parametric convex
approximation (SPCA) method is proposed to find a near-optimal solution. The
work is then extended to the imperfect channel state information case with
norm-bounded channel errors. Furthermore, the tightness of the relaxation for the
proposed schemes is validated by showing that the optimal solution of the
relaxed problem is rank-one. Simulation results demonstrate that the proposed
SPCA method achieves the same performance as the scheme based on 1-D but with
much lower complexity.
| 1 | 0 | 0 | 0 | 0 | 0 |
Spectral Properties of Tensor Products of Channels | We investigate spectral properties of the tensor products of two quantum
channels defined on matrix algebras. This leads to the important question of
when an arbitrary subalgebra can split into the tensor product of two
subalgebras. We show that for two unital quantum channels $\mathcal{E}_1$ and
$\mathcal{E}_2$ the multiplicative domain of
$\mathcal{E}_1\otimes\mathcal{E}_2$ splits into the tensor product of the
individual multiplicative domains. Consequently, we fully describe the fixed
points and peripheral eigen operators of the tensor product of channels.
Through a structure theorem of maximal unital proper $^*$-subalgebras (MUPSA)
of a matrix algebra, we provide a non-trivial upper bound on the recently
introduced 'multiplicative index' of a unital channel. This bound gives a
criterion for when a channel cannot be factored into a product of two different
channels. We construct examples of channels which cannot be realized as a
tensor product of two channels in any way. With these techniques and results,
we find some applications in quantum error correction.
| 0 | 0 | 1 | 0 | 0 | 0 |
Integral Chow motives of threefolds with $K$-motives of unit type | We prove that if a smooth projective algebraic variety of dimension less than
or equal to three has a unit type integral $K$-motive, then its integral Chow
motive is of Lefschetz type. As a consequence, the integral Chow motive is of
Lefschetz type for a smooth projective variety of dimension less than or equal
to three that admits a full exceptional collection.
| 0 | 0 | 1 | 0 | 0 | 0 |
Efficient Recurrent Neural Networks using Structured Matrices in FPGAs | Recurrent Neural Networks (RNNs) are becoming increasingly important for time
series-related applications which require efficient and real-time
implementations. The recent pruning based work ESE suffers from degradation of
performance/energy efficiency due to the irregular network structure after
pruning. We propose block-circulant matrices for weight matrix representation
in RNNs, thereby achieving simultaneous model compression and acceleration. We
aim to implement RNNs on FPGAs with the highest performance and energy
efficiency, subject to a certain accuracy requirement (negligible accuracy
degradation). Experimental results on actual FPGA deployments show that the proposed
framework achieves a maximum energy efficiency improvement of 35.7$\times$
compared with ESE.
| 1 | 0 | 0 | 1 | 0 | 0 |
Neural Architecture Search: A Survey | Deep Learning has enabled remarkable progress over the last years on a
variety of tasks, such as image recognition, speech recognition, and machine
translation. One crucial aspect of this progress is the design of novel neural
architectures. Currently employed architectures have mostly been developed
manually by human experts, which is a time-consuming and error-prone process.
Because of this, there is growing interest in automated neural architecture
search methods. We provide an overview of existing work in this field of
research and categorize them according to three dimensions: search space,
search strategy, and performance estimation strategy.
| 0 | 0 | 0 | 1 | 0 | 0 |
High Efficiency Power Side-Channel Attack Immunity using Noise Injection in Attenuated Signature Domain | With the advancement of technology in the last few decades, leading to the
widespread availability of miniaturized sensors and internet-connected things
(IoT), security of electronic devices has become a top priority. Side-channel
attack (SCA) is one of the prominent methods to break the security of an
encryption system by exploiting the information leaked from the physical
devices. Correlational power attack (CPA) is an efficient power side-channel
attack technique, which analyses the correlation between the estimated and
measured supply current traces to extract the secret key. The existing
countermeasures to the power attacks are mainly based on reducing the SNR of
the leaked data, or introducing large overhead using techniques like power
balancing. This paper presents an attenuated signature AES (AS-AES), which
resists SCA with minimal noise current overhead. AS-AES uses a shunt
low-drop-out (LDO) regulator to suppress the AES current signature by 400x in
the supply current traces. The shunt LDO has been fabricated and validated in
130 nm CMOS technology. System-level implementation of the AS-AES along with
noise injection, shows that the system remains secure even after 50K
encryptions, with 10x reduction in power overhead compared to that of noise
addition alone.
| 1 | 0 | 0 | 0 | 0 | 0 |
Real-time monitoring of the structure of ultra thin Fe$_3$O$_4$ films during growth on Nb-doped SrTiO$_3$(001) | In this work thin magnetite films were deposited on SrTiO$_3$ via reactive
molecular beam epitaxy at different substrate temperatures. The growth process
was monitored in-situ during deposition by means of x-ray diffraction. While
the magnetite film grown at 400$^\circ$C shows a fully relaxed vertical lattice
constant already in the early growth stages, the film deposited at 270$^\circ$C
exhibits a strong vertical compressive strain and relaxes towards the bulk
value with increasing film thickness. Furthermore, a lateral tensile strain was
observed under these growth conditions although the inverse behavior is
expected due to the lattice mismatch of -7.5%. Additionally, the occupancy of
the A and B sublattices of magnetite with tetrahedral and octahedral sites was
investigated showing a lower occupancy of the A sites compared to an ideal
inverse spinel structure. The occupation of A sites decreases for a higher
growth temperature. Thus, we assume a relocation of the iron ions from
tetrahedral sites to octahedral vacancies forming a deficient rock salt
lattice.
| 0 | 1 | 0 | 0 | 0 | 0 |
Harnessing Flexible and Reliable Demand Response Under Customer Uncertainties | Demand response (DR) is a cost-effective and environmentally friendly
approach for mitigating the uncertainties in renewable energy integration by
taking advantage of the flexibility of customers' demands. However, existing DR
programs suffer from either low participation due to strict commitment
requirements or not being reliable in voluntary programs. In addition, the
capacity planning for energy storage/reserves is traditionally done separately
from the demand response program design, which incurs inefficiencies. Moreover,
customers often face high uncertainties in their costs in providing demand
response, which is not well studied in the literature.
This paper first models the problem of joint capacity planning and demand
response program design by a stochastic optimization problem, which
incorporates the uncertainties from renewable energy generation, customer power
demands, as well as the customers' costs in providing DR. We propose online DR
control policies based on the optimal structures of the offline solution. A
distributed algorithm is then developed for implementing the control policies
without efficiency loss. We further offer enhanced policy design by allowing
flexibility in the commitment level. We perform numerical simulations based on
real-world traces. Results demonstrate that the proposed algorithms can
achieve near optimal social costs, and significant social cost savings compared
to baseline methods.
| 0 | 0 | 1 | 0 | 0 | 0 |
GraphGAN: Graph Representation Learning with Generative Adversarial Nets | The goal of graph representation learning is to embed each vertex in a graph
into a low-dimensional vector space. Existing graph representation learning
methods can be classified into two categories: generative models that learn the
underlying connectivity distribution in the graph, and discriminative models
that predict the probability of edge existence between a pair of vertices. In
this paper, we propose GraphGAN, an innovative graph representation learning
framework unifying the above two classes of methods, in which the generative model
and discriminative model play a game-theoretical minimax game. Specifically,
for a given vertex, the generative model tries to fit its underlying true
connectivity distribution over all other vertices and produces "fake" samples
to fool the discriminative model, while the discriminative model tries to
detect whether the sampled vertex is from ground truth or generated by the
generative model. With the competition between these two models, both of them
can alternately and iteratively boost their performance. Moreover, when
considering the implementation of generative model, we propose a novel graph
softmax to overcome the limitations of the traditional softmax function, which
can be proven to satisfy the desirable properties of normalization, graph structure
awareness, and computational efficiency. Through extensive experiments on
real-world datasets, we demonstrate that GraphGAN achieves substantial gains in
a variety of applications, including link prediction, node classification, and
recommendation, over state-of-the-art baselines.
| 1 | 0 | 0 | 1 | 0 | 0 |
Accelerating Stochastic Gradient Descent For Least Squares Regression | There is widespread sentiment that it is not possible to effectively utilize
fast gradient methods (e.g. Nesterov's acceleration, conjugate gradient, heavy
ball) for the purposes of stochastic optimization due to their instability and
error accumulation, a notion made precise in d'Aspremont 2008 and Devolder,
Glineur, and Nesterov 2014. This work considers these issues for the special
case of stochastic approximation for the least squares regression problem, and
our main result refutes the conventional wisdom by showing that acceleration
can be made robust to statistical errors. In particular, this work introduces
an accelerated stochastic gradient method that provably achieves the minimax
optimal statistical risk faster than stochastic gradient descent. Critical to
the analysis is a sharp characterization of accelerated stochastic gradient
descent as a stochastic process. We hope this characterization gives insights
towards the broader question of designing simple and effective accelerated
stochastic methods for more general convex and non-convex optimization
problems.
| 0 | 0 | 1 | 1 | 0 | 0 |
Topological Kondo insulators in one dimension: Continuous Haldane-type ground-state evolution from the strongly-interacting to the non-interacting limit | We study, by means of the density-matrix renormalization group (DMRG)
technique, the evolution of the ground state in a one-dimensional topological
insulator, from the non-interacting to the strongly-interacting limit, where
the system can be mapped onto a topological Kondo-insulator model. We focus on
a toy model Hamiltonian (i.e., the interacting "$sp$-ladder" model), which
could be experimentally realized in optical lattices with higher orbitals
loaded with ultra-cold fermionic atoms. Our goal is to shed light on the
emergence of the strongly-interacting ground state and its topological
classification as the Hubbard-$U$ interaction parameter of the model is
increased. Our numerical results show that the ground state can be generically
classified as a symmetry-protected topological phase of the Haldane-type, even
in the non-interacting case $U=0$ where the system can be additionally
classified as a time-reversal $\mathbb{Z}_{2}$-topological insulator, and
evolves adiabatically between the non-interacting and strongly interacting
limits.
| 0 | 1 | 0 | 0 | 0 | 0 |
A novel improved fuzzy support vector machine based stock price trend forecast model | This paper studies the application of the fuzzy support vector machine to
stock price forecasting. The support vector machine is a machine learning
method proposed in the 1990s that handles classification and regression
problems very successfully. Due to its excellent learning performance, the
technique has become a hot research topic in the field of machine learning and
has been successfully applied in many fields. However, support vector machines
still have limitations. The objective world contains a large amount of fuzzy
information, and if the training data contain noise and fuzzy information, the
performance of the support vector machine becomes very weak. Because many
complex factors influence stock prices and the predictions of the traditional
support vector machine lack the desired precision, this study proposes an
improved fuzzy prediction algorithm for the support vector machine to increase
the precision of the new model. The NASDAQ and Standard & Poor's (S&P) stock
markets are considered. The proposed methodology is a novel advanced fuzzy
support vector machine (NA-FSVM).
| 0 | 0 | 0 | 1 | 0 | 1 |
Deformed Heisenberg Algebra with a minimal length: Application to some molecular potentials | We review the essentials of the formalism of quantum mechanics based on a
deformed Heisenberg algebra, leading to the existence of a minimal length scale.
In this context, we compute the energy spectra of the pseudoharmonic oscillator
and Kratzer potentials by using a perturbative approach. We derive the
molecular constants, which characterize the vibration--rotation energy levels
of diatomic molecules, and investigate the effect of the minimal length on each
of these parameters for both potentials. We compare our results with experimental
data for the hydrogen molecule to estimate an order of magnitude of this
fundamental scale in molecular physics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Protonation induced high-Tc phases in iron-based superconductors evidenced by NMR and magnetization measurements | Chemical substitution during growth is a well-established method to
manipulate electronic states of quantum materials, and leads to rich spectra of
phase diagrams in cuprate and iron-based superconductors. Here we report a
novel and generic strategy to achieve nonvolatile electron doping in series of
(i.e. 11 and 122 structures) Fe-based superconductors by ionic liquid gating
induced protonation at room temperature. Accumulation of protons in bulk
compounds induces superconductivity in the parent compounds and largely
enhances the Tc in some superconducting ones. Furthermore, the presence of
protons in the lattice enables the first proton nuclear magnetic resonance
(NMR) study to directly probe superconductivity. Using FeS as a model system, our NMR study
reveals an emergent high-Tc phase with no coherence peak which is hard to
measure by NMR with other isotopes. This novel electric-field-induced proton
evolution opens up an avenue for manipulation of competing electronic states
(e.g. Mott insulators), and may provide an innovative way for a broad
perspective of NMR measurements with greatly enhanced detecting resolution.
| 0 | 1 | 0 | 0 | 0 | 0 |
Spectral Estimation of Plasma Fluctuations I: Comparison of Methods | The relative root mean squared errors (RMSE) of nonparametric methods for
spectral estimation are compared for microwave scattering data of plasma
fluctuations. These methods reduce the variance of the periodogram estimate by
averaging the spectrum over a frequency bandwidth. As the bandwidth increases,
the variance decreases, but the bias error increases. The plasma spectra vary
by over four orders of magnitude, and therefore, using a spectral window is
necessary. We compare the smoothed tapered periodogram with the adaptive
multiple taper methods and hybrid methods. We find that a hybrid method, which
uses four orthogonal tapers and then applies a kernel smoother, performs best.
For 300 point data segments, even an optimized smoothed tapered periodogram has
a 24\% larger relative RMSE than the hybrid method. We present two new
adaptive multi-taper weightings which outperform Thomson's original adaptive
weighting.
| 0 | 0 | 0 | 1 | 0 | 0 |
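A minimal sketch of the kind of hybrid estimator described above: a few orthogonal (DPSS) tapers are averaged and the result is kernel-smoothed. The taper count, time-bandwidth product and smoothing width are illustrative choices, not the optimized settings of the study.

```python
# Hybrid spectral estimate: average eigenspectra from 4 Slepian tapers, then smooth.
import numpy as np
from scipy.signal import windows

def hybrid_spectrum(x, fs=1.0, n_tapers=4, nw=2.5, smooth_pts=11):
    n = len(x)
    tapers = windows.dpss(n, NW=nw, Kmax=n_tapers)            # orthogonal Slepian tapers
    eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2 / (fs * n)
    s_multitaper = eigenspectra.mean(axis=0)                  # average over tapers
    kernel = np.hanning(smooth_pts)
    kernel /= kernel.sum()                                    # simple kernel smoother
    s_hybrid = np.convolve(s_multitaper, kernel, mode="same")
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, s_hybrid

# Example on a 300-point segment (the segment length mentioned in the abstract)
rng = np.random.default_rng(0)
t = np.arange(300)
x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(300)
f, s = hybrid_spectrum(x)
```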
Angles between curves in metric measure spaces | The goal of the paper is to study the angle between two curves in the
framework of metric (and metric measure) spaces. More precisely, we give a new
notion of angle between two curves in a metric space. Such a notion has a
natural interplay with optimal transportation and is particularly well suited
for metric measure spaces satisfying the curvature-dimension condition. Indeed
one of the main results is the validity of the cosine formula on $RCD^{*}(K,N)$
metric measure spaces. As a consequence, the new introduced notions are
compatible with the corresponding classical ones for Riemannian manifolds,
Ricci limit spaces and Alexandrov spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Hybrid Multiscale Model for Cancer Invasion of the Extracellular Matrix | The ability to locally degrade the extracellular matrix (ECM) and interact
with the tumour microenvironment is a key process distinguishing cancer from
normal cells, and is a critical step in the metastatic spread of the tumour.
The invasion of the surrounding tissue involves the coordinated action between
cancer cells, the ECM, the matrix degrading enzymes, and the
epithelial-to-mesenchymal transition (EMT). This is a regulatory process
through which epithelial cells (ECs) acquire mesenchymal characteristics and
transform to mesenchymal-like cells (MCs). In this paper, we present a new
mathematical model which describes the transition from a collective invasion
strategy for the ECs to an individual invasion strategy for the MCs. We achieve
this by formulating a coupled hybrid system consisting of partial and
stochastic differential equations that describe the evolution of the ECs and
the MCs, respectively. This approach allows one to reproduce in a very natural
way fundamental qualitative features of the current biomedical understanding of
cancer invasion that are not easily captured by classical modelling approaches,
for example, the invasion of the ECM by self-generated gradients and the
appearance of EC invasion islands outside of the main body of the tumour.
| 0 | 0 | 0 | 0 | 1 | 0 |
Differential quadrature method for space-fractional diffusion equations on 2D irregular domains | In mathematical physics, the space-fractional diffusion equations are of
particular interest in the studies of physical phenomena modelled by Lévy
processes, which are sometimes called super-diffusion equations. In this
article, we develop the differential quadrature (DQ) methods for solving the 2D
space-fractional diffusion equations on irregular domains. The present methods
reduce the original equation to a set of ordinary differential equations (ODEs)
by introducing valid DQ formulations for fractional directional derivatives
based on the functional values at scattered nodal points on the problem domain.
The required weighted coefficients are calculated by using radial basis
functions (RBFs) as trial functions, and the resultant ODEs are discretized by
the Crank-Nicolson scheme. The main advantages of our methods lie in their
flexibility and applicability to arbitrary domains. A series of illustrative
examples is finally provided to support these points.
| 0 | 0 | 1 | 0 | 0 | 0 |
Positivstellensätze for noncommutative rational expressions | We derive some Positivstellensätze for noncommutative rational expressions
from the Positivstellensätze for noncommutative polynomials. Specifically, we
show that if a noncommutative rational expression is positive on a polynomially
convex set, then there is an algebraic certificate witnessing that fact. As in
the case of noncommutative polynomials, our results are nicer when we
additionally assume positivity on a convex set-- that is, we obtain a so-called
"perfect Positivstellensatz" on convex sets.
| 0 | 0 | 1 | 0 | 0 | 0 |
Cohesion-based Online Actor-Critic Reinforcement Learning for mHealth Intervention | In the wake of the vast population of smart device users worldwide, mobile
health (mHealth) technologies hold promise for generating a positive and
wide-ranging influence on people's health. They are able to provide flexible, affordable and
portable health guides to device users. Current online decision-making methods
for mHealth assume that the users are completely heterogeneous. They share no
information among users and learn a separate policy for each user. However,
data for each user are too limited in size to support separate online
learning, leading to unstable policies with large variance. Moreover, we
observe that a user may be similar to some, but not all, users, and that
connected users tend to have similar behaviors. In this paper, we propose a
network cohesion constrained (actor-critic) Reinforcement Learning (RL) method
for mHealth. The goal is to explore how to share information among similar
users to better convert the limited user information into sharper learned
policies. To the best of our knowledge, this is the first online actor-critic
RL for mHealth and the first network-cohesion-constrained (actor-critic) RL
method in any application. Network cohesion is important for deriving effective
policies. We come up with a novel method to learn the network by using the warm
start trajectory, which directly reflects the users' property. The optimization
of our model is difficult and very different from the general supervised
learning due to the indirect observation of values. As a contribution, we
propose two algorithms for the proposed online RLs. Apart from mHealth, the
proposed methods can be easily applied or adapted to other health-related
tasks. Extensive experimental results on the HeartSteps dataset demonstrate
that, across a variety of parameter settings, the two proposed methods obtain
clear improvements over the state-of-the-art methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
The minus order and range additivity | We study the minus order on the algebra of bounded linear operators on a
Hilbert space. By giving a characterization in terms of range additivity, we
show that the intrinsic nature of the minus order is algebraic. Applications to
generalized inverses of the sum of two operators, to systems of operator
equations and to optimization problems are also presented.
| 0 | 0 | 1 | 0 | 0 | 0 |
Rigidity-induced scale invariance in polymer ejection from capsid | While the dynamics of a fully flexible polymer ejecting a capsid through a
nanopore has been extensively studied, the ejection dynamics of semiflexible
polymers has not been properly characterized. Here we report results from
simulations of ejection dynamics of semiflexible polymers ejecting from
spherical capsids. Ejections start from strongly confined polymer conformations
of constant initial monomer density. We find that, unlike for fully flexible
polymers, for semiflexible polymers the force measured at the pore does not
show a direct relation to the instantaneous ejection velocity. The cumulative
waiting time $t(s)$, that is, the time at which monomer $s$ exits the capsid
for the last time, shows a clear change when the polymer rigidity $\kappa$ is
increased. The major part of an ejecting polymer is driven out of the capsid by
internal pressure. At the final stage the polymer escapes the capsid by
diffusion. For the driven part there is a cross-over from essentially
exponential growth of $t$ with $s$ of the fully flexible polymers to a
scale-invariant form. In addition, a clear dependence of $t$ on $N_0$ was
found. These findings combined give the dependence $t(s) \propto N_0^{0.55}
s^{1.33}$ for the strongly rigid polymers. This cross-over in dynamics where
$\kappa$ acts as a control parameter is reminiscent of a phase transition. This
analogy is further enhanced by our finding a perfect data collapse of $t$ for
polymers of different $N_0$ and any constant $\kappa$.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Cloud-based Service for Real-Time Performance Evaluation of NoSQL Databases | We have created a cloud-based service that allows the end users to run tests
on multiple different databases to find which databases are most suitable for
their project. From our research, we could not find another application that
enables the user to test several databases to gauge the difference between
them. This application allows the user to choose which type of test to perform
and which databases to target. The application also displays the results of
different tests that were run by other users previously. There is also a map
that shows where each test was run, giving the user an estimate of its
location. Unlike the orthodox static tests and reports conducted to
evaluate NoSQL databases, we have created a web application to run and analyze
these tests in real time. This web application evaluates the performance of
several NoSQL databases. The databases covered are MongoDB, DynamoDB, CouchDB,
and Firebase. The web service is accessible from: nosqldb.nextproject.ca.
| 1 | 0 | 0 | 0 | 0 | 0 |
On hyperballeans of bounded geometry | A ballean (or coarse structure) is a set endowed with some family of subsets,
the balls, in such a way that balleans with corresponding morphisms can be
considered as asymptotic counterparts of uniform topological spaces. For a
ballean $\mathcal{B}$ on a set $X$, the hyperballean $\mathcal{B}^{\flat}$ is a
ballean naturally defined on the set $X^{\flat}$ of all bounded subsets of $X$.
We describe all balleans with hyperballeans of bounded geometry and analyze the
structure of these hyperballeans.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bayesian model selection consistency and oracle inequality with intractable marginal likelihood | In this article, we investigate large sample properties of model selection
procedures in a general Bayesian framework when a closed form expression of the
marginal likelihood function is not available or a local asymptotic quadratic
approximation of the log-likelihood function does not exist. Under appropriate
identifiability assumptions on the true model, we provide sufficient conditions
for a Bayesian model selection procedure to be consistent and obey the Occam's
razor phenomenon, i.e., the probability of selecting the "smallest" model that
contains the truth tends to one as the sample size goes to infinity. In order
to show that a Bayesian model selection procedure selects the smallest model
containing the truth, we impose a prior anti-concentration condition, requiring
the prior mass assigned by large models to a neighborhood of the truth to be
sufficiently small. In a more general setting where the strong model
identifiability assumption may not hold, we introduce the notion of local
Bayesian complexity and develop oracle inequalities for Bayesian model
selection procedures. Our Bayesian oracle inequality characterizes a trade-off
between the approximation error and a Bayesian characterization of the local
complexity of the model, illustrating the adaptive nature of averaging-based
Bayesian procedures towards achieving an optimal rate of posterior convergence.
Specific applications of the model selection theory are discussed in the
context of high-dimensional nonparametric regression and density regression
where the regression function or the conditional density is assumed to depend
on a fixed subset of predictors. As a result of independent interest, we
propose a general technique for obtaining upper bounds of certain small ball
probability of stationary Gaussian processes.
| 0 | 0 | 1 | 1 | 0 | 0 |
ChimpCheck: Property-Based Randomized Test Generation for Interactive Apps | We consider the problem of generating relevant execution traces to test rich
interactive applications. Rich interactive applications, such as apps on mobile
platforms, are complex stateful and often distributed systems where
sufficiently exercising the app with user-interaction (UI) event sequences to
expose defects is both hard and time-consuming. In particular, there is a
fundamental tension between brute-force random UI exercising tools, which are
fully-automated but offer low relevance, and UI test scripts, which are manual
but offer high relevance. In this paper, we consider a middle way---enabling a
seamless fusion of scripted and randomized UI testing. This fusion is
prototyped in a testing tool called ChimpCheck for programming, generating, and
executing property-based randomized test cases for Android apps. Our approach
realizes this fusion by offering a high-level, embedded domain-specific
language for defining custom generators of simulated user-interaction event
sequences. What follows is a combinator library built on industrial strength
frameworks for property-based testing (ScalaCheck) and Android testing (Android
JUnit and Espresso) to implement property-based randomized testing for Android
development. Driven by real, reported issues in open source Android apps, we
show, through case studies, how ChimpCheck enables expressing effective testing
patterns in a compact manner.
| 1 | 0 | 0 | 0 | 0 | 0 |
Mathematical analysis of pulsatile flow, vortex breakdown and instantaneous blow-up for the axisymmetric Euler equations | The dynamics along the particle trajectories for the 3D axisymmetric Euler
equations are considered. It is shown that if the inflow is rapidly increasing
(pushy) in time, the corresponding laminar profile of the incompressible Euler
flow is not (in some sense) stable provided that the swirling component is not
zero. It is also shown that if the vorticity on the axis is not zero (with some
extra assumptions), then there is no steady flow. We can rephrase these
instabilities as an instantaneous blow-up. In the proof, the Frenet-Serret
formulas and an orthonormal moving frame are essentially used.
| 0 | 0 | 1 | 0 | 0 | 0 |
Resilient Feedback Controller Design For Linear Model of Power Grids | In this paper, a resilient controller is designed for the linear
time-invariant (LTI) systems subject to attacks on the sensors and the
actuators. A novel probabilistic attack model is proposed to capture
vulnerabilities of the communication links from sensors to the controller and
from the controller to actuators. The observer and the controller formulation
under attack are derived. Thereafter, by leveraging Lyapunov functional
methods, it is shown that exponential mean-square stability of the system under
the output feedback controller is guaranteed if a certain LMI is feasible. The
simulation results show the effectiveness and applicability of the proposed
controller design approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
Integrability of dispersionless Hirota type equations in 4D and the symplectic Monge-Ampere property | We prove that integrability of a dispersionless Hirota type equation implies
the symplectic Monge-Ampere property in any dimension $\geq 4$. In 4D this
yields a complete classification of integrable dispersionless PDEs of Hirota
type through a list of heavenly type equations arising in self-dual gravity. As
a by-product of our approach we derive an involutive system of relations
characterising symplectic Monge-Ampere equations in any dimension.
Moreover, we demonstrate that in 4D the requirement of integrability is
equivalent to self-duality of the conformal structure defined by the
characteristic variety of the equation on every solution, which is in turn
equivalent to the existence of a dispersionless Lax pair. We also give a
criterion of linearisability of a Hirota type equation via flatness of the
corresponding conformal structure, and study symmetry properties of integrable
equations.
| 0 | 1 | 1 | 0 | 0 | 0 |
Optimal stopping via reinforced regression | In this note we propose a new approach towards solving numerically optimal
stopping problems via reinforced regression based Monte Carlo algorithms. The
main idea of the method is to reinforce standard linear regression algorithms
in each backward induction step by adding new basis functions based on
previously estimated continuation values. The proposed methodology is
illustrated by a numerical example from mathematical finance.
| 0 | 0 | 0 | 1 | 0 | 0 |
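A minimal sketch of the reinforced-regression idea for a Bermudan put under geometric Brownian motion: at each backward-induction step the standard polynomial basis is augmented with the continuation values estimated at the later date. The payoff, basis, and the way the previous estimate is reused are simplifying assumptions, not the paper's exact construction.

```python
# Regression Monte Carlo with a "reinforced" basis for a Bermudan put (illustrative).
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, dt = 50_000, 50, 1.0 / 50
s0, strike, r, sigma = 100.0, 100.0, 0.05, 0.2

# Simulate GBM paths
z = rng.standard_normal((n_paths, n_steps))
log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
s = np.exp(log_s)
payoff = lambda x: np.maximum(strike - x, 0.0)

disc = np.exp(-r * dt)
cashflow = payoff(s[:, -1])                # value if held to maturity
prev_cont = None                           # continuation estimate from the later step

for t in range(n_steps - 2, -1, -1):
    cashflow *= disc                       # discount back one step
    x = s[:, t]
    basis = [np.ones_like(x), x, x**2]     # standard polynomial regressors
    if prev_cont is not None:
        basis.append(prev_cont)            # reinforcement: reuse the last estimate
    a = np.column_stack(basis)
    coef, *_ = np.linalg.lstsq(a, cashflow, rcond=None)
    cont = a @ coef                        # estimated continuation value at time t
    exercise = payoff(x)
    cashflow = np.where(exercise > cont, exercise, cashflow)
    prev_cont = cont                       # carried to the next (earlier) regression

price = disc * cashflow.mean()
print(f"Bermudan put estimate: {price:.3f}")
```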
Least informative distributions in Maximum q-log-likelihood estimation | We use the Maximum $q$-log-likelihood estimation for Least informative
distributions (LID) in order to estimate the parameters in probability density
functions (PDFs) efficiently and robustly when data include outlier(s). LIDs
are derived by using convex combinations of two PDFs,
$f_\epsilon=(1-\epsilon)f_0+\epsilon f_1$. A convex combination of two PDFs is
considered as a contamination $f_1$ as outlier(s) to underlying $f_0$
distributions and $f_\epsilon$ is a contaminated distribution. The optimal
criterion is obtained by minimizing the change of Maximum q-log-likelihood
function when the data have slightly more contamination. In this paper, we make
a comparison among ordinary Maximum likelihood, Maximum q-likelihood
estimations, LIDs based on $\log_q$ and Huber M-estimation. Akaike and Bayesian
information criteria (AIC and BIC) based on $\log_q$ and LID are proposed to
assess the fitting performance of functions. Real data sets are applied to test
the fitting performance of estimating functions that include shape, scale and
location parameters.
| 0 | 0 | 1 | 1 | 0 | 0 |
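For concreteness, a small sketch of maximum q-log-likelihood fitting of a normal location/scale model on contaminated data, using the Tsallis q-logarithm $\ln_q(x)=(x^{1-q}-1)/(1-q)$; the value $q=0.9$, the contamination level and the optimizer are illustrative choices, not the paper's setup.

```python
# Robust location/scale fit by maximizing the q-log-likelihood of a normal model.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def log_q(x, q):
    """Tsallis q-logarithm; reduces to log(x) as q -> 1."""
    return np.log(x) if np.isclose(q, 1.0) else (x**(1.0 - q) - 1.0) / (1.0 - q)

def fit_normal_qmle(data, q=0.9):
    def neg_q_loglik(theta):
        mu, log_sigma = theta
        dens = norm.pdf(data, loc=mu, scale=np.exp(log_sigma))
        return -np.sum(log_q(dens, q))     # q < 1 downweights low-density outliers
    start = np.array([np.median(data), np.log(data.std())])
    res = minimize(neg_q_loglik, start, method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 95)
outliers = rng.normal(8.0, 1.0, 5)          # 5% contamination
mu_hat, sigma_hat = fit_normal_qmle(np.concatenate([clean, outliers]))
print(mu_hat, sigma_hat)
```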
Visual analytics for loan guarantee network risk management | Groups of enterprises guarantee each other and form complex guarantee
networks when they try to obtain loans from banks. Such secured loan can
enhance the solvency and promote the rapid growth in the economic upturn
period. However, potential systemic risk may happen within the risk binding
community. Especially, during the economic down period, the crisis may spread
in the guarantee network like a domino. Monitoring the financial status,
preventing or reducing systemic risk when a crisis happens is a major concern
of the regulatory commission and banks. We propose a visual analytics approach
for loan guarantee network risk management, and consolidate the five analysis
tasks with financial experts: i) visual analytics for enterprises default risk,
whereby a hybrid representation is devised to predict the default risk and
an interface is developed to visualize key indicators; ii) visual analytics for
high default groups, whereby a community detection based interactive approach
is presented; iii) visual analytics for high defaults pattern, whereby a motif
detection based interactive approach is described, and we adopt a Shneiderman
Mantra strategy to reduce the computation complexity. iv) visual analytics for
evolving guarantee network, whereby animation is used to help understanding the
guarantee dynamic; v) visual analytics approach and interface for default
diffusion path. The temporal diffusion path analysis can be useful for the
government and bank to monitor the default spread status. It also provides
insight for taking precautionary measures to prevent and dissolve systemic
financial risk. We implement the system with case studies on a real-world
guarantee network. Two financial experts were consulted and endorsed the
developed tool. To the best of our knowledge, this is the first visual
analytics tool to explore the guarantee network risks in a systematic manner.
| 1 | 0 | 0 | 0 | 0 | 0 |
Pronunciation recognition of English phonemes /\textipa{@}/, /æ/, /\textipa{A}:/ and /\textipa{2}/ using Formants and Mel Frequency Cepstral Coefficients | The Vocal Joystick Vowel Corpus, by Washington University, was used to study
monophthongs pronounced by native English speakers. The objective of this study
was to quantitatively measure the extent at which speech recognition methods
can distinguish between similar sounding vowels. In particular, the phonemes
/\textipa{@}/, /{\ae}/, /\textipa{A}:/ and /\textipa{2}/ were analysed. 748
sound files from the corpus were used and subjected to Linear Predictive Coding
(LPC) to compute their formants, and to Mel Frequency Cepstral Coefficients
(MFCC) algorithm, to compute the cepstral coefficients. A Decision Tree
Classifier was used to build a predictive model that learnt the patterns of the
two first formants measured in the data set, as well as the patterns of the 13
cepstral coefficients. An accuracy of 70\% was achieved using formants for the
mentioned phonemes. For the MFCC analysis an accuracy of 52\% was achieved and
an accuracy of 71\% when /\textipa{@}/ was ignored. The results obtained show
that the studied algorithms are far from mimicking the ability of
distinguishing subtle differences in sounds like human hearing does.
| 1 | 0 | 0 | 0 | 0 | 0 |
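A rough sketch of the two feature pipelines mentioned above (formants from LPC roots, and 13 MFCCs) feeding a decision tree; the sampling rate, LPC order, whole-file analysis and the lists `files`/`labels` are hypothetical placeholders, not the study's setup.

```python
# Crude formant and MFCC feature extraction for vowel classification (illustrative).
import numpy as np
import librosa
from sklearn.tree import DecisionTreeClassifier

def first_formants(y, sr, order=12, n_formants=2):
    a = librosa.lpc(y, order=order)                     # LPC coefficients
    roots = [r for r in np.roots(a) if np.imag(r) > 0]  # keep upper half-plane roots
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    freqs = [f for f in freqs if f > 90]                # drop near-DC roots
    return freqs[:n_formants]

def features(path, use_mfcc=False):
    y, sr = librosa.load(path, sr=16000)
    if use_mfcc:
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    return np.array(first_formants(y, sr))

# 'files' and 'labels' are hypothetical lists of vowel recordings and phoneme labels:
# x = np.array([features(f) for f in files])
# clf = DecisionTreeClassifier().fit(x, labels)
```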
Vulnerability to pandemics in a rapidly urbanizing society | We examine salient trends of influenza pandemics in Australia, a rapidly
urbanizing nation. To do so, we implement state-of-the-art influenza
transmission and progression models within a large-scale stochastic computer
simulation, generated using comprehensive Australian census datasets from 2006,
2011, and 2016. Our results offer the first simulation-based investigation of a
population's sensitivity to pandemics across multiple historical time points,
and highlight three significant trends in pandemic patterns over the years:
increased peak prevalence, faster spreading rates, and decreasing
spatiotemporal bimodality. We attribute these pandemic trends to increases in
two key quantities indicative of urbanization: population fraction residing in
major cities, and international air traffic. In addition, we identify features
of the pandemic's geographic spread that can only be attributed to changes in
the commuter mobility network. The generic nature of our model and the ubiquity
of urbanization trends around the world make it likely for our results to be
applicable in other rapidly urbanizing nations.
| 0 | 0 | 0 | 0 | 1 | 0 |
Theoretical aspects of microscale acoustofluidics | Henrik Bruus is professor of lab-chip systems and theoretical physics at the
Technical University of Denmark. In this contribution, he summarizes some of
the recent results within theory and simulation of microscale acoustofluidic
systems that he has obtained in collaboration with his students and
international colleagues. The main emphasis is on three dynamical effects
induced by external ultrasound fields acting on aqueous solutions and particle
suspensions: The acoustic radiation force acting on suspended micro- and
nanoparticles, the acoustic streaming appearing in the fluid, and the newly
discovered acoustic body force acting on inhomogeneous solutions.
| 0 | 0 | 0 | 0 | 1 | 0 |
Context Generation from Formal Specifications for C Analysis Tools | Analysis tools like abstract interpreters, symbolic execution tools and
testing tools usually require a proper context to give useful results when
analyzing a particular function. Such a context initializes the function
parameters and global variables to comply with function requirements. However
it may be error-prone to write it by hand: the handwritten context might
contain bugs or not match the intended specification. A more robust approach is
to specify the context in a dedicated specification language and have the
analysis tools support it properly. This may require significant development
effort to enhance the tools, something that is often not feasible, if possible
at all.
This paper presents a way to systematically generate such a context from a
formal specification of a C function. This is applied to a subset of the ACSL
specification language in order to generate suitable contexts for the abstract
interpretation-based value analysis plug-ins of Frama-C, a framework for
analysis of code written in C. The idea presented here has been implemented in
a new Frama-C plug-in which is currently in use in an operational industrial
setting.
| 1 | 0 | 0 | 0 | 0 | 0 |
Gradient Descent with Random Initialization: Fast Global Convergence for Nonconvex Phase Retrieval | This paper considers the problem of solving systems of quadratic equations,
namely, recovering an object of interest
$\mathbf{x}^{\natural}\in\mathbb{R}^{n}$ from $m$ quadratic equations/samples
$y_{i}=(\mathbf{a}_{i}^{\top}\mathbf{x}^{\natural})^{2}$, $1\leq i\leq m$. This
problem, also dubbed as phase retrieval, spans multiple domains including
physical sciences and machine learning.
We investigate the efficiency of gradient descent (or Wirtinger flow)
designed for the nonconvex least squares problem. We prove that under Gaussian
designs, gradient descent --- when randomly initialized --- yields an
$\epsilon$-accurate solution in $O\big(\log n+\log(1/\epsilon)\big)$ iterations
given nearly minimal samples, thus achieving near-optimal computational and
sample complexities at once. This provides the first global convergence
guarantee concerning vanilla gradient descent for phase retrieval, without the
need of (i) carefully-designed initialization, (ii) sample splitting, or (iii)
sophisticated saddle-point escaping schemes. All of these are achieved by
exploiting the statistical models in analyzing optimization algorithms, via a
leave-one-out approach that enables the decoupling of certain statistical
dependency between the gradient descent iterates and the data.
| 0 | 0 | 0 | 1 | 0 | 0 |
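A minimal sketch of randomly initialized gradient descent on the nonconvex least-squares loss for real-valued phase retrieval under a Gaussian design; the dimension, sample size, step size and iteration count are illustrative, not the paper's experimental settings.

```python
# Randomly initialized gradient descent for y_i = (a_i^T x)^2 (real-valued sketch).
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 1000                              # signal dimension and number of samples
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)
a = rng.standard_normal((m, n))               # Gaussian design
y = (a @ x_star) ** 2                         # quadratic samples

x = rng.standard_normal(n) / np.sqrt(n)       # random initialization (no spectral step)
eta = 0.1                                     # constant step size (illustrative)
for _ in range(2000):
    ax = a @ x
    grad = (a.T @ ((ax**2 - y) * ax)) / m     # gradient of (1/4m) sum((a_i^T x)^2 - y_i)^2
    x -= eta * grad

# Distance up to the unavoidable global sign ambiguity
dist = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(f"relative error: {dist / np.linalg.norm(x_star):.2e}")
```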
Unifying and Generalizing Methods for Removing Unwanted Variation Based on Negative Controls | Unwanted variation, including hidden confounding, is a well-known problem in
many fields, particularly large-scale gene expression studies. Recent proposals
to use control genes --- genes assumed to be unassociated with the covariates
of interest --- have led to new methods to deal with this problem. Going by the
moniker Removing Unwanted Variation (RUV), there are many versions --- RUV1,
RUV2, RUV4, RUVinv, RUVrinv, RUVfun. In this paper, we introduce a general
framework, RUV*, that both unites and generalizes these approaches. This
unifying framework helps clarify connections between existing methods. In
particular we provide conditions under which RUV2 and RUV4 are equivalent. The
RUV* framework also preserves an advantage of RUV approaches --- their
modularity --- which facilitates the development of novel methods based on
existing matrix imputation algorithms. We illustrate this by implementing RUVB,
a version of RUV* based on Bayesian factor analysis. In realistic simulations
based on real data we found that RUVB is competitive with existing methods in
terms of both power and calibration, although we also highlight the challenges
of providing consistently reliable calibration among data sets.
| 0 | 0 | 1 | 1 | 0 | 0 |
Quantum-Accurate Molecular Dynamics Potential for Tungsten | The purpose of this short contribution is to report on the development of a
Spectral Neighbor Analysis Potential (SNAP) for tungsten. We have focused on
the characterization of elastic and defect properties of the pure material in
order to support molecular dynamics simulations of plasma-facing materials in
fusion reactors. A parallel genetic algorithm approach was used to efficiently
search for fitting parameters optimized against a large number of objective
functions. In addition, we have shown that this many-body tungsten potential
can be used in conjunction with a simple helium pair potential to produce
accurate defect formation energies for the W-He binary system.
| 0 | 1 | 0 | 0 | 0 | 0 |
An empirical evaluation of alternative methods of estimation for Permutation Entropy in time series with tied values | Bandt and Pompe introduced Permutation Entropy in 2002 for Time Series where
equal values, $x_{t_1} = x_{t_2}$ with $t_1 \neq t_2$, were neglected and only
inequalities between the $x_t$ were considered. Since then, this measure has
been modified and
extended, in particular in cases when the amount of equal values in the series
can not be neglected, (i.e. heart rate variability (HRV) time series). We
review the different existing methodologies that treat this subject by
classifying them according to their different strategies. In addition, a novel
Bayesian Missing Data Imputation is presented that proves to outperform the
existing methodologies that deal with this type of time series. All these facts
are illustrated by simulations and also by distinguishing patients suffering
from Congestive Heart Failure from a (healthy) control group using HRV time
series.
| 0 | 0 | 0 | 1 | 0 | 0 |
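For reference, a minimal sketch of ordinal-pattern (permutation) entropy; ties are broken here by order of appearance via a stable argsort, which is only one of the tie-handling strategies the review classifies, and the Bayesian imputation method itself is not shown.

```python
# Permutation entropy of a 1-D series from ordinal patterns of length `order`.
import numpy as np
from collections import Counter
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    x = np.asarray(x, dtype=float)
    n_patterns = len(x) - (order - 1) * delay
    counts = Counter()
    for i in range(n_patterns):
        window = x[i : i + order * delay : delay]
        counts[tuple(np.argsort(window, kind="stable"))] += 1   # stable sort breaks ties
    probs = np.array(list(counts.values()), dtype=float) / n_patterns
    h = -np.sum(probs * np.log(probs))
    return h / np.log(factorial(order)) if normalize else h

rng = np.random.default_rng(0)
print(permutation_entropy(rng.random(1000)))          # close to 1 for white noise
print(permutation_entropy(np.sin(np.arange(1000))))   # lower for a deterministic signal
```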
MIMIC-CXR: A large publicly available database of labeled chest radiographs | Chest radiography is an extremely powerful imaging modality, allowing for a
detailed inspection of a patient's thorax, but requiring specialized training
for proper interpretation. With the advent of high performance general purpose
computer vision algorithms, the accurate automated analysis of chest
radiographs is becoming increasingly of interest to researchers. However, a key
challenge in the development of these techniques is the lack of sufficient
data. Here we describe MIMIC-CXR, a large dataset of 371,920 chest x-rays
associated with 227,943 imaging studies sourced from the Beth Israel Deaconess
Medical Center between 2011 and 2016. Each imaging study can pertain to one or
more images, but is most often associated with two images: a frontal view and
a lateral view. Images are provided with 14 labels derived from a natural
language processing tool applied to the corresponding free-text radiology
reports. All images have been de-identified to protect patient privacy. The
dataset is made freely available to facilitate and encourage a wide range of
research in medical computer vision.
| 1 | 0 | 0 | 0 | 0 | 0 |
Convolutional Neural Networks for Page Segmentation of Historical Document Images | This paper presents a Convolutional Neural Network (CNN) based page
segmentation method for handwritten historical document images. We consider
page segmentation as a pixel labeling problem, i.e., each pixel is classified
as one of the predefined classes. Traditional methods in this area rely on
carefully hand-crafted features or large amounts of prior knowledge. In
contrast, we propose to learn features from raw image pixels using a CNN. While
many researchers focus on developing deep CNN architectures to solve different
problems, we train a simple CNN with only one convolution layer. We show that
the simple architecture achieves competitive results against other deep
architectures on different public datasets. Experiments also demonstrate the
effectiveness and superiority of the proposed method compared to previous
methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
Localized Quantitative Criteria for Equidistribution | Let $(x_n)_{n=1}^{\infty}$ be a sequence on the torus $\mathbb{T}$
(normalized to length 1). We show that if there exists a sequence of positive
real numbers $(t_n)_{n=1}^{\infty}$ converging to 0 such that $$ \lim_{N
\rightarrow \infty}{\frac{1}{N^2} \sum_{m,n = 1}^{N}{\frac{1}{\sqrt{t_N}}
\exp{\left(- \frac{1}{t_N} (x_m - x_n)^2 \right)}}}
= \sqrt{\pi},$$ then $(x_n)_{n=1}^{\infty}$ is uniformly distributed. This is
especially interesting when $t_N \sim N^{-2}$ since the size of the sum is then
essentially determined exclusively by local gaps at scale $\sim N^{-1}$. This
can be used to show equidistribution of sequences with Poissonian pair
correlation, which recovers a recent result of Aistleitner, Lachmann &
Pausinger and Grepstad & Larcher. The general form of the result is proven on
arbitrary compact manifolds $(M,g)$ where the role of the exponential function
is played by the heat kernel $e^{t\Delta}$: for all $x_1, \dots, x_N \in M$ and
all $t>0$ $$ \frac{1}{N^2}\sum_{m,n=1}^{N}{[e^{t\Delta}\delta_{x_m}](x_n)} \geq
\frac{1}{vol(M)}$$ and equality is attained as $N \rightarrow \infty$ if and
only if $(x_n)_{n=1}^{\infty}$ equidistributes.
| 0 | 0 | 1 | 0 | 0 | 0 |
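A small numerical sketch of the pair-sum statistic in the abstract, evaluated for the Kronecker sequence $n\alpha \bmod 1$ with the slowly decaying choice $t_N = 1/N$; the regime $t_N \sim N^{-2}$ discussed above is governed by local gaps and is not what this toy check probes.

```python
# Gaussian pair-sum statistic on the torus; it approaches sqrt(pi) for an
# equidistributed sequence when t_N decays slowly (here t_N = 1/N, illustrative).
import numpy as np

def gaussian_pair_statistic(x, t):
    d = x[:, None] - x[None, :]
    d -= np.round(d)                       # wrap differences to the torus of length 1
    n = len(x)
    return np.sum(np.exp(-d**2 / t)) / (n**2 * np.sqrt(t))

alpha = np.sqrt(2)
for n in (200, 1000, 3000):
    x = (alpha * np.arange(1, n + 1)) % 1.0
    print(n, gaussian_pair_statistic(x, t=1.0 / n), np.sqrt(np.pi))
```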
Disturbance propagation, inertia location and slow modes in large-scale high voltage power grids | Conventional generators in power grids are steadily substituted with new
renewable sources of electric power. The latter are connected to the grid via
inverters and as such have little, if any rotational inertia. The resulting
reduction of total inertia raises important issues of power grid stability,
especially over short-time scales. We have constructed a model of the
synchronous grid of continental Europe with which we numerically investigate
frequency deviations as well as rates of change of frequency (RoCoF) following
abrupt power losses. The magnitudes of RoCoFs and frequency deviations strongly
depend on the fault location, and we find the largest effects for faults
located on the slowest mode - the Fiedler mode - of the network Laplacian
matrix. This mode essentially vanishes over Belgium, Eastern France, Western
Germany, northern Italy and Switzerland. Buses inside these regions are only
weakly affected by faults occurring outside. Conversely, faults inside these
regions have only a local effect and disturb only weakly outside buses.
Following this observation, we reduce rotational inertia through three
different procedures by either (i) reducing inertia on the Fiedler mode, (ii)
reducing inertia homogeneously and (iii) reducing inertia outside the Fiedler
mode. We find that procedure (iii) has little effect on disturbance
propagation, while procedure (i) leads to the strongest increase of RoCoF and
frequency deviations. These results for our model of the European transmission
grid are corroborated by numerical investigations on the ERCOT transmission
grid.
| 1 | 0 | 0 | 0 | 0 | 0 |
Investigating the Characteristics of One-Sided Matching Mechanisms Under Various Preferences and Risk Attitudes | One-sided matching mechanisms are fundamental for assigning a set of
indivisible objects to a set of self-interested agents when monetary transfers
are not allowed. Two widely-studied randomized mechanisms in multiagent
settings are the Random Serial Dictatorship (RSD) and the Probabilistic Serial
Rule (PS). Both mechanisms require only that agents specify ordinal preferences
and have a number of desirable economic and computational properties. However,
the induced outcomes of the mechanisms are often incomparable and thus there
are challenges when it comes to deciding which mechanism to adopt in practice.
In this paper, we first consider the space of general ordinal preferences and
provide empirical results on the (in)comparability of RSD and PS. We analyze
their respective economic properties under general and lexicographic
preferences. We then instantiate utility functions with the goal of gaining
insights on the manipulability, efficiency, and envyfreeness of the mechanisms
under different risk-attitude models. Our results hold under various preference
distribution models, which further confirm the broad use of RSD in most
practical applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
MIDI-VAE: Modeling Dynamics and Instrumentation of Music with Applications to Style Transfer | We introduce MIDI-VAE, a neural network model based on Variational
Autoencoders that is capable of handling polyphonic music with multiple
instrument tracks, as well as modeling the dynamics of music by incorporating
note durations and velocities. We show that MIDI-VAE can perform style transfer
on symbolic music by automatically changing pitches, dynamics and instruments
of a music piece from, e.g., a Classical to a Jazz style. We evaluate the
efficacy of the style transfer by training separate style validation
classifiers. Our model can also interpolate between short pieces of music,
produce medleys and create mixtures of entire songs. The interpolations
smoothly change pitches, dynamics and instrumentation to create a harmonic
bridge between two music pieces. To the best of our knowledge, this work
represents the first successful attempt at applying neural style transfer to
complete musical compositions.
| 1 | 0 | 0 | 0 | 0 | 0 |
A General Deep Learning Framework for Structure and Dynamics Reconstruction from Time Series Data | In this work, we present Gumbel Graph Network, a model-free deep learning
framework for dynamics learning and network reconstruction from the observed
time series data. Our method requires no prior knowledge about underlying
dynamics and has shown the state-of-the-art performance in three typical
dynamical systems on complex networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Disorder robustness and protection of Majorana bound states in ferromagnetic chains on conventional superconductors | Majorana bound states (MBS) are well-established in the clean limit in chains
of ferromagnetically aligned impurities deposited on conventional
superconductors with finite spin-orbit coupling. Here we show that these MBS
are very robust against disorder. By performing self-consistent calculations we
find that the MBS are protected as long as the surrounding superconductor shows
no large signs of inhomogeneity. We find that longer chains offer more
stability against disorder for the MBS, albeit the minigap decreases, as do
increasing strengths of spin-orbit coupling and superconductivity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Characterisation of novel prototypes of monolithic HV-CMOS pixel detectors for high energy physics experiments | An upgrade of the ATLAS experiment for the High Luminosity phase of LHC is
planned for 2024 and foresees the replacement of the present Inner Detector
(ID) with a new Inner Tracker (ITk) completely made of silicon devices.
Depleted active pixel sensors built with the High Voltage CMOS (HV-CMOS)
technology are investigated as an option to cover large areas in the outermost
layers of the pixel detector and are especially interesting for the development
of monolithic devices which will reduce the production costs and the material
budget with respect to the present hybrid assemblies. For this purpose the
H35DEMO, a large area HV-CMOS demonstrator chip, was designed by KIT, IFAE and
University of Liverpool, and produced in AMS 350 nm CMOS technology. It
consists of four pixel matrices and additional test structures. Two of the
matrices include amplifiers and discriminator stages and are thus designed to
be operated as monolithic detectors. In these devices the signal is mainly
produced by charge drift in a small depleted volume obtained by applying a bias
voltage of the order of 100 V. Moreover, to enhance the radiation hardness of
the chip, this technology allows the electronics to be enclosed in the same deep
N-WELLs which are also used as collecting electrodes. In this contribution the
characterisation of H35DEMO chips and results of the very first beam test
measurements of the monolithic CMOS matrices with high-energy pions at the CERN
SPS will be presented.
| 0 | 1 | 0 | 0 | 0 | 0 |
End-to-end DNN Based Speaker Recognition Inspired by i-vector and PLDA | Recently several end-to-end speaker verification systems based on deep neural
networks (DNNs) have been proposed. These systems have been proven to be
competitive for text-dependent tasks as well as for text-independent tasks with
short utterances. However, for text-independent tasks with longer utterances,
end-to-end systems are still outperformed by standard i-vector + PLDA systems.
In this work, we develop an end-to-end speaker verification system that is
initialized to mimic an i-vector + PLDA baseline. The system is then further
trained in an end-to-end manner but regularized so that it does not deviate too
far from the initial system. In this way we mitigate overfitting which normally
limits the performance of end-to-end systems. The proposed system outperforms
the i-vector + PLDA baseline on both long and short duration utterances.
| 1 | 0 | 0 | 0 | 0 | 0 |
Motivic infinite loop spaces | We prove a recognition principle for motivic infinite P1-loop spaces over a
perfect field. This is achieved by developing a theory of framed motivic
spaces, which is a motivic analogue of the theory of E-infinity-spaces. A
framed motivic space is a motivic space equipped with transfers along finite
syntomic morphisms with trivialized cotangent complex in K-theory. Our main
result is that grouplike framed motivic spaces are equivalent to the full
subcategory of motivic spectra generated under colimits by suspension spectra.
As a consequence, we deduce some representability results for suspension
spectra of smooth varieties, and in particular for the motivic sphere spectrum,
in terms of Hilbert schemes of points in affine spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dimer correlation amplitudes and dimer excitation gap in spin-1/2 XXZ and Heisenberg chains | Correlation functions of dimer operators, the product operators of spins on
two adjacent sites, are studied in the spin-$\frac{1}{2}$ XXZ chain in the
critical regime. The amplitudes of the leading oscillating terms in the dimer
correlation functions are determined with high accuracy as functions of the
exchange anisotropy parameter and the external magnetic field, through the
combined use of bosonization and density-matrix renormalization group methods.
In particular, for the antiferromagnetic Heisenberg model with SU(2) symmetry,
logarithmic corrections to the dimer correlations due to the
marginally-irrelevant operator are studied, and the asymptotic form of the
dimer correlation function is obtained. The asymptotic form of the spin-Peierls
excitation gap including logarithmic corrections is also derived.
| 0 | 1 | 0 | 0 | 0 | 0 |
Spectrum Access In Cognitive Radio Using A Two Stage Reinforcement Learning Approach | With the advent of the 5th generation of wireless standards and an increasing
demand for higher throughput, methods to improve the spectral efficiency of
wireless systems have become very important. In the context of cognitive radio,
a substantial increase in throughput is possible if the secondary user can make
smart decisions regarding which channel to sense and when or how often to
sense. Here, we propose an algorithm to not only select a channel for data
transmission but also to predict how long the channel will remain unoccupied so
that the time spent on channel sensing can be minimized. Our algorithm learns
in two stages - a reinforcement learning approach for channel selection and a
Bayesian approach to determine the optimal duration for which sensing can be
skipped. Comparisons with other learning methods are provided through extensive
simulations. We show that the number of sensing operations is minimized with a
negligible increase in primary interference; this implies that less energy is
spent by the secondary user on sensing and that higher throughput is achieved
by saving on sensing.
| 1 | 0 | 0 | 0 | 0 | 0 |
Charge transport through a single molecule of trans-1-bis-diazofluorene [60]fullerene | Fullerenes have attracted interest for their possible applications in various
electronic, biological, and optoelectronic devices. However, for efficient use
in such devices, a suitable anchoring group has to be employed that forms
well-defined and stable contacts with the electrodes. In this work, we propose
a novel fullerene tetramalonate derivative functionalized with trans-1
4,5-diazafluorene anchoring groups. The conductance of single-molecule
junctions, investigated in two different setups with the mechanically
controlled break junction technique, reveals the formation of molecular
junctions at three conductance levels. We attribute the conductance peaks to
three binding modes of the anchoring groups to the gold electrodes. Density
functional theory calculations confirm the existence of multiple binding
configurations and calculated transmission functions are consistent with
experimentally determined conductance values.
| 0 | 1 | 0 | 0 | 0 | 0 |
Blind Spots for Direct Detection with Simplified DM Models and the LHC | Using the existing simplified model framework, we build several dark matter
models which have suppressed spin-independent scattering cross section. We show
that the scattering cross section can vanish due to interference effects with
models obtained by simple combinations of simplified models. For weakly
interacting massive particle (WIMP) masses $\gtrsim$10 GeV, collider limits are
usually much weaker than the direct detection limits coming from LUX or
XENON100. However, for our model combinations, LHC analyses are more
competitive for some parts of the parameter space. The regions with direct
detection blind spots can be strongly constrained from the complementary use of
several Large Hadron Collider (LHC) searches like mono-jet, jets + missing
transverse energy, heavy vector resonance searches, etc. We evaluate the
strongest limits for combinations of scalar + vector, "squark" + vector, and
scalar + "squark" mediator, and present the LHC 14 TeV projections.
| 0 | 1 | 0 | 0 | 0 | 0 |
Experiments of posture estimation on vehicles using wearable acceleration sensors | In this paper, we study methods to estimate drivers' posture in vehicles
using acceleration data from a wearable sensor and conduct a field test.
Recently, sensor technologies have progressed, and safety management solutions
that analyze vital data acquired from wearable sensors to judge work status
have been proposed. To prevent serious accidents, the demand for safety
management of buses and taxis is high. However, the acceleration of the vehicle
is superimposed on the wearable sensor readings, so there is no guarantee that
drivers' posture can be estimated accurately. Therefore, in this paper, we
study methods to estimate driving posture using acceleration data acquired from
the T-shirt type wearable sensor hitoe, conduct field tests, and implement a
sample application.
| 1 | 0 | 0 | 0 | 0 | 0 |
Matrix Completion Based Localization in the Internet of Things Network | In order to react properly to the information collected from internet
of things (IoT) devices, the location information of the things should be
available at the data center. One challenge for massive IoT networks is to
identify the location map of all sensor nodes from partially observed distance
information. In this paper, we propose a matrix completion based localization
algorithm to reconstruct the location map of sensors using partially observed
distance information. From the numerical experiments, we show that the proposed
method based on the modified conjugate gradient is effective in recovering the
Euclidean distance matrix.
| 1 | 0 | 0 | 0 | 0 | 0 |
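A minimal sketch of the overall pipeline described above, with two substitutions: a simple impute-and-project low-rank completion in place of the paper's modified conjugate gradient, and classical multidimensional scaling to turn the completed squared-distance matrix into a location map. Sizes and the observation fraction are illustrative.

```python
# Complete a partially observed squared-distance matrix, then recover coordinates by MDS.
import numpy as np

rng = np.random.default_rng(0)
n, dim, obs_frac = 60, 2, 0.4
pts = rng.random((n, dim))
d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)   # true squared distances

mask = rng.random((n, n)) < obs_frac
mask = mask | mask.T                       # distances are symmetric
np.fill_diagonal(mask, True)

x = np.where(mask, d2, 0.0)
for _ in range(300):                       # impute-and-project iterations
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    s[dim + 2:] = 0.0                      # an EDM of points in R^dim has rank <= dim+2
    x_low = (u * s) @ vt
    x = np.where(mask, d2, x_low)          # keep observed entries, impute the rest

# Classical MDS on the completed matrix
j = np.eye(n) - np.ones((n, n)) / n
g = -0.5 * j @ x @ j                       # double-centred Gram matrix
w, v = np.linalg.eigh(g)
coords = v[:, -dim:] * np.sqrt(np.maximum(w[-dim:], 0.0))

# Recovered map matches the truth up to rotation/reflection/translation
d2_hat = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
print("correlation with true distances:", np.corrcoef(d2.ravel(), d2_hat.ravel())[0, 1])
```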
Hypersurfaces with nonnegative Ricci curvature in hyperbolic space | Based on properties of n-subharmonic functions we show that a complete,
noncompact, properly embedded hypersurface with nonnegative Ricci curvature in
hyperbolic space has an asymptotic boundary at infinity of at most two points.
Moreover, the presence of two points in the asymptotic boundary is a rigidity
condition that forces the hypersurface to be an equidistant hypersurface about
a geodesic line in hyperbolic space. This gives an affirmative answer to the
question raised by Alexander and Currier in 1990.
| 0 | 0 | 1 | 0 | 0 | 0 |
Achieving the time of $1$-NN, but the accuracy of $k$-NN | We propose a simple approach which, given distributed computing resources,
can nearly achieve the accuracy of $k$-NN prediction, while matching (or
improving) the faster prediction time of $1$-NN. The approach consists of
aggregating denoised $1$-NN predictors over a small number of distributed
subsamples. We show, both theoretically and experimentally, that small
subsample sizes suffice to attain performance similar to that of $k$-NN, without
sacrificing the computational efficiency of $1$-NN.
| 0 | 0 | 1 | 1 | 0 | 0 |
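A minimal sketch of one reading of the approach (my interpretation, not the authors' code): each small subsample is relabeled by a k-NN "denoising" pass over the full training set, and fast 1-NN predictions from the subsamples are aggregated by majority vote. Dataset, subsample sizes and k are illustrative.

```python
# Aggregated denoised 1-NN versus a plain k-NN baseline on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
x, y = make_classification(n_samples=6000, n_features=10, n_informative=6, random_state=0)
x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.25, random_state=0)

k, n_subsamples, subsample_size = 15, 8, 600
denoiser = KNeighborsClassifier(n_neighbors=k).fit(x_tr, y_tr)

votes = np.zeros((len(x_te), 2))
for _ in range(n_subsamples):
    idx = rng.choice(len(x_tr), size=subsample_size, replace=False)
    denoised_labels = denoiser.predict(x_tr[idx])            # k-NN labels on subsample points
    one_nn = KNeighborsClassifier(n_neighbors=1).fit(x_tr[idx], denoised_labels)
    pred = one_nn.predict(x_te)
    votes[np.arange(len(x_te)), pred] += 1                   # majority vote over subsamples
agg_pred = votes.argmax(axis=1)

baseline = KNeighborsClassifier(n_neighbors=k).fit(x_tr, y_tr).score(x_te, y_te)
print("aggregated denoised 1-NN:", (agg_pred == y_te).mean(), " k-NN baseline:", baseline)
```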
A note on unitizations of generalized effect algebras | There is a forgetful functor from the category of generalized effect algebras
to the category of effect algebras. We prove that this functor is a right
adjoint and that the corresponding left adjoint is the well-known unitization
construction by Hedlíková and Pulmannová. Moreover, this adjunction is
monadic.
| 0 | 0 | 1 | 0 | 0 | 0 |
Real and Complex Integrals on Spheres and Balls | We evaluate integrals of certain polynomials over spheres and balls in real
or complex spaces. We also promote the use of the Pochhammer symbol which gives
the values of our integrals in compact forms.
| 0 | 0 | 1 | 0 | 0 | 0 |
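The abstract does not reproduce its formulas; as a standard example of the type of evaluation it refers to, for the real unit sphere $S^{n-1}\subset\mathbb{R}^{n}$ with surface measure $d\sigma$ and exponents $a_j \geq 0$ one has
$$ \int_{S^{n-1}} |x_1|^{a_1}\cdots|x_n|^{a_n}\, d\sigma \;=\; \frac{2\,\Gamma(b_1)\cdots\Gamma(b_n)}{\Gamma(b_1+\cdots+b_n)}, \qquad b_j=\frac{a_j+1}{2}, $$
and ratios of Gamma functions such as $\Gamma(b+k)/\Gamma(b)=(b)_k$ are exactly what the Pochhammer symbol packages in compact form. How the paper itself organizes these evaluations is not shown here.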