title
stringlengths 7
239
| abstract
stringlengths 7
2.76k
| cs
int64 0
1
| phy
int64 0
1
| math
int64 0
1
| stat
int64 0
1
| quantitative biology
int64 0
1
| quantitative finance
int64 0
1
|
---|---|---|---|---|---|---|---|
Propagation of self-localised Q-ball solitons in the $^3$He universe | In relativistic quantum field theories, compact objects of interacting bosons
can become stable owing to conservation of an additive quantum number $Q$.
Discovering such $Q$-balls propagating in the Universe would confirm
supersymmetric extensions of the standard model and may shed light on the
mysteries of dark matter, but no unambiguous experimental evidence exists. We
report observation of a propagating long-lived $Q$-ball in superfluid $^3$He,
where the role of $Q$-ball is played by a Bose-Einstein condensate of magnon
quasiparticles. We achieve accurate representation of the $Q$-ball Hamiltonian
using the influence of the number of magnons, corresponding to the charge $Q$,
on the orbital structure of the superfluid $^3$He order parameter. This
realisation supports multiple coexisting $Q$-balls which in future allows
studies of $Q$-ball dynamics, interactions, and collisions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Improving the staggered grid Lagrangian hydrodynamics for modeling multi-material flows | In this work, we make two improvements on the staggered grid hydrodynamics
(SGH) Lagrangian scheme for modeling 2-dimensional compressible multi-material
flows on triangular mesh. The first improvement is the construction of a
dynamic local remeshing scheme for preventing mesh distortion. The remeshing
scheme is similar to many published algorithms except that it introduces some
special operations for treating grids around multi-material interfaces. This
makes the simulation of extremely deforming and topology-variable
multi-material processes possible, such as the complete process of a heavy
fluid dipping into a light fluid. The second improvement is the construction of
an Euler-like flow on each edge of the mesh to count for the "edge-bending"
effect, so as to mitigate the "checkerboard" oscillation that commonly exists
in Lagrangian simulations, especially the triangular mesh based simulations.
Several typical hydrodynamic problems are simulated by the improved staggered
grid Lagrangian hydrodynamic method to test its performance.
| 0 | 1 | 0 | 0 | 0 | 0 |
Reinforcement Learning-based Thermal Comfort Control for Vehicle Cabins | Vehicle climate control systems aim to keep passengers thermally comfortable.
However, current systems control temperature rather than thermal comfort and
tend to be energy hungry, which is of particular concern when considering
electric vehicles. This paper poses energy-efficient vehicle comfort control as
a Markov Decision Process, which is then solved numerically using
Sarsa({\lambda}) and an empirically validated, single-zone, 1D thermal model of
the cabin. The resulting controller was tested in simulation using 200 randomly
selected scenarios and found to exceed the performance of bang-bang,
proportional, simple fuzzy logic, and commercial controllers with 23%, 43%,
40%, 56% increase, respectively. Compared to the next best performing
controller, energy consumption is reduced by 13% while the proportion of time
spent thermally comfortable is increased by 23%. These results indicate that
this is a viable approach that promises to translate into substantial comfort
and energy improvements in the car.
| 1 | 0 | 0 | 0 | 0 | 0 |
A functional model for the Fourier--Plancherel operator truncated on the positive half-axis | The truncated Fourier operator $\mathscr{F}_{\mathbb{R^{+}}}$, $$
(\mathscr{F}_{\mathbb{R^{+}}}x)(t)=\frac{1}{\sqrt{2\pi}}
\int\limits_{\mathbb{R^{+}}}x(\xi)e^{it\xi}\,d\xi\,,\ \ \
t\in{}{\mathbb{R^{+}}}, $$ is studied. The operator
$\mathscr{F}_{\mathbb{R^{+}}}$ is considered as an operator acting in the space
$L^2(\mathbb{R^{+}})$. The functional model for the operator
$\mathscr{F}_{\mathbb{R^{+}}}$ is constructed. This functional model is the
multiplication operator on the appropriate $2\times2$ matrix function acting in
the space $L^2(\mathbb{R^{+}})\oplus{}L^2(\mathbb{R^{+}})$. Using this
functional model, the spectrum of the operator $\mathscr{F}_{\mathbb{R^{+}}}$
is found. The resolvent of the operator $\mathscr{F}_{\mathbb{R^{+}}}$ is
estimated near its spectrum.
| 0 | 0 | 1 | 0 | 0 | 0 |
Mapping $n$ grid points onto a square forces an arbitrarily large Lipschitz constant | We prove that the regular $n\times n$ square grid of points in the integer
lattice $\mathbb{Z}^{2}$ cannot be recovered from an arbitrary $n^{2}$-element
subset of $\mathbb{Z}^{2}$ via a mapping with prescribed Lipschitz constant
(independent of $n$). This answers negatively a question of Feige from 2002.
Our resolution of Feige's question takes place largely in a continuous setting
and is based on some new results for Lipschitz mappings falling into two broad
areas of interest, which we study independently. Firstly the present work
contains a detailed investigation of Lipschitz regular mappings on Euclidean
spaces, with emphasis on their bilipschitz decomposability in a sense
comparable to that of the well known result of Jones. Secondly, we build on
work of Burago and Kleiner and McMullen on non-realisable densities. We verify
the existence, and further prevalence, of strongly non-realisable densities
inside spaces of continuous functions.
| 1 | 0 | 1 | 0 | 0 | 0 |
An FPT algorithm for planar multicuts with sources and sinks on the outer face | Given a list of k source-sink pairs in an edge-weighted graph G, the minimum
multicut problem consists in selecting a set of edges of minimum total weight
in G, such that removing these edges leaves no path from each source to its
corresponding sink. To the best of our knowledge, no non-trivial FPT result for
special cases of this problem, which is APX-hard in general graphs for any
fixed k>2, is known with respect to k only. When the graph G is planar, this
problem is known to be polynomial-time solvable if k=O(1), but cannot be FPT
with respect to k under the Exponential Time Hypothesis.
In this paper, we show that, if G is planar and in addition all sources and
sinks lie on the outer face, then this problem does admit an FPT algorithm when
parameterized by k (although it remains APX-hard when k is part of the input,
even in stars). To do this, we provide a new characterization of optimal
solutions in this case, and then use it to design a "divide-and-conquer"
approach: namely, some edges that are part of any such solution actually define
an optimal solution for a polynomial-time solvable multiterminal variant of the
problem on some of the sources and sinks (which can be identified thanks to a
reduced enumeration phase). Removing these edges from the graph cuts it into
several smaller instances, which can then be solved recursively.
| 1 | 0 | 0 | 0 | 0 | 0 |
Calibration with Bias-Corrected Temperature Scaling Improves Domain Adaptation Under Label Shift in Modern Neural Networks | Label shift refers to the phenomenon where the marginal probability p(y) of
observing a particular class changes between the training and test
distributions while the conditional probability p(x|y) stays fixed. This is
relevant in settings such as medical diagnosis, where a classifier trained to
predict disease based on observed symptoms may need to be adapted to a
different distribution where the baseline frequency of the disease is higher.
Given calibrated estimates of p(y|x), one can apply an EM algorithm to correct
for the shift in class imbalance between the training and test distributions
without ever needing to calculate p(x|y). Unfortunately, modern neural networks
typically fail to produce well-calibrated probabilities, compromising the
effectiveness of this approach. Although Temperature Scaling can greatly reduce
miscalibration in these networks, it can leave behind a systematic bias in the
probabilities that still poses a problem. To address this, we extend
Temperature Scaling with class-specific bias parameters, which largely
eliminates systematic bias in the calibrated probabilities and allows for
effective domain adaptation under label shift. We term our calibration approach
"Bias-Corrected Temperature Scaling". On experiments with CIFAR10, we find that
EM with Bias-Corrected Temperature Scaling significantly outperforms both EM
with Temperature Scaling and the recently-proposed Black-Box Shift Estimation.
| 1 | 0 | 0 | 1 | 0 | 0 |
Propagation Networks for Model-Based Control Under Partial Observation | There has been an increasing interest in learning dynamics simulators for
model-based control. Compared with off-the-shelf physics engines, a learnable
simulator can quickly adapt to unseen objects, scenes, and tasks. However,
existing models like interaction networks only work for fully observable
systems; they also only consider pairwise interactions within a single time
step, both restricting their use in practical systems. We introduce Propagation
Networks (PropNet), a differentiable, learnable dynamics model that handles
partially observable scenarios and enables instantaneous propagation of signals
beyond pairwise interactions. With these innovations, our propagation networks
not only outperform current learnable physics engines in forward simulation,
but also achieves superior performance on various control tasks. Compared with
existing deep reinforcement learning algorithms, model-based control with
propagation networks is more accurate, efficient, and generalizable to novel,
partially observable scenes and tasks.
| 1 | 0 | 0 | 0 | 0 | 0 |
High-redshift galaxies and black holes in the eyes of JWST: a population synthesis model from infrared to X-rays | The first billion years of the Universe is a pivotal time: stars, black holes
(BHs) and galaxies form and assemble, sowing the seeds of galaxies as we know
them today. Detecting, identifying and understand- ing the first galaxies and
BHs is one of the current observational and theoretical challenges in galaxy
formation. In this paper we present a population synthesis model aimed at
galaxies, BHs and Active Galactic Nuclei (AGNs) at high redshift. The model
builds a population based on empirical relations. Galaxies are characterized by
a spectral energy distribution determined by age and metallicity, and AGNs by a
spectral energy distribution determined by BH mass and accretion rate. We
validate the model against observational constraints, and then predict
properties of galaxies and AGN in other wavelength and/or luminosity ranges,
estimating the contamination of stellar populations (normal stars and high-mass
X-ray binaries) for AGN searches from the infrared to X-rays, and vice-versa
for galaxy searches. For high-redshift galaxies, with stellar ages < 1 Gyr, we
find that disentangling stellar and AGN emission is challenging at restframe
UV/optical wavelengths, while high-mass X-ray binaries become more important
sources of confusion in X-rays. We propose a color-color selection in JWST
bands to separate AGN vs star-dominated galaxies in photometric observations.
We also esti- mate the AGN contribution, with respect to massive, hot,
metal-poor stars, at driving high ionization lines, such as C IV and He II.
Finally, we test the influence of the minimum BH mass and occupa- tion fraction
of BHs in low mass galaxies on the restframe UV/near-IR and X-ray AGN
luminosity function.
| 0 | 1 | 0 | 0 | 0 | 0 |
Soft Label Memorization-Generalization for Natural Language Inference | Often when multiple labels are obtained for a training example it is assumed
that there is an element of noise that must be accounted for. It has been shown
that this disagreement can be considered signal instead of noise. In this work
we investigate using soft labels for training data to improve generalization in
machine learning models. However, using soft labels for training Deep Neural
Networks (DNNs) is not practical due to the costs involved in obtaining
multiple labels for large data sets. We propose soft label
memorization-generalization (SLMG), a fine-tuning approach to using soft labels
for training DNNs. We assume that differences in labels provided by human
annotators represent ambiguity about the true label instead of noise.
Experiments with SLMG demonstrate improved generalization performance on the
Natural Language Inference (NLI) task. Our experiments show that by injecting a
small percentage of soft label training data (0.03% of training set size) we
can improve generalization performance over several baselines.
| 1 | 0 | 0 | 0 | 0 | 0 |
Floquet Topological Magnons | We introduce the concept of Floquet topological magnons --- a mechanism by
which a synthetic tunable Dzyaloshinskii-Moriya interaction (DMI) can be
generated in quantum magnets using circularly polarized electric (laser) field.
The resulting effect is that Dirac magnons and nodal magnons in two-dimensional
(2D) and three-dimensional (3D) quantum magnets can be tuned to magnon Chern
insulators and Weyl magnons respectively under circularly polarized laser
field. The Floquet formalism also yields a tunable intrinsic DMI in insulating
quantum magnets without an inversion center. We demonstrate that the Floquet
topological magnons possess a finite thermal Hall conductivity tunable by the
laser field.
| 0 | 1 | 0 | 0 | 0 | 0 |
Wasserstein Soft Label Propagation on Hypergraphs: Algorithm and Generalization Error Bounds | Inspired by recent interests of developing machine learning and data mining
algorithms on hypergraphs, we investigate in this paper the semi-supervised
learning algorithm of propagating "soft labels" (e.g. probability
distributions, class membership scores) over hypergraphs, by means of optimal
transportation. Borrowing insights from Wasserstein propagation on graphs
[Solomon et al. 2014], we re-formulate the label propagation procedure as a
message-passing algorithm, which renders itself naturally to a generalization
applicable to hypergraphs through Wasserstein barycenters. Furthermore, in a
PAC learning framework, we provide generalization error bounds for propagating
one-dimensional distributions on graphs and hypergraphs using 2-Wasserstein
distance, by establishing the \textit{algorithmic stability} of the proposed
semi-supervised learning algorithm. These theoretical results also shed new
lights upon deeper understandings of the Wasserstein propagation on graphs.
| 0 | 0 | 0 | 1 | 0 | 0 |
Psychological and Personality Profiles of Political Extremists | Global recruitment into radical Islamic movements has spurred renewed
interest in the appeal of political extremism. Is the appeal a rational
response to material conditions or is it the expression of psychological and
personality disorders associated with aggressive behavior, intolerance,
conspiratorial imagination, and paranoia? Empirical answers using surveys have
been limited by lack of access to extremist groups, while field studies have
lacked psychological measures and failed to compare extremists with contrast
groups. We revisit the debate over the appeal of extremism in the U.S. context
by comparing publicly available Twitter messages written by over 355,000
political extremist followers with messages written by non-extremist U.S.
users. Analysis of text-based psychological indicators supports the moral
foundation theory which identifies emotion as a critical factor in determining
political orientation of individuals. Extremist followers also differ from
others in four of the Big Five personality traits.
| 1 | 1 | 0 | 0 | 0 | 0 |
Techniques for Interpretable Machine Learning | Interpretable machine learning tackles the important problem that humans
cannot understand the behaviors of complex machine learning models and how
these models arrive at a particular decision. Although many approaches have
been proposed, a comprehensive understanding of the achievements and challenges
is still lacking. We provide a survey covering existing techniques to increase
the interpretability of machine learning models. We also discuss crucial issues
that the community should consider in future work such as designing
user-friendly explanations and developing comprehensive evaluation metrics to
further push forward the area of interpretable machine learning.
| 0 | 0 | 0 | 1 | 0 | 0 |
New Models and Methods for Formation and Analysis of Social Networks | This doctoral work focuses on three main problems related to social networks:
(1) Orchestrating Network Formation: We consider the problem of orchestrating
formation of a social network having a certain given topology that may be
desirable for the intended usecases. Assuming the social network nodes to be
strategic in forming relationships, we derive conditions under which a given
topology can be uniquely obtained. We also study the efficiency and robustness
of the derived conditions. (2) Multi-phase Influence Maximization: We propose
that information diffusion be carried out in multiple phases rather than in a
single instalment. With the objective of achieving better diffusion, we
discover optimal ways of splitting the available budget among the phases,
determining the time delay between consecutive phases, and also finding the
individuals to be targeted for initiating the diffusion process. (3) Scalable
Preference Aggregation: It is extremely useful to determine a small number of
representatives of a social network such that the individual preferences of
these nodes, when aggregated, reflect the aggregate preference of the entire
network. Using real-world data collected from Facebook with human subjects, we
discover a model that faithfully captures the spread of preferences in a social
network. We hence propose fast and reliable ways of computing a truly
representative aggregate preference of the entire network. In particular, we
develop models and methods for solving the above problems, which primarily deal
with formation and analysis of social networks.
| 1 | 1 | 0 | 0 | 0 | 0 |
Identifying networks with common organizational principles | Many complex systems can be represented as networks, and the problem of
network comparison is becoming increasingly relevant. There are many techniques
for network comparison, from simply comparing network summary statistics to
sophisticated but computationally costly alignment-based approaches. Yet it
remains challenging to accurately cluster networks that are of a different size
and density, but hypothesized to be structurally similar. In this paper, we
address this problem by introducing a new network comparison methodology that
is aimed at identifying common organizational principles in networks. The
methodology is simple, intuitive and applicable in a wide variety of settings
ranging from the functional classification of proteins to tracking the
evolution of a world trade network.
| 1 | 1 | 0 | 1 | 0 | 0 |
Image-based Proof of Work Algorithm for the Incentivization of Blockchain Archival of Interesting Images | A new variation of blockchain proof of work algorithm is proposed to
incentivize the timely execution of image processing algorithms. A sample image
processing algorithm is proposed to determine interesting images using analysis
of the entropy of pixel subsets within images. The efficacy of the image
processing algorithm is examined using two small sets of training and test
data. The interesting image algorithm is then integrated into a simplified
blockchain mining proof of work algorithm based on Bitcoin. The incentive of
cryptocurrency mining is theorized to incentivize the execution of the
algorithm and thus the retrieval of images that satisfy a minimum requirement
set forth by the interesting image algorithm. The digital storage implications
of running an image- based blockchain are then examined mathematically.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multi-Lane Perception Using Feature Fusion Based on GraphSLAM | An extensive, precise and robust recognition and modeling of the environment
is a key factor for next generations of Advanced Driver Assistance Systems and
development of autonomous vehicles. In this paper, a real-time approach for the
perception of multiple lanes on highways is proposed. Lane markings detected by
camera systems and observations of other traffic participants provide the input
data for the algorithm. The information is accumulated and fused using
GraphSLAM and the result constitutes the basis for a multilane clothoid model.
To allow incorporation of additional information sources, input data is
processed in a generic format. Evaluation of the method is performed by
comparing real data, collected with an experimental vehicle on highways, to a
ground truth map. The results show that ego and adjacent lanes are robustly
detected with high quality up to a distance of 120 m. In comparison to serial
lane detection, an increase in the detection range of the ego lane and a
continuous perception of neighboring lanes is achieved. The method can
potentially be utilized for the longitudinal and lateral control of
self-driving vehicles.
| 1 | 0 | 0 | 0 | 0 | 0 |
Stability analysis and stabilization of LPV systems with jumps and (piecewise) differentiable parameters using continuous and sampled-data controllers | Linear Parameter-Varying (LPV) systems with jumps and piecewise
differentiable parameters is a class of hybrid LPV systems for which no
tailored stability analysis and stabilization conditions have been obtained so
far. We fill this gap here by proposing an approach relying on the
reformulation of the considered LPV system as an extended equivalent hybrid
system that will incorporate, through a suitable state augmentation,
information on both the dynamics of the state of the system and the considered
class of parameter trajectories. Two stability conditions are established using
a result pertaining on the stability of hybrid systems and shown to naturally
generalize and unify the well-known quadratic and robust stability criteria
together. The obtained conditions being infinite-dimensional semidefinite
programming problems, a relaxation approach based on sum of squares programming
is used in order to obtain tractable finite-dimensional conditions. The
conditions are then losslessly extended to solve two control problems, namely,
the stabilization by continuous and sampled-data gain-scheduled state-feedback
controllers. The approach is finally illustrated on several examples from the
literature.
| 1 | 0 | 1 | 0 | 0 | 0 |
Identity Testing and Interpolation from High Powers of Polynomials of Large Degree over Finite Fields | We consider the problem of identity testing and recovering (that is,
interpolating) of a "hidden" monic polynomials $f$, given an oracle access to
$f(x)^e$ for $x\in\mathbb F_q$, where $\mathbb F_q$ is the finite field of $q$
elements and an extension fields access is not permitted.
The naive interpolation algorithm needs $de+1$ queries, where $d =\max\{{\rm
deg}\ f, {\rm deg }\ g\}$ and thus requires $ de<q$. For a prime $q = p$, we
design an algorithm that is asymptotically better in certain cases, especially
when $d$ is large. The algorithm is based on a result of independent interest
in spirit of additive combinatorics. It gives an upper bound on the number of
values of a rational function of large degree, evaluated on a short sequence of
consecutive integers, that belong to a small subgroup of $\mathbb F_p^*$.
| 1 | 0 | 1 | 0 | 0 | 0 |
A bird's eye view on the flat and conic band world of the honeycomb and Kagome lattices: Towards an understanding of 2D Metal-Organic Frameworks electronic structure | We present a thorough tight-binding analysis of the band structure of a wide
variety of lattices belonging to the class of honeycomb and Kagome systems
including several mixed forms combining both lattices. The band structure of
these systems are made of a combination of dispersive and flat bands. The
dispersive bands possess Dirac cones (linear dispersion) at the six corners (K
points) of the Brillouin zone although in peculiar cases Dirac cones at the
center of the zone $(\Gamma$ point) appear. The flat bands can be of different
nature. Most of them are tangent to the dispersive bands at the center of the
zone but some, for symmetry reasons, do not hybridize with other states. The
objective of our work is to provide an analysis of a wide class of so-called
ligand-decorated honeycomb Kagome lattices that are observed in 2D
metal-organic framework (MOF) where the ligand occupy honeycomb sites and the
metallic atoms the Kagome sites. We show that the $p_x$-$p_y$ graphene model is
relevant in these systems and there exists four types of flat bands: Kagome
flat (singly degenerate) bands, two kinds of ligand-centered flat bands (A$_2$
like and E like, respectively doubly and singly degenerate) and metal-centered
(three fold degenerate) flat bands.
| 0 | 1 | 0 | 0 | 0 | 0 |
Replacement AutoEncoder: A Privacy-Preserving Algorithm for Sensory Data Analysis | An increasing number of sensors on mobile, Internet of things (IoT), and
wearable devices generate time-series measurements of physical activities.
Though access to the sensory data is critical to the success of many beneficial
applications such as health monitoring or activity recognition, a wide range of
potentially sensitive information about the individuals can also be discovered
through access to sensory data and this cannot easily be protected using
traditional privacy approaches.
In this paper, we propose a privacy-preserving sensing framework for managing
access to time-series data in order to provide utility while protecting
individuals' privacy. We introduce Replacement AutoEncoder, a novel algorithm
which learns how to transform discriminative features of data that correspond
to sensitive inferences, into some features that have been more observed in
non-sensitive inferences, to protect users' privacy. This efficiency is
achieved by defining a user-customized objective function for deep
autoencoders. Our replacement method will not only eliminate the possibility of
recognizing sensitive inferences, it also eliminates the possibility of
detecting the occurrence of them. That is the main weakness of other approaches
such as filtering or randomization. We evaluate the efficacy of the algorithm
with an activity recognition task in a multi-sensing environment using
extensive experiments on three benchmark datasets. We show that it can retain
the recognition accuracy of state-of-the-art techniques while simultaneously
preserving the privacy of sensitive information. Finally, we utilize the GANs
for detecting the occurrence of replacement, after releasing data, and show
that this can be done only if the adversarial network is trained on the users'
original data.
| 1 | 0 | 0 | 1 | 0 | 0 |
Application of the Mixed Time-averaging Semiclassical Initial Value Representation method to Complex Molecular Spectra | The recently introduced mixed time-averaging semiclassical initial value
representation molecular dynamics method for spectroscopic calculations [M.
Buchholz, F. Grossmann, and M. Ceotto, J. Chem. Phys. 144, 094102 (2016)] is
applied to systems with up to 61 dimensions, ruled by a condensed phase
Caldeira-Leggett model potential. By calculating the ground state as well as
the first few excited states of the system Morse oscillator, changes of both
the harmonic frequency and the anharmonicity are determined. The method
faithfully reproduces blueshift and redshift effects and the importance of the
counter term, as previously suggested by other methods. Differently from
previous methods, the present semiclassical method does not take advantage of
the specific form of the potential and it can represent a practical tool that
opens the route to direct ab initio semiclassical simulation of condensed phase
systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Bounds on layer potentials with rough inputs for higher order elliptic equations | In this paper we establish square-function estimates on the double and single
layer potentials with rough inputs for divergence form elliptic operators, of
arbitrary even order 2m, with variable t-independent coefficients in the upper
half-space.
| 0 | 0 | 1 | 0 | 0 | 0 |
From support $τ$-tilting posets to algebras | The aim of this paper is to study a poset isomorphism between two support
$\tau$-tilting posets. We take several algebraic information from combinatorial
properties of support $\tau$-tilting posets. As an application, we treat a
certain class of basic algebras which contains preprojective algebras of type
$A$, Nakayama algebras, and generalized Brauer tree algebras. We provide a
necessary condition for that an algebra $\Lambda$ share the same support
$\tau$-tilting poset with a given algebra $\Gamma$ in this class. Furthermore,
we see that this necessary condition is also a sufficient condition if $\Gamma$
is either a preprojective algebra of type $A$, a Nakayama algebra, or a
generalized Brauer tree algebra.
| 0 | 0 | 1 | 0 | 0 | 0 |
Measuring the reionization 21 cm fluctuations using clustering wedges | One of the main challenges in probing the reionization epoch using the
redshifted 21 cm line is that the magnitude of the signal is several orders
smaller than the astrophysical foregrounds. One of the methods to deal with the
problem is to avoid a wedge-shaped region in the Fourier $k_{\perp} -
k_{\parallel}$ space which contains the signal from the spectrally smooth
foregrounds. However, measuring the spherically averaged power spectrum using
only modes outside this wedge (i.e., in the reionization window), leads to a
bias. We provide a prescription, based on expanding the power spectrum in terms
of the shifted Legendre polynomials, which can be used to compute the angular
moments of the power spectrum in the reionization window. The prescription
requires computation of the monopole, quadrupole and hexadecapole moments of
the power spectrum using the theoretical model under consideration and also the
knowledge of the effective extent of the foreground wedge in the $k_{\perp} -
k_{\parallel}$ plane. One can then calculate the theoretical power spectrum in
the window which can be directly compared with observations. The analysis
should have implications for avoiding any bias in the parameter constraints
using 21 cm power spectrum data.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sensing-Constrained LQG Control | Linear-Quadratic-Gaussian (LQG) control is concerned with the design of an
optimal controller and estimator for linear Gaussian systems with imperfect
state information. Standard LQG assumes the set of sensor measurements, to be
fed to the estimator, to be given. However, in many problems, arising in
networked systems and robotics, one may not be able to use all the available
sensors, due to power or payload constraints, or may be interested in using the
smallest subset of sensors that guarantees the attainment of a desired control
goal. In this paper, we introduce the sensing-constrained LQG control problem,
in which one has to jointly design sensing, estimation, and control, under
given constraints on the resources spent for sensing. We focus on the realistic
case in which the sensing strategy has to be selected among a finite set of
possible sensing modalities. While the computation of the optimal sensing
strategy is intractable, we present the first scalable algorithm that computes
a near-optimal sensing strategy with provable sub-optimality guarantees. To
this end, we show that a separation principle holds, which allows the design of
sensing, estimation, and control policies in isolation. We conclude the paper
by discussing two applications of sensing-constrained LQG control, namely,
sensing-constrained formation control and resource-constrained robot
navigation.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deep Residual Learning for Instrument Segmentation in Robotic Surgery | Detection, tracking, and pose estimation of surgical instruments are crucial
tasks for computer assistance during minimally invasive robotic surgery. In the
majority of cases, the first step is the automatic segmentation of surgical
tools. Prior work has focused on binary segmentation, where the objective is to
label every pixel in an image as tool or background. We improve upon previous
work in two major ways. First, we leverage recent techniques such as deep
residual learning and dilated convolutions to advance binary-segmentation
performance. Second, we extend the approach to multi-class segmentation, which
lets us segment different parts of the tool, in addition to background. We
demonstrate the performance of this method on the MICCAI Endoscopic Vision
Challenge Robotic Instruments dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
Computing Constrained Approximate Equilibria in Polymatrix Games | This paper is about computing constrained approximate Nash equilibria in
polymatrix games, which are succinctly represented many-player games defined by
an interaction graph between the players. In a recent breakthrough, Rubinstein
showed that there exists a small constant $\epsilon$, such that it is
PPAD-complete to find an (unconstrained) $\epsilon$-Nash equilibrium of a
polymatrix game. In the first part of the paper, we show that is NP-hard to
decide if a polymatrix game has a constrained approximate equilibrium for 9
natural constraints and any non-trivial approximation guarantee. These results
hold even for planar bipartite polymatrix games with degree 3 and at most 7
strategies per player, and all non-trivial approximation guarantees. These
results stand in contrast to similar results for bimatrix games, which
obviously need a non-constant number of actions, and which rely on stronger
complexity-theoretic conjectures such as the exponential time hypothesis. In
the second part, we provide a deterministic QPTAS for interaction graphs with
bounded treewidth and with logarithmically many actions per player that can
compute constrained approximate equilibria for a wide family of constraints
that cover many of the constraints dealt with in the first part.
| 1 | 0 | 0 | 0 | 0 | 0 |
Evolution in Groups: A deeper look at synaptic cluster driven evolution of deep neural networks | A promising paradigm for achieving highly efficient deep neural networks is
the idea of evolutionary deep intelligence, which mimics biological evolution
processes to progressively synthesize more efficient networks. A crucial design
factor in evolutionary deep intelligence is the genetic encoding scheme used to
simulate heredity and determine the architectures of offspring networks. In
this study, we take a deeper look at the notion of synaptic cluster-driven
evolution of deep neural networks which guides the evolution process towards
the formation of a highly sparse set of synaptic clusters in offspring
networks. Utilizing a synaptic cluster-driven genetic encoding, the
probabilistic encoding of synaptic traits considers not only individual
synaptic properties but also inter-synaptic relationships within a deep neural
network. This process results in highly sparse offspring networks which are
particularly tailored for parallel computational devices such as GPUs and deep
neural network accelerator chips. Comprehensive experimental results using four
well-known deep neural network architectures (LeNet-5, AlexNet, ResNet-56, and
DetectNet) on two different tasks (object categorization and object detection)
demonstrate the efficiency of the proposed method. Cluster-driven genetic
encoding scheme synthesizes networks that can achieve state-of-the-art
performance with significantly smaller number of synapses than that of the
original ancestor network. ($\sim$125-fold decrease in synapses for MNIST).
Furthermore, the improved cluster efficiency in the generated offspring
networks ($\sim$9.71-fold decrease in clusters for MNIST and a $\sim$8.16-fold
decrease in clusters for KITTI) is particularly useful for accelerated
performance on parallel computing hardware architectures such as those in GPUs
and deep neural network accelerator chips.
| 1 | 0 | 0 | 1 | 0 | 0 |
Polynomial Time and Sample Complexity for Non-Gaussian Component Analysis: Spectral Methods | The problem of Non-Gaussian Component Analysis (NGCA) is about finding a
maximal low-dimensional subspace $E$ in $\mathbb{R}^n$ so that data points
projected onto $E$ follow a non-gaussian distribution. Although this is an
appropriate model for some real world data analysis problems, there has been
little progress on this problem over the last decade.
In this paper, we attempt to address this state of affairs in two ways.
First, we give a new characterization of standard gaussian distributions in
high-dimensions, which lead to effective tests for non-gaussianness. Second, we
propose a simple algorithm, \emph{Reweighted PCA}, as a method for solving the
NGCA problem. We prove that for a general unknown non-gaussian distribution,
this algorithm recovers at least one direction in $E$, with sample and time
complexity depending polynomially on the dimension of the ambient space. We
conjecture that the algorithm actually recovers the entire $E$.
| 1 | 0 | 0 | 1 | 0 | 0 |
ARABIS: an Asynchronous Acoustic Indoor Positioning System for Mobile Devices | Acoustic ranging based indoor positioning solutions have the advantage of
higher ranging accuracy and better compatibility with commercial-off-the-self
consumer devices. However, similar to other time-domain based approaches using
Time-of-Arrival and Time-Difference-of-Arrival, they suffer from performance
degradation in presence of multi-path propagation and low received
signal-to-noise ratio (SNR) in indoor environments. In this paper, we improve
upon our previous work on asynchronous acoustic indoor positioning and develop
ARABIS, a robust and low-cost acoustic positioning system (IPS) for mobile
devices. We develop a low-cost acoustic board custom-designed to support large
operational ranges and extensibility. To mitigate the effects of low SNR and
multi-path propagation, we devise a robust algorithm that iteratively removes
possible outliers by taking advantage of redundant TDoA estimates. Experiments
have been carried in two testbeds of sizes 10.67m*7.76m and 15m*15m, one in an
academic building and one in a convention center. The proposed system achieves
average and 95% quantile localization errors of 7.4cm and 16.0cm in the first
testbed with 8 anchor nodes and average and 95% quantile localization errors of
20.4cm and 40.0cm in the second testbed with 4 anchor nodes only.
| 1 | 0 | 0 | 0 | 0 | 0 |
Realistic Evaluation of Deep Semi-Supervised Learning Algorithms | Semi-supervised learning (SSL) provides a powerful framework for leveraging
unlabeled data when labels are limited or expensive to obtain. SSL algorithms
based on deep neural networks have recently proven successful on standard
benchmark tasks. However, we argue that these benchmarks fail to address many
issues that these algorithms would face in real-world applications. After
creating a unified reimplementation of various widely-used SSL techniques, we
test them in a suite of experiments designed to address these issues. We find
that the performance of simple baselines which do not use unlabeled data is
often underreported, that SSL methods differ in sensitivity to the amount of
labeled and unlabeled data, and that performance can degrade substantially when
the unlabeled dataset contains out-of-class examples. To help guide SSL
research towards real-world applicability, we make our unified reimplemention
and evaluation platform publicly available.
| 0 | 0 | 0 | 1 | 0 | 0 |
Recovering piecewise constant refractive indices by a single far-field pattern | We are concerned with the inverse scattering problem of recovering an
inhomogeneous medium by the associated acoustic wave measurement. We prove that
under certain assumptions, a single far-field pattern determines the values of
a perturbation to the refractive index on the corners of its support. These
assumptions are satisfied for example in the low acoustic frequency regime. As
a consequence if the perturbation is piecewise constant with either a
polyhedral nest geometry or a known polyhedral cell geometry, such as a pixel
or voxel array, we establish the injectivity of the perturbation to far-field
map given a fixed incident wave. This is the first unique determinancy result
of its type in the literature, and all of the existing results essentially make
use of infinitely many measurements.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Bezout Inequalities for non-homogeneous Polynomial Ideals | We introduce a "workable" notion of degree for non-homogeneous polynomial
ideals and formulate and prove ideal theoretic Bézout Inequalities for the
sum of two ideals in terms of this notion of degree and the degree of
generators. We compute probabilistically the degree of an equidimensional
ideal.
| 1 | 0 | 1 | 0 | 0 | 0 |
Joint Mixability of Elliptical Distributions and Related Families | In this paper, we further develop the theory of complete mixability and joint
mixability for some distribution families. We generalize a result of
Rüschendorf and Uckelmann (2002) related to complete mixability of continuous
distribution function having a symmetric and unimodal density. Two different
proofs to a result of Wang and Wang (2016) which related to the joint
mixability of elliptical distributions with the same characteristic generator
are present. We solve the Open Problem 7 in Wang (2015) by constructing a
bimodal-symmetric distribution. The joint mixability of slash-elliptical
distributions and skew-elliptical distributions is studied and the extension to
multivariate distributions is also investigated.
| 0 | 0 | 1 | 1 | 0 | 0 |
Secure uniform random number extraction via incoherent strategies | To guarantee the security of uniform random numbers generated by a quantum
random number generator, we study secure extraction of uniform random numbers
when the environment of a given quantum state is controlled by the third party,
the eavesdropper. Here we restrict our operations to incoherent strategies that
are composed of the measurement on the computational basis and incoherent
operations (or incoherence-preserving operations). We show that the maximum
secure extraction rate is equal to the relative entropy of coherence. By
contrast, the coherence of formation gives the extraction rate when a certain
constraint is imposed on eavesdropper's operations. The condition under which
the two extraction rates coincide is then determined. Furthermore, we find that
the exponential decreasing rate of the leaked information is characterized by
Rényi relative entropies of coherence. These results clarify the power of
incoherent strategies in random number generation, and can be applied to
guarantee the quality of random numbers generated by a quantum random number
generator.
| 1 | 0 | 0 | 0 | 0 | 0 |
Mobile Encryption Gateway (MEG) for Email Encryption | Email cryptography applications often suffer from major problems that prevent
their widespread implementation. MEG, or the Mobile Encryption Gateway aims to
fix the issues associated with email encryption by ensuring that encryption is
easy to perform while still maintaining data security. MEG performs automatic
decryption and encryption of all emails using PGP. Users do not need to
understand the internal workings of the encryption process to use the
application. MEG is meant to be email-client-agnostic, enabling users to employ
virtually any email service to send messages. Encryption actions are performed
on the user's mobile device, which means their keys and data remain personal.
MEG can also tackle network effect problems by inviting non-users to join. Most
importantly, MEG uses end-to-end encryption, which ensures that all aspects of
the encrypted information remains private. As a result, we are hopeful that MEG
will finally solve the problem of practical email encryption.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nematic Skyrmions in Odd-Parity Superconductors | We study topological excitations in two-component nematic superconductors,
with a particular focus on Cu$_x$Bi$_2$Se$_3$ as a candidate material. We find
that the lowest-energy topological excitations are coreless vortices: a bound
state of two spatially separated half-quantum vortices. These objects are
nematic Skyrmions, since they are characterized by an additional topological
charge. The inter-Skyrmion forces are dipolar in this model, i.e. attractive
for certain relative orientations of the Skyrmions, hence forming
multi-Skyrmion bound states.
| 0 | 1 | 0 | 0 | 0 | 0 |
Concentration and consistency results for canonical and curved exponential-family models of random graphs | Statistical inference for exponential-family models of random graphs with
dependent edges is challenging. We stress the importance of additional
structure and show that additional structure facilitates statistical inference.
A simple example of a random graph with additional structure is a random graph
with neighborhoods and local dependence within neighborhoods. We develop the
first concentration and consistency results for maximum likelihood and
$M$-estimators of a wide range of canonical and curved exponential-family
models of random graphs with local dependence. All results are non-asymptotic
and applicable to random graphs with finite populations of nodes, although
asymptotic consistency results can be obtained as well. In addition, we show
that additional structure can facilitate subgraph-to-graph estimation, and
present concentration results for subgraph-to-graph estimators. As an
application, we consider popular curved exponential-family models of random
graphs, with local dependence induced by transitivity and parameter vectors
whose dimensions depend on the number of nodes.
| 0 | 0 | 1 | 1 | 0 | 0 |
Bayesian Network Learning via Topological Order | We propose a mixed integer programming (MIP) model and iterative algorithms
based on topological orders to solve optimization problems with acyclic
constraints on a directed graph. The proposed MIP model has a significantly
lower number of constraints compared to popular MIP models based on cycle
elimination constraints and triangular inequalities. The proposed iterative
algorithms use gradient descent and iterative reordering approaches,
respectively, for searching topological orders. A computational experiment is
presented for the Gaussian Bayesian network learning problem, an optimization
problem minimizing the sum of squared errors of regression models with L1
penalty over a feature network with application of gene network inference in
bioinformatics.
| 1 | 0 | 0 | 1 | 0 | 0 |
The nature of the giant exomoon candidate Kepler-1625 b-i | The recent announcement of a Neptune-sized exomoon candidate around the
transiting Jupiter-sized object Kepler-1625 b could indicate the presence of a
hitherto unknown kind of gas giant moons, if confirmed. Three transits have
been observed, allowing radius estimates of both objects. Here we investigate
possible mass regimes of the transiting system that could produce the observed
signatures and study them in the context of moon formation in the solar system,
i.e. via impacts, capture, or in-situ accretion. The radius of Kepler-1625 b
suggests it could be anything from a gas giant planet somewhat more massive
than Saturn (0.4 M_Jup) to a brown dwarf (BD) (up to 75 M_Jup) or even a
very-low-mass star (VLMS) (112 M_Jup ~ 0.11 M_sun). The proposed companion
would certainly have a planetary mass. Possible extreme scenarios range from a
highly inflated Earth-mass gas satellite to an atmosphere-free water-rock
companion of about 180 M_Ear. Furthermore, the planet-moon dynamics during the
transits suggest a total system mass of 17.6_{-12.6}^{+19.2} M_Jup. A
Neptune-mass exomoon around a giant planet or low-mass BD would not be
compatible with the common mass scaling relation of the solar system moons
about gas giants. The case of a mini-Neptune around a high-mass BD or a VLMS,
however, would be located in a similar region of the satellite-to-host mass
ratio diagram as Proxima b, the TRAPPIST-1 system, and LHS 1140 b. The capture
of a Neptune-mass object around a 10 M_Jup planet during a close binary
encounter is possible in principle. The ejected object, however, would have had
to be a super-Earth object, raising further questions of how such a system
could have formed. In summary, this exomoon candidate is barely compatible with
established moon formation theories. If it can be validated as orbiting a
super-Jovian planet, then it would pose an exquisite riddle for formation
theorists to solve.
| 0 | 1 | 0 | 0 | 0 | 0 |
Generative Adversarial Privacy | We present a data-driven framework called generative adversarial privacy
(GAP). Inspired by recent advancements in generative adversarial networks
(GANs), GAP allows the data holder to learn the privatization mechanism
directly from the data. Under GAP, finding the optimal privacy mechanism is
formulated as a constrained minimax game between a privatizer and an adversary.
We show that for appropriately chosen adversarial loss functions, GAP provides
privacy guarantees against strong information-theoretic adversaries. We also
evaluate the performance of GAP on multi-dimensional Gaussian mixture models
and the GENKI face database.
| 0 | 0 | 0 | 1 | 0 | 0 |
Hierarchical Modeling of Seed Variety Yields and Decision Making for Future Planting Plans | Eradicating hunger and malnutrition is a key development goal of the 21st
century. We address the problem of optimally identifying seed varieties to
reliably increase crop yield within a risk-sensitive decision-making framework.
Specifically, we introduce a novel hierarchical machine learning mechanism for
predicting crop yield (the yield of different seed varieties of the same crop).
We integrate this prediction mechanism with a weather forecasting model, and
propose three different approaches for decision making under uncertainty to
select seed varieties for planting so as to balance yield maximization and
risk.We apply our model to the problem of soybean variety selection given in
the 2016 Syngenta Crop Challenge. Our prediction model achieves a median
absolute error of 3.74 bushels per acre and thus provides good estimates for
input into the decision models.Our decision models identify the selection of
soybean varieties that appropriately balance yield and risk as a function of
the farmer's risk aversion level. More generally, our models support farmers in
decision making about which seed varieties to plant.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Clinical and Finite Elements Study of Stress Urinary Incontinence in Women Using Fluid-Structure Interactions | Stress Urinary Incontinence (SUI) or urine leakage from urethra occurs due to
an increase in abdominal pressure resulting from stress like a cough or jumping
height. SUI is more frequent among post-menopausal women. In the absence of
bladder contraction, vesical pressure exceeds from urethral pressure leading to
urine leakage. Despite a large number of patients diagnosed with this problem,
few studies have investigated its function and mechanics. The main goal of this
study is to model bladder and urethra computationally under an external
pressure like sneezing. Finite Element Method and Fluid-Structure Interactions
are utilized for simulation. Linear mechanical properties assigned to the
bladder and urethra and pressure boundary conditions are indispensable in this
model. The results show good accordance between the clinical data and predicted
values of the computational models, such as the pressure at the center of the
bladder. This indicates that numerical methods and simplified physics of
biological systems like inferior urinary tract are helpful to achieve the
results similar to clinical results, in order to investigate pathological
conditions.
| 1 | 0 | 0 | 0 | 0 | 0 |
The n-term Approximation of Periodic Generalized Lévy Processes | In this paper, we study the compressibility of random processes and fields,
called generalized Lévy processes, that are solutions of stochastic
differential equations driven by $d$-dimensional periodic Lévy white noises.
Our results are based on the estimation of the Besov regularity of Lévy white
noises and generalized Lévy processes. We show in particular that
non-Gaussian generalized Lévy processes are more compressible in a wavelet
basis than the corresponding Gaussian processes, in the sense that their
$n$-term approximation error decays faster. We quantify this compressibility in
terms of the Blumenthal-Getoor index of the underlying Lévy white noise.
| 0 | 0 | 1 | 0 | 0 | 0 |
Sliced rotated sphere packing designs | Space-filling designs are popular choices for computer experiments. A sliced
design is a design that can be partitioned into several subdesigns. We propose
a new type of sliced space-filling design called sliced rotated sphere packing
designs. Their full designs and subdesigns are rotated sphere packing designs.
They are constructed by rescaling, rotating, translating and extracting the
points from a sliced lattice. We provide two fast algorithms to generate such
designs. Furthermore, we propose a strategy to use sliced rotated sphere
packing designs adaptively. Under this strategy, initial runs are uniformly
distributed in the design space, follow-up runs are added by incorporating
information gained from initial runs, and the combined design is space-filling
for any local region. Examples are given to illustrate its potential
application.
| 0 | 0 | 1 | 1 | 0 | 0 |
On the Convergence of Weighted AdaGrad with Momentum for Training Deep Neural Networks | Adaptive stochastic gradient descent methods, such as AdaGrad, RMSProp, Adam,
AMSGrad, etc., have been demonstrated efficacious in solving non-convex
stochastic optimization, such as training deep neural networks. However, their
convergence rates have not been touched under the non-convex stochastic
circumstance except recent breakthrough results on AdaGrad, perturbed AdaGrad
and AMSGrad. In this paper, we propose two new adaptive stochastic gradient
methods called AdaHB and AdaNAG which integrate a novel weighted
coordinate-wise AdaGrad with heavy ball momentum and Nesterov accelerated
gradient momentum, respectively. The $\mathcal{O}(\frac{\log{T}}{\sqrt{T}})$
non-asymptotic convergence rates of AdaHB and AdaNAG in non-convex stochastic
setting are also jointly established by leveraging a newly developed unified
formulation of these two momentum mechanisms. Moreover, comparisons have been
made between AdaHB, AdaNAG, Adam and RMSProp, which, to a certain extent,
explains the reasons why Adam and RMSProp are divergent. In particular, when
momentum term vanishes we obtain convergence rate of coordinate-wise AdaGrad in
non-convex stochastic setting as a byproduct.
| 1 | 0 | 0 | 1 | 0 | 0 |
Cyclic Dominance in the Spatial Coevolutionary Optional Prisoner's Dilemma Game | This paper studies scenarios of cyclic dominance in a coevolutionary spatial
model in which game strategies and links between agents adaptively evolve over
time. The Optional Prisoner's Dilemma (OPD) game is employed. The OPD is an
extended version of the traditional Prisoner's Dilemma where players have a
third option to abstain from playing the game. We adopt an agent-based
simulation approach and use Monte Carlo methods to perform the OPD with
coevolutionary rules. The necessary conditions to break the scenarios of cyclic
dominance are also investigated. This work highlights that cyclic dominance is
essential in the sustenance of biodiversity. Moreover, we also discuss the
importance of a spatial coevolutionary model in maintaining cyclic dominance in
adverse conditions.
| 1 | 1 | 1 | 0 | 0 | 0 |
Analysis of universal adversarial perturbations | Deep networks have recently been shown to be vulnerable to universal
perturbations: there exist very small image-agnostic perturbations that cause
most natural images to be misclassified by such classifiers. In this paper, we
propose the first quantitative analysis of the robustness of classifiers to
universal perturbations, and draw a formal link between the robustness to
universal perturbations, and the geometry of the decision boundary.
Specifically, we establish theoretical bounds on the robustness of classifiers
under two decision boundary models (flat and curved models). We show in
particular that the robustness of deep networks to universal perturbations is
driven by a key property of their curvature: there exists shared directions
along which the decision boundary of deep networks is systematically positively
curved. Under such conditions, we prove the existence of small universal
perturbations. Our analysis further provides a novel geometric method for
computing universal perturbations, in addition to explaining their properties.
| 1 | 0 | 0 | 1 | 0 | 0 |
Unsupervised and Semi-supervised Anomaly Detection with LSTM Neural Networks | We investigate anomaly detection in an unsupervised framework and introduce
Long Short Term Memory (LSTM) neural network based algorithms. In particular,
given variable length data sequences, we first pass these sequences through our
LSTM based structure and obtain fixed length sequences. We then find a decision
function for our anomaly detectors based on the One Class Support Vector
Machines (OC-SVM) and Support Vector Data Description (SVDD) algorithms. As the
first time in the literature, we jointly train and optimize the parameters of
the LSTM architecture and the OC-SVM (or SVDD) algorithm using highly effective
gradient and quadratic programming based training methods. To apply the
gradient based training method, we modify the original objective criteria of
the OC-SVM and SVDD algorithms, where we prove the convergence of the modified
objective criteria to the original criteria. We also provide extensions of our
unsupervised formulation to the semi-supervised and fully supervised
frameworks. Thus, we obtain anomaly detection algorithms that can process
variable length data sequences while providing high performance, especially for
time series data. Our approach is generic so that we also apply this approach
to the Gated Recurrent Unit (GRU) architecture by directly replacing our LSTM
based structure with the GRU based structure. In our experiments, we illustrate
significant performance gains achieved by our algorithms with respect to the
conventional methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
Wearable Health Monitoring Using Capacitive Voltage-Mode Human Body Communication | Rapid miniaturization and cost reduction of computing, along with the
availability of wearable and implantable physiological sensors have led to the
growth of human Body Area Network (BAN) formed by a network of such sensors and
computing devices. One promising application of such a network is wearable
health monitoring where the collected data from the sensors would be
transmitted and analyzed to assess the health of a person. Typically, the
devices in a BAN are connected through wireless (WBAN), which suffers from
energy inefficiency due to the high-energy consumption of wireless
transmission. Human Body Communication (HBC) uses the relatively low loss human
body as the communication medium to connect these devices, promising order(s)
of magnitude better energy-efficiency and built-in security compared to WBAN.
In this paper, we demonstrate a health monitoring device and system built using
Commercial-Off-The- Shelf (COTS) sensors and components, that can collect data
from physiological sensors and transmit it through a) intra-body HBC to another
device (hub) worn on the body or b) upload health data through HBC-based
human-machine interaction to an HBC capable machine. The system design
constraints and signal transfer characteristics for the implemented HBC-based
wearable health monitoring system are measured and analyzed, showing reliable
connectivity with >8x power savings compared to Bluetooth lowenergy (BTLE).
| 1 | 0 | 0 | 0 | 0 | 0 |
Electromagnetically Induced Transparency (EIT) and Autler-Townes (AT) splitting in the Presence of Band-Limited White Gaussian Noise | We investigate the effect of band-limited white Gaussian noise (BLWGN) on
electromagnetically induced transparency (EIT) and Autler-Townes (AT)
splitting, when performing atom-based continuous-wave (CW) radio-frequency (RF)
electric (E) field strength measurements with Rydberg atoms in an atomic vapor.
This EIT/AT-based E-field measurement approach is currently being investigated
by several groups around the world as a means to develop a new SI traceable RF
E-field measurement technique. For this to be a useful technique, it is
important to understand the influence of BLWGN. We perform EIT/AT based E-field
experiments with BLWGN centered on the RF transition frequency and for the
BLWGN blue-shifted and red-shifted relative to the RF transition frequency. The
EIT signal can be severely distorted for certain noise conditions (band-width,
center-frequency, and noise power), hence altering the ability to accurately
measure a CW RF E-field strength. We present a model to predict the changes in
the EIT signal in the presence of noise. This model includes AC Stark shifts
and on resonance transitions associated with the noise source. The results of
this model are compared to the experimental data and we find very good
agreement between the two.
| 0 | 1 | 0 | 0 | 0 | 0 |
SINR Outage Evaluation in Cellular Networks: Saddle Point Approximation (SPA) Using Normal Inverse Gaussian (NIG) Distribution | Signal-to-noise-plus-interference ratio (SINR) outage probability is among
one of the key performance metrics of a wireless cellular network. In this
paper, we propose a semi-analytical method based on saddle point approximation
(SPA) technique to calculate the SINR outage of a wireless system whose SINR
can be modeled in the form $\frac{\sum_{i=1}^M X_i}{\sum_{i=1}^N Y_i +1}$ where
$X_i$ denotes the useful signal power, $Y_i$ denotes the power of the
interference signal, and $\sum_{i=1}^M X_i$, $\sum_{i=1}^N Y_i$ are independent
random variables. Both $M$ and $N$ can also be random variables. The proposed
approach is based on the saddle point approximation to cumulative distribution
function (CDF) as given by \tit{Wood-Booth-Butler formula}. The approach is
applicable whenever the cumulant generating function (CGF) of the received
signal and interference exists, and it allows us to tackle distributions with
large skewness and kurtosis with higher accuracy. In this regard, we exploit a
four parameter \tit{normal-inverse Gaussian} (NIG) distribution as a base
distribution. Given that the skewness and kurtosis satisfy a specific
condition, NIG-based SPA works reliably. When this condition is violated, we
recommend SPA based on normal or symmetric NIG distribution, both special cases
of NIG distribution, at the expense of reduced accuracy. For the purpose of
demonstration, we apply SPA for the SINR outage evaluation of a typical user
experiencing a downlink coordinated multi-point transmission (CoMP) from the
base stations (BSs) that are modeled by homogeneous Poisson point process. We
characterize the outage of the typical user in scenarios such as (a)~when the
number and locations of interferers are random, and (b)~when the fading
channels and number of interferers are random. Numerical results are presented
to illustrate the accuracy of the proposed set of approximations.
| 1 | 0 | 1 | 0 | 0 | 0 |
Smart grid modeling and simulation - Comparing GridLAB-D and RAPSim via two Case studies | One of the most important tools for the development of the smart grid is
simulation. Therefore, analyzing, designing, modeling, and simulating the smart
grid will allow to explore future scenarios and support decision making for the
grid's development. In this paper, we compare two open source simulation tools
for the smart grid, GridLAB-Distribution (GridLAB-D) and Renewable Alternative
Power systems Simulation (RAPSim). The comparison is based on the
implementation of two case studies related to a power flow problem and the
integration of renewable energy resources to the grid. Results show that even
for very simple case studies, specific properties such as weather simulation or
load modeling are influencing the results in a way that they are not
reproducible with a different simulator.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Large-Scale CNN Ensemble for Medication Safety Analysis | Revealing Adverse Drug Reactions (ADR) is an essential part of post-marketing
drug surveillance, and data from health-related forums and medical communities
can be of a great significance for estimating such effects. In this paper, we
propose an end-to-end CNN-based method for predicting drug safety on user
comments from healthcare discussion forums. We present an architecture that is
based on a vast ensemble of CNNs with varied structural parameters, where the
prediction is determined by the majority vote. To evaluate the performance of
the proposed solution, we present a large-scale dataset collected from a
medical website that consists of over 50 thousand reviews for more than 4000
drugs. The results demonstrate that our model significantly outperforms
conventional approaches and predicts medicine safety with an accuracy of 87.17%
for binary and 62.88% for multi-classification tasks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Solutions to twisted word equations and equations in virtually free groups | It is well-known that the problem to solve equations in virtually free groups
can be reduced to the problem to solve twisted word equations with regular
constraints over free monoids with involution.
In a first part of the paper we prove that the set of all solutions of such a
twisted word equation is an EDT0L language and that the specification of that
EDT0L language can be computed in PSPACE. (We give a more precise bound in the
paper.) Within the same complexity bound we can decide whether the solution set
is empty, finite, or infinite. No PSPACE-algorithm, actually no concrete
complexity bound was known for deciding emptiness before. Decidability of
finiteness was considered to be an open problem.
In the second part we apply the results to the solution set of equations with
rational constraints in finitely generated virtually free groups. For each such
group we obtain the same results as above for the set of solutions in standard
normal forms with respect to some natural set of generators. In particular, for
a fixed group we can decide in PSPACE whether the solution set is empty,
finite, or infinite.
Our results generalize the work by Lohrey and Sénizergues (ICALP 2006) and
Dahmani and Guirardel (J. of Topology 2010) with respect to both complexity and
expressive power. Neither paper gave any concrete complexity bound and the
results in these papers are stated subsets of solutions only, whereas our
results concern all solutions. Moreover, we give a formal language
characterization of the full solution set as an EDT0L language.
| 1 | 0 | 1 | 0 | 0 | 0 |
DeepTransport: Learning Spatial-Temporal Dependency for Traffic Condition Forecasting | Predicting traffic conditions has been recently explored as a way to relieve
traffic congestion. Several pioneering approaches have been proposed based on
traffic observations of the target location as well as its adjacent regions,
but they obtain somewhat limited accuracy due to lack of mining road topology.
To address the effect attenuation problem, we propose to take account of the
traffic of surrounding locations(wider than adjacent range). We propose an
end-to-end framework called DeepTransport, in which Convolutional Neural
Networks (CNN) and Recurrent Neural Networks (RNN) are utilized to obtain
spatial-temporal traffic information within a transport network topology. In
addition, attention mechanism is introduced to align spatial and temporal
information. Moreover, we constructed and released a real-world large traffic
condition dataset with 5-minute resolution. Our experiments on this dataset
demonstrate our method captures the complex relationship in temporal and
spatial domain. It significantly outperforms traditional statistical methods
and a state-of-the-art deep learning method.
| 1 | 0 | 0 | 0 | 0 | 0 |
On Statistical Optimality of Variational Bayes | The article addresses a long-standing open problem on the justification of
using variational Bayes methods for parameter estimation. We provide general
conditions for obtaining optimal risk bounds for point estimates acquired from
mean-field variational Bayesian inference. The conditions pertain to the
existence of certain test functions for the distance metric on the parameter
space and minimal assumptions on the prior. A general recipe for verification
of the conditions is outlined which is broadly applicable to existing Bayesian
models with or without latent variables. As illustrations, specific
applications to Latent Dirichlet Allocation and Gaussian mixture models are
discussed.
| 0 | 0 | 1 | 1 | 0 | 0 |
The Teichmüller Stack | This paper is a comprehensive introduction to the results of [7]. It grew as
an expanded version of a talk given at INdAM Meeting Complex and Symplectic
Geometry, held at Cortona in June 12-18, 2016. It deals with the construction
of the Teichmüller space of a smooth compact manifold M (that is the space of
isomorphism classes of complex structures on M) in arbitrary dimension. The
main problem is that, whenever we leave the world of surfaces, the
Teichmüller space is no more a complex manifold or an analytic space but an
analytic Artin stack. We explain how to construct explicitly an atlas for this
stack using ideas coming from foliation theory. Throughout the article, we use
the case of $\mathbb{S}^3\times\mathbb{S}^1$ as a recurrent example.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Impact of Micro-Packages: An Empirical Study of the npm JavaScript Ecosystem | The rise of user-contributed Open Source Software (OSS) ecosystems
demonstrate their prevalence in the software engineering discipline. Libraries
work together by depending on each other across the ecosystem. From these
ecosystems emerges a minimized library called a micro-package. Micro- packages
become problematic when breaks in a critical ecosystem dependency ripples its
effects to unsuspecting users. In this paper, we investigate the impact of
micro-packages in the npm JavaScript ecosystem. Specifically, we conducted an
empirical in- vestigation with 169,964 JavaScript npm packages to understand
(i) the widespread phenomena of micro-packages, (ii) the size dependencies
inherited by a micro-package and (iii) the developer usage cost (ie., fetch,
install, load times) of using a micro-package. Results of the study find that
micro-packages form a significant portion of the npm ecosystem. Apart from the
ease of readability and comprehension, we show that some micro-packages have
long dependency chains and incur just as much usage costs as other npm
packages. We envision that this work motivates the need for developers to be
aware of how sensitive their third-party dependencies are to critical changes
in the software ecosystem.
| 1 | 0 | 0 | 0 | 0 | 0 |
Location and Orientation Optimisation for Spatially Stretched Tripole Arrays Based on Compressive Sensing | The design of sparse spatially stretched tripole arrays is an important but
also challenging task and this paper proposes for the very first time efficient
solutions to this problem. Unlike for the design of traditional sparse antenna
arrays, the developed approaches optimise both the dipole locations and
orientations. The novelty of the paper consists in formulating these
optimisation problems into a form that can be solved by the proposed
compressive sensing and Bayesian compressive sensing based approaches. The
performance of the developed approaches is validated and it is shown that
accurate approximation of a reference response can be achieved with a 67%
reduction in the number of dipoles required as compared to an equivalent
uniform spatially stretched tripole array, leading to a significant reduction
in the cost associated with the resulting arrays.
| 1 | 0 | 0 | 0 | 0 | 0 |
Hausdorff dimension of limsup sets of random rectangles in products of regular spaces | The almost sure Hausdorff dimension of the limsup set of randomly distributed
rectangles in a product of Ahlfors regular metric spaces is computed in terms
of the singular value function of the rectangles.
| 0 | 0 | 1 | 0 | 0 | 0 |
Thermal Characterization of Microscale Heat Convection under Rare Gas Condition by a Modified Hot Wire Method | As power electronics shrinks down to sub-micron scale, the thermal transport
from a solid surface to environment becomes significant. Under circumstances
when the device works in rare gas environment, the scale for thermal transport
is comparable to the mean free path of molecules, and is difficult to
characterize. In this work, we present an experimental study about thermal
transport around a microwire in rare gas environment by using a steady state
hot wire method. Unlike conventional hot wire technique of using transient heat
transfer process, this method considers both the heat conduction along the wire
and convection effect from wire surface to surroundings. Convection heat
transfer coefficient from a platinum wire in diameter 25 um to air is
characterized under different heating power and air pressures to comprehend the
effect of temperature and density of gas molecules. It is observed that
convection heat transfer coefficient varies from 14 Wm-2K-1 at 7 Pa to 629
Wm-2K-1 at atmosphere pressure. In free molecule regime, Nusselt number has a
linear relationship with inverse Knudsen number and the slope of 0.274 is
employed to determined equivalent thermal dissipation boundary as 7.03E10-4 m.
In transition regime, the equivalent thermal dissipation boundary is obtained
as 5.02E10-4 m. Under a constant pressure, convection heat transfer coefficient
decreases with increasing temperature, and this correlation is more sensitive
to larger pressure. This work provides a pathway for studying both heat
conduction and heat convection effect at micro/nanoscale under rare gas
environment, the knowledge of which is essential for regulating heat
dissipation in various industrial applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
A summation formula for triples of quadratic spaces | Let $V_1,V_2,V_3$ be a triple of even dimensional vector spaces over a number
field $F$ equipped with nondegenerate quadratic forms
$\mathcal{Q}_1,\mathcal{Q}_2,\mathcal{Q}_3$, respectively. Let \begin{align*} Y
\subset \prod_{i=1}V_i \end{align*} be the closed subscheme consisting of
$(v_1,v_2,v_3)$ on which
$\mathcal{Q}_1(v_1)=\mathcal{Q}_2(v_2)=\mathcal{Q}_3(v_3)$. Motivated by
conjectures of Braverman and Kazhdan and related work of Lafforgue, Ngô, and
Sakellaridis we prove an analogue of the Poisson summation formula for certain
functions on this space.
| 0 | 0 | 1 | 0 | 0 | 0 |
Designing Strassen's algorithm | In 1969, Strassen shocked the world by showing that two n x n matrices could
be multiplied in time asymptotically less than $O(n^3)$. While the recursive
construction in his algorithm is very clear, the key gain was made by showing
that 2 x 2 matrix multiplication could be performed with only 7 multiplications
instead of 8. The latter construction was arrived at by a process of
elimination and appears to come out of thin air. Here, we give the simplest and
most transparent proof of Strassen's algorithm that we are aware of, using only
a simple unitary 2-design and a few easy lines of calculation. Moreover, using
basic facts from the representation theory of finite groups, we use 2-designs
coming from group orbits to generalize our construction to all n (although the
resulting algorithms aren't optimal for n at least 3).
| 1 | 0 | 1 | 0 | 0 | 0 |
A Software-equivalent SNN Hardware using RRAM-array for Asynchronous Real-time Learning | Spiking Neural Network (SNN) naturally inspires hardware implementation as it
is based on biology. For learning, spike time dependent plasticity (STDP) may
be implemented using an energy efficient waveform superposition on memristor
based synapse. However, system level implementation has three challenges.
First, a classic dilemma is that recognition requires current reading for short
voltage$-$spikes which is disturbed by large voltage$-$waveforms that are
simultaneously applied on the same memristor for real$-$time learning i.e. the
simultaneous read$-$write dilemma. Second, the hardware needs to exactly
replicate software implementation for easy adaptation of algorithm to hardware.
Third, the devices used in hardware simulations must be realistic. In this
paper, we present an approach to address the above concerns. First, the
learning and recognition occurs in separate arrays simultaneously in
real$-$time, asynchronously $-$ avoiding non$-$biomimetic clocking based
complex signal management. Second, we show that the hardware emulates software
at every stage by comparison of SPICE (circuit$-$simulator) with MATLAB
(mathematical SNN algorithm implementation in software) implementations. As an
example, the hardware shows 97.5 per cent accuracy in classification which is
equivalent to software for a Fisher$-$Iris dataset. Third, the STDP is
implemented using a model of synaptic device implemented using HfO2 memristor.
We show that an increasingly realistic memristor model slightly reduces the
hardware performance (85 per cent), which highlights the need to engineer RRAM
characteristics specifically for SNN.
| 1 | 0 | 0 | 0 | 0 | 0 |
Max flow vitality in general and $st$-planar graphs | The \emph{vitality} of an arc/node of a graph with respect to the maximum
flow between two fixed nodes $s$ and $t$ is defined as the reduction of the
maximum flow caused by the removal of that arc/node. In this paper we address
the issue of determining the vitality of arcs and/or nodes for the maximum flow
problem. We show how to compute the vitality of all arcs in a general
undirected graph by solving only $2(n-1)$ max flow instances and, In
$st$-planar graphs (directed or undirected) we show how to compute the vitality
of all arcs and all nodes in $O(n)$ worst-case time. Moreover, after
determining the vitality of arcs and/or nodes, and given a planar embedding of
the graph, we can determine the vitality of a `contiguous' set of arcs/nodes in
time proportional to the size of the set.
| 1 | 0 | 0 | 0 | 0 | 0 |
Between Homomorphic Signal Processing and Deep Neural Networks: Constructing Deep Algorithms for Polyphonic Music Transcription | This paper presents a new approach in understanding how deep neural networks
(DNNs) work by applying homomorphic signal processing techniques. Focusing on
the task of multi-pitch estimation (MPE), this paper demonstrates the
equivalence relation between a generalized cepstrum and a DNN in terms of their
structures and functionality. Such an equivalence relation, together with pitch
perception theories and the recently established
rectified-correlations-on-a-sphere (RECOS) filter analysis, provide an
alternative way in explaining the role of the nonlinear activation function and
the multi-layer structure, both of which exist in a cepstrum and a DNN. To
validate the efficacy of this new approach, a new feature designed in the same
fashion is proposed for pitch salience function. The new feature outperforms
the one-layer spectrum in the MPE task and, as predicted, it addresses the
issue of the missing fundamental effect and also achieves better robustness to
noise.
| 1 | 0 | 0 | 0 | 0 | 0 |
Negative membrane capacitance of outer hair cells: electromechanical coupling near resonance | The ability of the mammalian ear in processing high frequency sounds, up to
$\sim$100 kHz, is based on the capability of outer hair cells (OHCs) responding
to stimulation at high frequencies. These cells show a unique motility in their
cell body coupled with charge movement. With this motile element, voltage
changes generated by stimuli at their hair bundles drives the cell body and
that, in turn, amplifies the stimuli. In vitro experiments show that the
movement of these charges significantly increases the membrane capacitance,
limiting the motile activity by additionally attenuating voltage changes. It
was found, however, that such an effect is due to the absence of mechanical
load. In the presence of mechanical resonance, such as in vivo conditions, the
movement of motile charges is expected to create negative capacitance near the
resonance frequency. Therefore this motile mechanism is effective at high
frequencies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Resonant inelastic x-ray scattering operators for $t_{2g}$ orbital systems | We derive general expressions for resonant inelastic x-ray scattering (RIXS)
operators for $t_{2g}$ orbital systems, which exhibit a rich array of
unconventional magnetism arising from unquenched orbital moments. Within the
fast collision approximation, which is valid especially for 4$d$ and 5$d$
transition metal compounds with short core-hole lifetimes, the RIXS operators
are expressed in terms of total spin and orbital angular momenta of the
constituent ions. We then map these operators onto pseudospins that represent
spin-orbit entangled magnetic moments in systems with strong spin-orbit
coupling. Applications of our theory to such systems as iridates and ruthenates
are discussed, with a particular focus on compounds based on $d^4$ ions with
Van Vleck-type nonmagnetic ground state.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hamiltonian Path in Split Graphs- a Dichotomy | In this paper, we investigate Hamiltonian path problem in the context of
split graphs, and produce a dichotomy result on the complexity of the problem.
Our main result is a deep investigation of the structure of $K_{1,4}$-free
split graphs in the context of Hamiltonian path problem, and as a consequence,
we obtain a polynomial-time algorithm to the Hamiltonian path problem in
$K_{1,4}$-free split graphs. We close this paper with the hardness result: we
show that, unless P=NP, Hamiltonian path problem is NP-complete in
$K_{1,5}$-free split graphs by reducing from Hamiltonian cycle problem in
$K_{1,5}$-free split graphs. Thus this paper establishes a "thin complexity
line" separating NP-complete instances and polynomial-time solvable instances.
| 1 | 0 | 0 | 0 | 0 | 0 |
Design Patterns for Fusion-Based Object Retrieval | We address the task of ranking objects (such as people, blogs, or verticals)
that, unlike documents, do not have direct term-based representations. To be
able to match them against keyword queries, evidence needs to be amassed from
documents that are associated with the given object. We present two design
patterns, i.e., general reusable retrieval strategies, which are able to
encompass most existing approaches from the past. One strategy combines
evidence on the term level (early fusion), while the other does it on the
document level (late fusion). We demonstrate the generality of these patterns
by applying them to three different object retrieval tasks: expert finding,
blog distillation, and vertical ranking.
| 1 | 0 | 0 | 0 | 0 | 0 |
Early Routability Assessment in VLSI Floorplans: A Generalized Routing Model | Multiple design iterations are inevitable in nanometer Integrated Circuit
(IC) design flow until desired printability and performance metrics are
achieved. This starts with placement optimization aimed at improving
routability, wirelength, congestion and timing in the design. Contrarily, no
such practice exists on a floorplanned layout, during the early stage of the
design flow. Recently, STAIRoute \cite{karb2} aimed to address that by
identifying the shortest routing path of a net through a set of routing regions
in the floorplan in multiple metal layers. Since the blocks in hierarchical
ASIC/SoC designs do not use all the permissible routing layers for the internal
routing corresponding to standard cell connectivity, the proposed STAIRoute
framework is not an effective for early global routability assessment. This
leads to improper utilization of routing area, specifically in higher routing
layers with fewer routing blockages, as the lack of placement of standard cells
does not facilitates any routing of their interconnections.
This paper presents a generalized model for early global routability
assessment, HGR, by utilizing the free regions over the blocks beyond certain
metal layers. The proposed (hybrid) routing model comprises of (a) the junction
graph model in STAIRoute routing through the block boundary regions in lower
routing layers, and (ii) the grid graph model for routing in higher layers over
the free regions of the blocks.
Experiment with the latest floorplanning benchmarks exhibit an average
reduction of $4\%$, $54\%$ and $70\%$ in netlength, via count, and congestion
respectively when HGR is used over STAIRoute. Further, we conducted another
experiment on an industrial design flow targeted for $45nm$ process, and the
results are encouraging with $~3$X runtime boost when early global routing is
used in conjunction with the existing physical design flow.
| 1 | 0 | 0 | 0 | 0 | 0 |
Inverse Ising problem in continuous time: A latent variable approach | We consider the inverse Ising problem, i.e. the inference of network
couplings from observed spin trajectories for a model with continuous time
Glauber dynamics. By introducing two sets of auxiliary latent random variables
we render the likelihood into a form, which allows for simple iterative
inference algorithms with analytical updates. The variables are: (1) Poisson
variables to linearise an exponential term which is typical for point process
likelihoods and (2) Pólya-Gamma variables, which make the likelihood
quadratic in the coupling parameters. Using the augmented likelihood, we derive
an expectation-maximization (EM) algorithm to obtain the maximum likelihood
estimate of network parameters. Using a third set of latent variables we extend
the EM algorithm to sparse couplings via L1 regularization. Finally, we develop
an efficient approximate Bayesian inference algorithm using a variational
approach. We demonstrate the performance of our algorithms on data simulated
from an Ising model. For data which are simulated from a more biologically
plausible network with spiking neurons, we show that the Ising model captures
well the low order statistics of the data and how the Ising couplings are
related to the underlying synaptic structure of the simulated network.
| 0 | 0 | 0 | 1 | 0 | 0 |
Analysis of the flux growth rate in emerging active regions on the Sun | We studied the emergence process of 42 active region (ARs) by analyzing the
time derivative, R(t), of the total unsigned flux. Line-of-sight magnetograms
acquired by the Helioseismic and Magnetic Imager (HMI) onboard the Solar
Dynamics Observatory (SDO) were used. A continuous piecewise linear fitting to
the R(t)-profile was applied to detect an interval, dt_2, of nearly-constant
R(t) covering one or several local maxima. The averaged over dt_2 magnitude of
R(t) was accepted as an estimate of the maximal value of the flux growth rate,
R_MAX, which varies in a range of (0.5-5)x10^20 Mx hour^-1 for active regions
with the maximal total unsigned flux of (0.5-3)x10^22 Mx. The normalized flux
growth rate, R_N, was defined under an assumption that the saturated total
unsigned flux, F_MAX, equals unity. Out of 42 ARs in our initial list, 36 event
were successfully fitted and they form two subsets (with a small overlap of 8
events): the ARs with a short (<13 hours) interval dt_2 and a high (>0.024
hour^-1) normalized flux emergence rate, R_N, form the "rapid" emergence event
subset. The second subset consists of "gradual" emergence events and it is
characterized by a long (>13 hours) interval dt_2 and a low R_N (<0.024
hour^-1). In diagrams of R_MAX plotted versus F_MAX, the events from different
subsets are not overlapped and each subset displays an individual power law.
The power law index derived from the entire ensemble of 36 events is
0.69+-0.10. The "rapid" emergence is consistent with a "two-step" emergence
process of a single twisted flux tube. The "gradual" emergence is possibly
related to a consecutive rising of several flux tubes emerging at nearly the
same location in the photosphere.
| 0 | 1 | 0 | 0 | 0 | 0 |
Porosity and Differentiability of Lipschitz Maps from Stratified Groups to Banach Homogeneous Groups | Let $f$ be a Lipschitz map from a subset $A$ of a stratified group to a
Banach homogeneous group. We show that directional derivatives of $f$ act as
homogeneous homomorphisms at density points of $A$ outside a $\sigma$-porous
set. At density points of $A$ we establish a pointwise characterization of
differentiability in terms of directional derivatives. We use these new results
to obtain an alternate proof of almost everywhere differentiability of
Lipschitz maps from subsets of stratified groups to Banach homogeneous groups
satisfying a suitably weakened Radon-Nikodym property. As a consequence we also
get an alternative proof of Pansu's Theorem.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Structured Learning Approach with Neural Conditional Random Fields for Sleep Staging | Sleep plays a vital role in human health, both mental and physical. Sleep
disorders like sleep apnea are increasing in prevalence, with the rapid
increase in factors like obesity. Sleep apnea is most commonly treated with
Continuous Positive Air Pressure (CPAP) therapy. Presently, however, there is
no mechanism to monitor a patient's progress with CPAP. Accurate detection of
sleep stages from CPAP flow signal is crucial for such a mechanism. We propose,
for the first time, an automated sleep staging model based only on the flow
signal. Deep neural networks have recently shown high accuracy on sleep staging
by eliminating handcrafted features. However, these methods focus exclusively
on extracting informative features from the input signal, without paying much
attention to the dynamics of sleep stages in the output sequence. We propose an
end-to-end framework that uses a combination of deep convolution and recurrent
neural networks to extract high-level features from raw flow signal with a
structured output layer based on a conditional random field to model the
temporal transition structure of the sleep stages. We improve upon the previous
methods by 10% using our model, that can be augmented to the previous sleep
staging deep learning methods. We also show that our method can be used to
accurately track sleep metrics like sleep efficiency calculated from sleep
stages that can be deployed for monitoring the response of CPAP therapy on
sleep apnea patients. Apart from the technical contributions, we expect this
study to motivate new research questions in sleep science.
| 0 | 0 | 0 | 1 | 0 | 0 |
Dominant dimension and tilting modules | We study which algebras have tilting modules that are both generated and
cogenerated by projective-injective modules. Crawley-Boevey and Sauter have
shown that Auslander algebras have such tilting modules; and for algebras of
global dimension $2$, Auslander algebras are classified by the existence of
such tilting modules.
In this paper, we show that the existence of such a tilting module is
equivalent to the algebra having dominant dimension at least $2$, independent
of its global dimension. In general such a tilting module is not necessarily
cotilting. Here, we show that the algebras which have a tilting-cotilting
module generated-cogenerated by projective-injective modules are precisely
$1$-Auslander-Gorenstein algebras.
When considering such a tilting module, without the assumption that it is
cotilting, we study the global dimension of its endomorphism algebra, and
discuss a connection with the Finitistic Dimension Conjecture. Furthermore, as
special cases, we show that triangular matrix algebras obtained from Auslander
algebras and certain injective modules, have such a tilting module. We also
give a description of which Nakayama algebras have such a tilting module.
| 0 | 0 | 1 | 0 | 0 | 0 |
Fast and Accurate 3D Medical Image Segmentation with Data-swapping Method | Deep neural network models used for medical image segmentation are large
because they are trained with high-resolution three-dimensional (3D) images.
Graphics processing units (GPUs) are widely used to accelerate the trainings.
However, the memory on a GPU is not large enough to train the models. A popular
approach to tackling this problem is patch-based method, which divides a large
image into small patches and trains the models with these small patches.
However, this method would degrade the segmentation quality if a target object
spans multiple patches. In this paper, we propose a novel approach for 3D
medical image segmentation that utilizes the data-swapping, which swaps out
intermediate data from GPU memory to CPU memory to enlarge the effective GPU
memory size, for training high-resolution 3D medical images without patching.
We carefully tuned parameters in the data-swapping method to obtain the best
training performance for 3D U-Net, a widely used deep neural network model for
medical image segmentation. We applied our tuning to train 3D U-Net with
full-size images of 192 x 192 x 192 voxels in brain tumor dataset. As a result,
communication overhead, which is the most important issue, was reduced by
17.1%. Compared with the patch-based method for patches of 128 x 128 x 128
voxels, our training for full-size images achieved improvement on the mean Dice
score by 4.48% and 5.32 % for detecting whole tumor sub-region and tumor core
sub-region, respectively. The total training time was reduced from 164 hours to
47 hours, resulting in 3.53 times of acceleration.
| 1 | 0 | 0 | 0 | 0 | 0 |
Direct Optical Visualization of Water Transport across Polymer Nano-films | Gaining a detailed understanding of water transport behavior through
ultra-thin polymer membranes is increasingly becoming necessary due to the
recent interest in exploring applications such as water desalination using
nanoporous membranes. Current techniques only measure bulk water transport
rates and do not offer direct visualization of water transport which can
provide insights into the microscopic mechanisms affecting bulk behavior such
as the role of defects. We describe the use of a technique, referred here as
Bright-Field Nanoscopy (BFN) to directly image the transport of water across
thin polymer films using a regular bright-field microscope. The technique
exploits the strong thickness dependent color response of an optical stack
consisting of a thin (~25 nm) germanium film deposited over a gold substrate.
Using this technique, we were able to observe the strong influence of the
terminal layer and ambient conditions on the bulk water transport rates in thin
(~ 20 nm) layer-by-layer deposited multilayer films of weak polyelectrolytes
(PEMs).
| 0 | 1 | 0 | 0 | 0 | 0 |
Hyperbolicity cones and imaginary projections | Recently, the authors and de Wolff introduced the imaginary projection of a
polynomial $f\in\mathbb{C}[\mathbf{z}]$ as the projection of the variety of $f$
onto its imaginary part, $\mathcal{I}(f) \ = \ \{\text{Im}(\mathbf{z}) \, : \,
\mathbf{z} \in \mathcal{V}(f) \}$. Since a polynomial $f$ is stable if and only
if $\mathcal{I}(f) \cap \mathbb{R}_{>0}^n \ = \ \emptyset$, the notion offers a
novel geometric view underlying stability questions of polynomials. In this
article, we study the relation between the imaginary projections and
hyperbolicity cones, where the latter ones are only defined for homogeneous
polynomials. Building upon this, for homogeneous polynomials we provide a tight
upper bound for the number of components in the complement $\mathcal{I}(f)^{c}$
and thus for the number of hyperbolicity cones of $f$. And we show that for $n
\ge 2$, a polynomial $f$ in $n$ variables can have an arbitrarily high number
of strictly convex and bounded components in $\mathcal{I}(f)^{c}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Fairness-aware Classification: Criterion, Convexity, and Bounds | Fairness-aware classification is receiving increasing attention in the
machine learning fields. Recently research proposes to formulate the
fairness-aware classification as constrained optimization problems. However,
several limitations exist in previous works due to the lack of a theoretical
framework for guiding the formulation. In this paper, we propose a general
framework for learning fair classifiers which addresses previous limitations.
The framework formulates various commonly-used fairness metrics as convex
constraints that can be directly incorporated into classic classification
models. Within the framework, we propose a constraint-free criterion on the
training data which ensures that any classifier learned from the data is fair.
We also derive the constraints which ensure that the real fairness metric is
satisfied when surrogate functions are used to achieve convexity. Our framework
can be used to for formulating fairness-aware classification with fairness
guarantee and computational efficiency. The experiments using real-world
datasets demonstrate our theoretical results and show the effectiveness of
proposed framework and methods.
| 0 | 0 | 0 | 1 | 0 | 0 |
Optimal Rates for Community Estimation in the Weighted Stochastic Block Model | Community identification in a network is an important problem in fields such
as social science, neuroscience, and genetics. Over the past decade, stochastic
block models (SBMs) have emerged as a popular statistical framework for this
problem. However, SBMs have an important limitation in that they are suited
only for networks with unweighted edges; in various scientific applications,
disregarding the edge weights may result in a loss of valuable information. We
study a weighted generalization of the SBM, in which observations are collected
in the form of a weighted adjacency matrix and the weight of each edge is
generated independently from an unknown probability density determined by the
community membership of its endpoints. We characterize the optimal rate of
misclustering error of the weighted SBM in terms of the Renyi divergence of
order 1/2 between the weight distributions of within-community and
between-community edges, substantially generalizing existing results for
unweighted SBMs. Furthermore, we present a computationally tractable algorithm
based on discretization that achieves the optimal error rate. Our method is
adaptive in the sense that the algorithm, without assuming knowledge of the
weight densities, performs as well as the best algorithm that knows the weight
densities.
| 0 | 0 | 1 | 1 | 0 | 0 |
On Sound Relative Error Bounds for Floating-Point Arithmetic | State-of-the-art static analysis tools for verifying finite-precision code
compute worst-case absolute error bounds on numerical errors. These are,
however, often not a good estimate of accuracy as they do not take into account
the magnitude of the computed values. Relative errors, which compute errors
relative to the value's magnitude, are thus preferable. While today's tools do
report relative error bounds, these are merely computed via absolute errors and
thus not necessarily tight or more informative. Furthermore, whenever the
computed value is close to zero on part of the domain, the tools do not report
any relative error estimate at all. Surprisingly, the quality of relative error
bounds computed by today's tools has not been systematically studied or
reported to date. In this paper, we investigate how state-of-the-art static
techniques for computing sound absolute error bounds can be used, extended and
combined for the computation of relative errors. Our experiments on a standard
benchmark set show that computing relative errors directly, as opposed to via
absolute errors, is often beneficial and can provide error estimates up to six
orders of magnitude tighter, i.e. more accurate. We also show that interval
subdivision, another commonly used technique to reduce over-approximations, has
less benefit when computing relative errors directly, but it can help to
alleviate the effects of the inherent issue of relative error estimates close
to zero.
| 1 | 0 | 0 | 0 | 0 | 0 |
Light yield determination in large sodium iodide detectors applied in the search for dark matter | Application of NaI(Tl) detectors in the search for galactic dark matter
particles through their elastic scattering off the target nuclei is well
motivated because of the long standing DAMA/LIBRA highly significant positive
result on annual modulation, still requiring confirmation. For such a goal, it
is mandatory to reach very low threshold in energy (at or below the keV level),
very low radioactive background (at a few counts/keV/kg/day), and high
detection mass (at or above the 100 kg scale). One of the most relevant
technical issues is the optimization of the crystal intrinsic scintillation
light yield and the efficiency of the light collecting system for large mass
crystals. In the frame of the ANAIS (Annual modulation with NaI Scintillators)
dark matter search project large NaI(Tl) crystals from different providers
coupled to two photomultiplier tubes (PMTs) have been tested at the Canfranc
Underground Laboratory. In this paper we present the estimates of the NaI(Tl)
scintillation light collected using full-absorption peaks at very low energy
from external and internal sources emitting gammas/electrons, and
single-photoelectron events populations selected by using very low energy
pulses tails. Outstanding scintillation light collection at the level of
15~photoelectrons/keV can be reported for the final design and provider chosen
for ANAIS detectors. Taking into account the Quantum Efficiency of the PMT
units used, the intrinsic scintillation light yield in these NaI(Tl) crystals
is above 40~photoelectrons/keV for energy depositions in the range from 3 up to
25~keV. This very high light output of ANAIS crystals allows triggering below
1~keV, which is very important in order to increase the sensitivity in the
direct detection of dark matter.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Systematic Approach to Numerical Dispersion in Maxwell Solvers | The finite-difference time-domain (FDTD) method is a well established method
for solving the time evolution of Maxwell's equations. Unfortunately the scheme
introduces numerical dispersion and therefore phase and group velocities which
deviate from the correct values. The solution to Maxwell's equations in more
than one dimension results in non-physical predictions such as numerical
dispersion or numerical Cherenkov radiation emitted by a relativistic electron
beam propagating in vacuum.
Improved solvers, which keep the staggered Yee-type grid for electric and
magnetic fields, generally modify the spatial derivative operator in the
Maxwell-Faraday equation by increasing the computational stencil. These
modified solvers can be characterized by different sets of coefficients,
leading to different dispersion properties. In this work we introduce a norm
function to rewrite the choice of coefficients into a minimization problem. We
solve this problem numerically and show that the minimization procedure leads
to phase and group velocities that are considerably closer to $c$ as compared
to schemes with manually set coefficients available in the literature.
Depending on a specific problem at hand (e.g. electron beam propagation in
plasma, high-order harmonic generation from plasma surfaces, etc), the norm
function can be chosen accordingly, for example, to minimize the numerical
dispersion in a certain given propagation direction. Particle-in-cell
simulations of an electron beam propagating in vacuum using our solver are
provided.
| 0 | 1 | 0 | 0 | 0 | 0 |
On Synchronous, Asynchronous, and Randomized Best-Response schemes for computing equilibria in Stochastic Nash games | This work considers a stochastic Nash game in which each player solves a
parameterized stochastic optimization problem. In deterministic regimes,
best-response schemes have been shown to be convergent under a suitable
spectral property associated with the proximal best-response map. However, a
direct application of this scheme to stochastic settings requires obtaining
exact solutions to stochastic optimization at each iteration. Instead, we
propose an inexact generalization in which an inexact solution is computed via
an increasing number of projected stochastic gradient steps. Based on this
framework, we present three inexact best-response schemes: (i) First, we
propose a synchronous scheme where all players simultaneously update their
strategies; (ii) Subsequently, we extend this to a randomized setting where a
subset of players is randomly chosen to their update strategies while the
others keep their strategies invariant; (iii) Finally, we propose an
asynchronous scheme, where each player determines its own update frequency and
may use outdated rival-specific data in updating its strategy. Under a suitable
contractive property of the proximal best-response map, we derive a.s.
convergence of the iterates for (i) and (ii) and mean-convergence for (i) --
(iii). In addition, we show that for (i) -- (iii), the iterates converge to the
unique equilibrium in mean at a prescribed linear rate. Finally, we establish
the overall iteration complexity in terms of projected stochastic gradient
steps for computing an $\epsilon-$Nash equilibrium and in all settings, the
iteration complexity is ${\cal O}(1/\epsilon^{2(1+c) + \delta})$ where $c = 0$
in the context of (i) and represents the positive cost of randomization (in
(ii)) and asynchronicity and delay (in (iii)). The schemes are further extended
to linear and quadratic recourse-based stochastic Nash games.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the insertion of n-powers | In algebraic terms, the insertion of $n$-powers in words may be modelled at
the language level by considering the pseudovariety of ordered monoids defined
by the inequality $1\le x^n$. We compare this pseudovariety with several other
natural pseudovarieties of ordered monoids and of monoids associated with the
Burnside pseudovariety of groups defined by the identity $x^n=1$. In
particular, we are interested in determining the pseudovariety of monoids that
it generates, which can be viewed as the problem of determining the Boolean
closure of the class of regular languages closed under $n$-power insertions. We
exhibit a simple upper bound and show that it satisfies all pseudoidentities
which are provable from $1\le x^n$ in which both sides are regular elements
with respect to the upper bound.
| 1 | 0 | 1 | 0 | 0 | 0 |
The TUS detector of extreme energy cosmic rays on board the Lomonosov satellite | The origin and nature of extreme energy cosmic rays (EECRs), which have
energies above the 50 EeV, the Greisen-Zatsepin-Kuzmin (GZK) energy limit, is
one of the most interesting and complicated problems in modern cosmic-ray
physics. Existing ground-based detectors have helped to obtain remarkable
results in studying cosmic rays before and after the GZK limit, but have also
produced some contradictions in our understanding of cosmic ray mass
composition. Moreover, each of these detectors covers only a part of the
celestial sphere, which poses problems for studying the arrival directions of
EECRs and identifying their sources. As a new generation of EECR space
detectors, TUS (Tracking Ultraviolet Set-up), KLYPVE and JEM-EUSO, are intended
to study the most energetic cosmic-ray particles, providing larger, uniform
exposures of the entire celestial sphere. The TUS detector, launched on board
the Lomonosov satellite on April 28, 2016, from Vostochny Cosmodrome in Russia,
is the first of these. It employs a single-mirror optical system and a
photomultiplier tube matrix as a photo-detector and will test the fluorescent
method of measuring EECRs from space. Utilizing the Earth's atmosphere as a
huge calorimeter, it is expected to detect EECRs with energies above 100 EeV.
It will also be able to register slower atmospheric transient events:
atmospheric fluorescence in electrical discharges of various types including
precipitating electrons escaping the magnetosphere and from the radiation of
meteors passing through the atmosphere. We describe the design of the TUS
detector and present results of different ground-based tests and simulations.
| 0 | 1 | 0 | 0 | 0 | 0 |
New ellipsometric approach for determining small light ellipticities | We propose a precise ellipsometric method for the investigation of coherent
light with a small ellipticity. The main feature of this method is the use of
compensators with phase delays providing the maximum accuracy of measurements
for the selected range of ellipticities and taking into account the
interference of multiple reflections of coherent light. The relative error of
the ellipticity measurement in the range of mesurement does not exceed 0.02.
| 0 | 1 | 0 | 0 | 0 | 0 |
Maximizing the Mutual Information of Multi-Antenna Links Under an Interfered Receiver Power Constraint | Single-user multiple-input / multiple-output (SU-MIMO) communication systems
have been successfully used over the years and have provided a significant
increase on a wireless link's capacity by enabling the transmission of multiple
data streams. Assuming channel knowledge at the transmitter, the maximization
of the mutual information of a MIMO link is achieved by finding the optimal
power allocation under a given sum-power constraint, which is in turn obtained
by the water-filling (WF) algorithm. However, in spectrum sharing setups, such
as Licensed Shared Access (LSA), where a primary link (PL) and a secondary link
(SL) coexist, the power transmitted by the SL transmitter may induce harmful
interference to the PL receiver. While such co-existing links have been
considered extensively in various spectrum sharing setups, the mutual
information of the SL under a constraint on the interference it may cause to
the PL receiver has, quite astonishingly, not been evaluated so far. In this
paper, we solve this problem, find its unique optimal solution and provide the
power allocation policy and corresponding precoding solution that achieves the
optimal capacity under the imposed constraint. The performance of the optimal
solution and the penalty due to the interference constraint are evaluated over
some indicative Rayleigh fading channel conditions and interference thresholds.
We believe that the obtained results are of general nature and that they may
apply, beyond spectrum sharing, to a variety of applications that admit a
similar setup.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multistationarity and Bistability for Fewnomial Chemical Reaction Networks | Bistability and multistationarity are properties of reaction networks linked
to switch-like responses and connected to cell memory and cell decision making.
Determining whether and when a network exhibits bistability is a hard and open
mathematical problem. One successful strategy consists of analyzing small
networks and deducing that some of the properties are preserved upon passage to
the full network. Motivated by this we study chemical reaction networks with
few chemical complexes. Under mass-action kinetics the steady states of these
networks are described by fewnomial systems, that is polynomial systems having
few distinct monomials. Such systems of polynomials are often studied in real
algebraic geometry by the use of Gale dual systems. Using this Gale duality we
give precise conditions in terms of the reaction rate constants for the number
and stability of the steady states of families of reaction networks with one
non-flow reaction.
| 1 | 0 | 0 | 0 | 1 | 0 |
Erratum: Link prediction in drug-target interactions network using similarity indices | Background: In silico drug-target interaction (DTI) prediction plays an
integral role in drug repositioning: the discovery of new uses for existing
drugs. One popular method of drug repositioning is network-based DTI
prediction, which uses complex network theory to predict DTIs from a
drug-target network. Currently, most network-based DTI prediction is based on
machine learning methods such as Restricted Boltzmann Machines (RBM) or Support
Vector Machines (SVM). These methods require additional information about the
characteristics of drugs, targets and DTIs, such as chemical structure, genome
sequence, binding types, causes of interactions, etc., and do not perform
satisfactorily when such information is unavailable. We propose a new,
alternative method for DTI prediction that makes use of only network topology
information attempting to solve this problem.
Results: We compare our method for DTI prediction against the well-known RBM
approach. We show that when applied to the MATADOR database, our approach based
on node neighborhoods yield higher precision for high-ranking predictions than
RBM when no information regarding DTI types is available.
Conclusion: This demonstrates that approaches purely based on network
topology provide a more suitable approach to DTI prediction in the many
real-life situations where little or no prior knowledge is available about the
characteristics of drugs, targets, or their interactions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Variable Selection for Highly Correlated Predictors | Penalty-based variable selection methods are powerful in selecting relevant
covariates and estimating coefficients simultaneously. However, variable
selection could fail to be consistent when covariates are highly correlated.
The partial correlation approach has been adopted to solve the problem with
correlated covariates. Nevertheless, the restrictive range of partial
correlation is not effective for capturing signal strength for relevant
covariates. In this paper, we propose a new Semi-standard PArtial Covariance
(SPAC) which is able to reduce correlation effects from other predictors while
incorporating the magnitude of coefficients. The proposed SPAC variable
selection facilitates choosing covariates which have direct association with
the response variable, via utilizing dependency among covariates. We show that
the proposed method with the Lasso penalty (SPAC-Lasso) enjoys strong sign
consistency in both finite-dimensional and high-dimensional settings under
regularity conditions. Simulation studies and the `HapMap' gene data
application show that the proposed method outperforms the traditional Lasso,
adaptive Lasso, SCAD, and Peter-Clark-simple (PC-simple) methods for highly
correlated predictors.
| 0 | 0 | 1 | 1 | 0 | 0 |
Scraping and Preprocessing Commercial Auction Data for Fraud Classification | In the last three decades, we have seen a significant increase in trading
goods and services through online auctions. However, this business created an
attractive environment for malicious moneymakers who can commit different types
of fraud activities, such as Shill Bidding (SB). The latter is predominant
across many auctions but this type of fraud is difficult to detect due to its
similarity to normal bidding behaviour. The unavailability of SB datasets makes
the development of SB detection and classification models burdensome.
Furthermore, to implement efficient SB detection models, we should produce SB
data from actual auctions of commercial sites. In this study, we first scraped
a large number of eBay auctions of a popular product. After preprocessing the
raw auction data, we build a high-quality SB dataset based on the most reliable
SB strategies. The aim of our research is to share the preprocessed auction
dataset as well as the SB training (unlabelled) dataset, thereby researchers
can apply various machine learning techniques by using authentic data of
auctions and fraud.
| 0 | 0 | 0 | 1 | 0 | 0 |
Batch Size Influence on Performance of Graphic and Tensor Processing Units during Training and Inference Phases | The impact of the maximally possible batch size (for the better runtime) on
performance of graphic processing units (GPU) and tensor processing units (TPU)
during training and inference phases is investigated. The numerous runs of the
selected deep neural network (DNN) were performed on the standard MNIST and
Fashion-MNIST datasets. The significant speedup was obtained even for extremely
low-scale usage of Google TPUv2 units (8 cores only) in comparison to the quite
powerful GPU NVIDIA Tesla K80 card with the speedup up to 10x for training
stage (without taking into account the overheads) and speedup up to 2x for
prediction stage (with and without taking into account overheads). The precise
speedup values depend on the utilization level of TPUv2 units and increase with
the increase of the data volume under processing, but for the datasets used in
this work (MNIST and Fashion-MNIST with images of sizes 28x28) the speedup was
observed for batch sizes >512 images for training phase and >40 000 images for
prediction phase. It should be noted that these results were obtained without
detriment to the prediction accuracy and loss that were equal for both GPU and
TPU runs up to the 3rd significant digit for MNIST dataset, and up to the 2nd
significant digit for Fashion-MNIST dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
Minimax Euclidean Separation Rates for Testing Convex Hypotheses in $\mathbb{R}^d$ | We consider composite-composite testing problems for the expectation in the
Gaussian sequence model where the null hypothesis corresponds to a convex
subset $\mathcal{C}$ of $\mathbb{R}^d$. We adopt a minimax point of view and
our primary objective is to describe the smallest Euclidean distance between
the null and alternative hypotheses such that there is a test with small total
error probability. In particular, we focus on the dependence of this distance
on the dimension $d$ and the sample size/variance parameter $n$ giving rise to
the minimax separation rate. In this paper we discuss lower and upper bounds on
this rate for different smooth and non- smooth choices for $\mathcal{C}$.
| 0 | 0 | 1 | 1 | 0 | 0 |
Simple Policy Evaluation for Data-Rich Iterative Tasks | A data-based policy for iterative control task is presented. The proposed
strategy is model-free and can be applied whenever safe input and state
trajectories of a system performing an iterative task are available. These
trajectories, together with a user-defined cost function, are exploited to
construct a piecewise affine approximation to the value function. Approximated
value functions are then used to evaluate the control policy by solving a
linear program. We show that for linear system subject to convex cost and
constraints, the proposed strategy guarantees closed-loop constraint
satisfaction and performance bounds on the closed-loop trajectory. We evaluate
the proposed strategy in simulations and experiments, the latter carried out on
the Berkeley Autonomous Race Car (BARC) platform. We show that the proposed
strategy is able to reduce the computation time by one order of magnitude while
achieving the same performance as our model-based control algorithm.
| 1 | 0 | 0 | 0 | 0 | 0 |
Proper quadrics in the Euclidean $n$-space | In this paper we investigate the metric properties of quadrics and cones of
the $n$-dimensional Euclidean space. As applications of our formulas we give a
more detailed description of the construction of Chasles and the wire model of
Staude, respectively.
| 0 | 0 | 1 | 0 | 0 | 0 |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.