title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Infinite symmetric ergodic index and related examples in infinite measure | We define an infinite measure-preserving transformation to have infinite
symmetric ergodic index if all finite Cartesian products of the transformation
and its inverse are ergodic, and show that infinite symmetric ergodic index
does not imply that all products of powers are conservative, so does not imply
power weak mixing. We provide a sufficient condition for $k$-fold and infinite
symmetric ergodic index and use it to answer a question on the relationship
between product conservativity and product ergodicity. We also show that a
class of rank-one transformations with infinite symmetric ergodic index is not power weakly mixing, and precisely characterize a class of power weakly mixing transformations that generalizes existing examples.
| 0 | 0 | 1 | 0 | 0 | 0 |
Exploiting gradients and Hessians in Bayesian optimization and Bayesian quadrature | An exciting branch of machine learning research focuses on methods for
learning, optimizing, and integrating unknown functions that are difficult or
costly to evaluate. A popular Bayesian approach to this problem uses a Gaussian
process (GP) to construct a posterior distribution over the function of
interest given a set of observed measurements, and selects new points to
evaluate using the statistics of this posterior. Here we extend these methods
to exploit derivative information from the unknown function. We describe
methods for Bayesian optimization (BO) and Bayesian quadrature (BQ) in settings
where first and second derivatives may be evaluated along with the function
itself. We perform sampling-based inference in order to incorporate uncertainty
over hyperparameters, and show that both hyperparameter and function
uncertainty decrease much more rapidly when using derivative information.
Moreover, we introduce techniques for overcoming ill-conditioning issues that
have plagued earlier methods for gradient-enhanced Gaussian processes and
kriging. We illustrate the efficacy of these methods using applications to real
and simulated Bayesian optimization and quadrature problems, and show that
exploiting derivatives can provide substantial gains over standard methods.
| 0 | 0 | 0 | 1 | 0 | 0 |
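The core idea above, conditioning a Gaussian process jointly on function values and derivatives, can be illustrated in one dimension: the derivative of a GP is itself a GP, with cross-covariances given by derivatives of the kernel. Below is a minimal sketch assuming an RBF kernel; it is not the authors' implementation, and the length-scale and noise level are illustrative.

```python
import numpy as np

def k(x, y, ell=1.0):
    """RBF kernel k(x, y) = exp(-(x - y)^2 / (2 ell^2))."""
    return np.exp(-(x - y) ** 2 / (2 * ell ** 2))

def k_dy(x, y, ell=1.0):
    """cov(f(x), f'(y)) = d/dy k(x, y)."""
    return k(x, y, ell) * (x - y) / ell ** 2

def k_dxdy(x, y, ell=1.0):
    """cov(f'(x), f'(y)) = d^2/(dx dy) k(x, y)."""
    return k(x, y, ell) * (1.0 / ell ** 2 - (x - y) ** 2 / ell ** 4)

def gp_mean_with_gradients(X, f, g, Xs, ell=1.0, noise=1e-8):
    """Posterior mean at test points Xs, given values f and derivatives g at X."""
    Kff = k(X[:, None], X[None, :], ell)
    Kfg = k_dy(X[:, None], X[None, :], ell)
    Kgg = k_dxdy(X[:, None], X[None, :], ell)
    K = np.block([[Kff, Kfg], [Kfg.T, Kgg]]) + noise * np.eye(2 * len(X))
    Ks = np.hstack([k(Xs[:, None], X[None, :], ell),
                    k_dy(Xs[:, None], X[None, :], ell)])
    return Ks @ np.linalg.solve(K, np.concatenate([f, g]))
```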
Permanency of the age-structured population model on several temporally variable patches | We consider a system of nonlinear partial differential equations that
describes an age-structured population inhabiting several temporally varying
patches. We prove existence and uniqueness of solution and analyze its
large-time behavior in cases when the environment is constant and when it
changes periodically. A pivotal assumption is that individuals can disperse and
that each patch can be reached from every other patch, directly or through
several intermediary patches. We introduce the net reproductive operator and
characteristic equations for time-independent and periodic models and prove that permanency is determined by the net reproductive rate of the whole system. If the net reproductive rate is less than or equal to one, extinction on all patches is inevitable. Otherwise, permanency on all patches is guaranteed. The proof is
based on a new approach to analysis of large-time stability.
| 0 | 1 | 1 | 0 | 0 | 0 |
Rapid laser-induced photochemical conversion of sol-gel precursors to In2O3 layers and their application in thin-film transistors | We report the development of indium oxide (In2O3) transistors via a single
step laser-induced photochemical conversion process of a sol-gel metal oxide
precursor. Through careful optimization of the laser annealing conditions we
demonstrated successful conversion of the precursor to In2O3 and its subsequent
implementation in n-channel transistors with electron mobility up to 13 cm^2/Vs. Importantly, the process does not require thermal annealing, making it compatible with temperature-sensitive materials such as plastic. On the other
hand, the spatial conversion/densification of the sol-gel layer eliminates
additional process steps associated with semiconductor patterning and hence
significantly reduces fabrication complexity and cost. Our work demonstrates
unambiguously that laser-induced photochemical conversion of sol-gel metal
oxide precursors can be rapid and compatible with large-area electronics
manufacturing.
| 0 | 1 | 0 | 0 | 0 | 0 |
Testing for Change in Stochastic Volatility with Long Range Dependence | In this paper, change-point problems for long memory stochastic volatility
models are considered. A general testing problem which includes various
alternative hypotheses is discussed. Under the hypothesis of stationarity the
limiting behavior of CUSUM- and Wilcoxon-type test statistics is derived. In
this context, a limit theorem for the two-parameter empirical process of long
memory stochastic volatility time series is proved. In particular, it is shown
that the asymptotic distribution of CUSUM test statistics may not be affected
by long memory, unlike Wilcoxon test statistics which are typically influenced
by long range dependence. To avoid the estimation of nuisance parameters in
applications, the usage of self-normalized test statistics is proposed. The
theoretical results are accompanied by simulation studies which characterize
the finite sample behavior of the considered testing procedures when testing
for changes in mean, in variance, and in the tail index.
| 0 | 0 | 1 | 1 | 0 | 0 |
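For orientation, a plain CUSUM statistic for a change in mean is sketched below. The paper's contributions, its limit theory under long memory and the self-normalized variants that avoid estimating nuisance parameters, are not attempted by this minimal version.

```python
import numpy as np

def cusum_statistic(x):
    """Plain CUSUM for a change in mean: max_k |S_k - (k/n) S_n| / sqrt(n),
    where S_k are the partial sums of x. Large values indicate a change."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    return np.max(np.abs(s - k / n * s[-1])) / np.sqrt(n)
```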
Kinematics effects of atmospheric friction in spacecraft flybys | Gravity assist manoeuvres are one of the most succesful techniques in
astrodynamics. In these trajectories the spacecraft comes very close to the
surface of the Earth, or other Solar system planets or moons, and, as a
consequence, it experiences the effect of atmospheric friction by the outer
layers of the Earth's atmosphere or ionosphere.
In this paper we analyze a standard atmospheric model to estimate the density
profile during the two Galileo flybys and the NEAR and Juno flybys. We show that, even allowing for a margin of uncertainty in the spacecraft cross-section and the drag coefficient, the observed -8 mm/s anomalous velocity decrease during the second Galileo flyby of December 8, 1992 cannot be attributed only to atmospheric friction. On the other hand, for perigees on the border between the thermosphere and the exosphere the friction only accounts for a
fraction of a millimeter per second in the final asymptotic velocity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Interval vs. Point Temporal Logic Model Checking: an Expressiveness Comparison | In recent years, model checking with interval temporal logics has been emerging
as a viable alternative to model checking with standard point-based temporal
logics, such as LTL, CTL, CTL*, and the like. The behavior of the system is
modeled by means of (finite) Kripke structures, as usual. However, while
temporal logics which are interpreted "point-wise" describe how the system
evolves state-by-state, and predicate properties of system states, those which
are interpreted "interval-wise" express properties of computation stretches,
spanning a sequence of states. A proposition letter is assumed to hold over a
computation stretch (interval) if and only if it holds over each component
state (homogeneity assumption). A natural question arises: is there any
advantage in replacing points by intervals as the primary temporal entities, or
is it just a matter of taste?
In this paper, we study the expressiveness of Halpern and Shoham's interval
temporal logic (HS) in model checking, in comparison with those of LTL, CTL,
and CTL*. To this end, we consider three semantic variants of HS: the
state-based one, introduced by Montanari et al., that allows time to branch
both in the past and in the future, the computation-tree-based one, that allows
time to branch in the future only, and the trace-based variant, that disallows
time to branch. These variants are compared among themselves and to the
aforementioned standard logics, obtaining a complete picture. In particular, we
show that HS with trace-based semantics is equivalent to LTL (but at least
exponentially more succinct), HS with computation-tree-based semantics is
equivalent to finitary CTL*, and HS with state-based semantics is incomparable
with all of them (LTL, CTL, and CTL*).
| 1 | 0 | 0 | 0 | 0 | 0 |
Calculations for electron-impact ionization of magnesium and calcium atoms in the method of interacting configurations in the complex number representation | We present further investigations in our program of extending the description from the He atom to complex atoms. The method of interacting
configurations in the complex number representation is under consideration. The
spectroscopic characteristics of the Mg and Ca atoms in the problem of the
electron-impact ionization of these atoms are investigated. The energies and
the widths of the lowest autoionizing states of Mg and Ca atoms are calculated.
A few results for the photoionization problem involving the autoionizing states above the n=2 threshold of the helium-like Be ion are also presented.
| 0 | 1 | 0 | 0 | 0 | 0 |
An Information Theoretic Approach to Sample Acquisition and Perception in Planetary Robotics | An important and emerging component of planetary exploration is sample
retrieval and return to Earth. Obtaining and analyzing rock samples can provide
unprecedented insight into the geology, geo-history and prospects for finding
past life and water. Current methods of exploration rely on mission scientists
to identify objects of interest, and this presents major operational challenges. Finding objects of interest will require systematic and efficient
methods to quickly and correctly evaluate the importance of hundreds if not
thousands of samples so that the most interesting are saved for further
analysis by the mission scientists. In this paper, we propose an automated
information theoretic approach to identify shapes of interest using a library of predefined interesting shapes. These predefined shapes may be human input or samples that are then extrapolated by the shape matching system using the
Superformula to judge the importance of newly obtained objects. Shape samples
are matched to a library of shapes using the eigenfaces approach enabling
categorization and prioritization of the sample. The approach shows robustness
to simulated sensor noise of up to 20%. The effect of shape parameters and
rotational angle on shape matching accuracy has been analyzed. The approach
shows significant promise and efforts are underway in testing the algorithm
with real rock samples.
| 1 | 1 | 0 | 0 | 0 | 0 |
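A minimal sketch of the eigenfaces-style matching step described above, assuming shapes are rasterized and flattened into equal-length vectors; the Superformula extrapolation and sensor-noise handling from the abstract are omitted.

```python
import numpy as np

def fit_eigenshapes(library, n_components=10):
    """PCA ('eigenfaces') basis from a library of flattened shapes, one per row."""
    mean = library.mean(axis=0)
    _, _, vt = np.linalg.svd(library - mean, full_matrices=False)
    return mean, vt[:n_components]

def match(sample, library, mean, basis):
    """Index of the library shape closest to `sample` in the PCA subspace."""
    project = lambda s: basis @ (s - mean)
    z = project(sample)
    return int(np.argmin([np.linalg.norm(project(s) - z) for s in library]))
```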
Global solvability of the Navier-Stokes equations with a free surface in the maximal $L_p\text{-}L_q$ regularity class | We consider the motion of incompressible viscous fluids bounded above by a
free surface and below by a solid surface in the $N$-dimensional Euclidean
space for $N\geq 2$ when the gravity is not taken into account. The aim of this
paper is to show the global solvability of the Navier-Stokes equations with a
free surface, describing the above-mentioned motion, in the maximal
$L_p\text{-}L_q$ regularity class. Our approach is based on the maximal
$L_p\text{-}L_q$ regularity with exponential stability for the linearized
equations, and solutions to the original nonlinear problem are also
exponentially stable.
| 0 | 0 | 1 | 0 | 0 | 0 |
The fundamental group of the complement of the singular locus of Lauricella's $F_C$ | We study the fundamental group of the complement of the singular locus of
Lauricella's hypergeometric function $F_C$ of $n$ variables. The singular locus
consists of $n$ hyperplanes and a hypersurface of degree $2^{n-1}$ in the
complex $n$-space. We derive some relations that hold for general $n\geq 3$. We give an explicit presentation of the fundamental group in the three-dimensional case. We also consider a presentation of the fundamental group of the $2^3$-covering of this space.
In version 2, we omit some of the calculations. For all the calculations, refer to version 1 (arXiv:1710.09594v1) of this article.
| 0 | 0 | 1 | 0 | 0 | 0 |
Nonconvex Sparse Spectral Clustering by Alternating Direction Method of Multipliers and Its Convergence Analysis | Spectral Clustering (SC) is a widely used data clustering method which first
learns a low-dimensional embedding $U$ of data by computing the eigenvectors of
the normalized Laplacian matrix, and then performs k-means on $U^\top$ to get
the final clustering result. The Sparse Spectral Clustering (SSC) method
extends SC with a sparse regularization on $UU^\top$ by using the block
diagonal structure prior of $UU^\top$ in the ideal case. However, encouraging
$UU^\top$ to be sparse leads to a heavily nonconvex problem which is challenging to solve, so the work (Lu, Yan, and Lin 2016) proposes a convex relaxation to pursue this aim indirectly. However, the convex relaxation generally leads to a loose approximation, and the quality of the solution is not clear. This work instead considers solving the nonconvex formulation of SSC which directly encourages $UU^\top$ to be sparse. We propose an efficient Alternating Direction Method of Multipliers (ADMM) to solve the nonconvex SSC and provide a convergence guarantee. In particular, we prove that the sequence generated by ADMM always has a limit point, and that any limit point is a stationary point. Our analysis does not impose any assumptions on
the iterates and thus is practical. Our proposed ADMM for nonconvex problems
allows the stepsize to be increasing but upper bounded, and this makes it very
efficient in practice. Experimental analysis on several real data sets verifies
the effectiveness of our method.
| 1 | 0 | 0 | 0 | 0 | 0 |
Stable Geodesic Update on Hyperbolic Space and its Application to Poincare Embeddings | A hyperbolic space has been shown to be more capable of modeling complex
networks than a Euclidean space. This paper proposes an explicit update rule
along geodesics in a hyperbolic space. The convergence of our algorithm is
theoretically guaranteed, and the convergence rate is better than the
conventional Euclidean gradient descent algorithm. Moreover, our algorithm
avoids the "bias" problem of existing methods using the Riemannian gradient.
Experimental results demonstrate the good performance of our algorithm in the
Poincaré embeddings of knowledge base data.
| 0 | 0 | 0 | 1 | 0 | 0 |
A Parameterized Approach to Personalized Variable Length Summarization of Soccer Matches | We present a parameterized approach to produce personalized variable length
summaries of soccer matches. Our approach is based on temporally segmenting the
soccer video into 'plays', associating a user-specifiable 'utility' for each
type of play and using 'bin-packing' to select a subset of the plays that add
up to the desired length while maximizing the overall utility (volume in
bin-packing terms). Our approach systematically allows a user to override the
default weights assigned to each type of play with individual preferences and
thus see a highly personalized variable length summarization of soccer matches.
We demonstrate our approach based on the output of an end-to-end pipeline that
we are building to produce such summaries. Though aspects of the overall end-to-end pipeline are human-assisted at present, the results clearly show
that the proposed approach is capable of producing semantically meaningful and
compelling summaries. Besides the obvious use of producing summaries of
superior league matches for news broadcasts, we anticipate our work to promote
greater awareness of the local matches and junior leagues by producing
consumable summaries of them.
| 1 | 0 | 0 | 0 | 0 | 0 |
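The selection step described above is a 0/1 knapsack: plays carry durations (weights) and user-specified utilities (values), and the summary is the subset that maximizes utility within a length budget. A minimal dynamic-programming sketch, assuming integer durations in seconds:

```python
def select_plays(plays, budget):
    """0/1 knapsack over plays, each given as (duration_sec, utility).
    Returns indices of a subset maximizing total utility within `budget`."""
    dp = [(0.0, [])] * (budget + 1)  # dp[t]: best (utility, picks) with duration <= t
    for i, (dur, util) in enumerate(plays):
        new = list(dp)
        for t in range(dur, budget + 1):
            cand = dp[t - dur][0] + util
            if cand > new[t][0]:
                new[t] = (cand, dp[t - dur][1] + [i])
        dp = new
    return max(dp)[1]

# e.g. select_plays([(30, 5.0), (45, 2.0), (20, 4.0)], budget=60) -> [0, 2]
```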
Towards Robust Neural Networks via Random Self-ensemble | Recent studies have revealed the vulnerability of deep neural networks: A
small adversarial perturbation that is imperceptible to humans can easily make a
well-trained deep neural network misclassify. This makes it unsafe to apply
neural networks in security-critical applications. In this paper, we propose a
new defense algorithm called Random Self-Ensemble (RSE) by combining two
important concepts: {\bf randomness} and {\bf ensemble}. To protect a targeted
model, RSE adds random noise layers to the neural network to prevent the strong
gradient-based attacks, and ensembles the prediction over random noises to
stabilize the performance. We show that our algorithm is equivalent to ensembling an infinite number of noisy models $f_\epsilon$ without any additional memory
overhead, and the proposed training procedure based on noisy stochastic
gradient descent can ensure the ensemble model has a good predictive
capability. Our algorithm significantly outperforms previous defense techniques
on real data sets. For instance, on CIFAR-10 with VGG network (which has 92\%
accuracy without any attack), under the strong C\&W attack within a certain
distortion tolerance, the accuracy of the unprotected model drops to less than
10\%, the best previous defense technique has $48\%$ accuracy, while our method
still has $86\%$ prediction accuracy under the same level of attack. Finally,
our method is simple and easy to integrate into any neural network.
| 1 | 0 | 0 | 1 | 0 | 0 |
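A minimal PyTorch sketch of the two ingredients named above: a noise layer that stays active at both training and test time, and prediction averaging over noise draws. Where the noise layers are inserted and how the noise scale is set are tuning choices not reproduced here.

```python
import torch
import torch.nn as nn

class NoiseLayer(nn.Module):
    """Adds Gaussian noise in both training and testing, as in RSE."""
    def __init__(self, std=0.1):
        super().__init__()
        self.std = std

    def forward(self, x):
        return x + torch.randn_like(x) * self.std

def rse_predict(model, x, n_ensemble=10):
    """Average class probabilities over random noise draws (the self-ensemble)."""
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_ensemble)])
    return probs.mean(dim=0)
```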
Prospects for detection of intermediate-mass black holes in globular clusters using integrated-light spectroscopy | The detection of intermediate mass black holes (IMBHs) in Galactic globular
clusters (GCs) has so far been controversial. In order to characterize the
effectiveness of integrated-light spectroscopy through integral field units, we
analyze realistic mock data generated from state-of-the-art Monte Carlo
simulations of GCs with a central IMBH, considering different setups and
conditions varying IMBH mass, cluster distance, and accuracy in determination
of the center. The mock observations are modeled with isotropic Jeans models to
assess the success rate in identifying the IMBH presence, which we find to be
primarily dependent on IMBH mass. However, even for an IMBH of considerable mass
(3% of the total GC mass), the analysis does not yield conclusive results in 1
out of 5 cases, because of shot noise due to bright stars close to the IMBH
line-of-sight. This stochastic variability in the modeling outcome grows with
decreasing BH mass, with approximately 3 failures out of 4 for IMBHs with 0.1%
of total GC mass. Finally, we find that our analysis is generally unable to
exclude at 68% confidence an IMBH with mass of $10^3~M_\odot$ in snapshots
without a central BH. Interestingly, our results are not sensitive to GC
distance within 5-20 kpc, nor to mis-identification of the GC center by less
than 2'' (<20% of the core radius). These findings highlight the value of
ground-based integral field spectroscopy for large GC surveys, where systematic
failures can be accounted for, but stress the importance of discrete kinematic
measurements that are less affected by stochasticity induced by bright stars.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Multi-frequency analysis of possible Dark Matter Contributions to M31 Gamma-Ray Emissions | We examine the possibility of a dark matter (DM) contribution to the recently
observed gamma-ray spectrum seen in the M31 galaxy. In particular, we apply
limits on Weakly Interacting Massive Particle DM annihilation cross-sections
derived from the Coma galaxy cluster and the Reticulum II dwarf galaxy to
determine the maximal flux contribution by DM annihilation to both the M31
gamma-ray spectrum and that of the Milky-Way galactic centre. We limit the
energy range between 1 and 12 GeV in M31 and galactic centre spectra due to the
limited range of the former's data, as well as to encompass the high-energy
gamma-ray excess observed in the latter target. In so doing, we will make use
of Fermi-LAT data for all mentioned targets, as well as diffuse radio data for
the Coma cluster. The multi-target strategy using both Coma and Reticulum II to
derive cross-section limits, as well as multi-frequency data, ensures that our
results are robust against the various uncertainties inherent in modelling of
indirect DM emissions.
Our results indicate that, when a Navarro-Frenk-White (or shallower) radial
density profile is assumed, severe constraints can be imposed upon the fraction
of the M31 and galactic centre spectra that can be accounted for by DM, with
the best limits arising from cross-section constraints from Coma radio data and
Reticulum II gamma-ray limits. These particular limits force all the studied
annihilation channels to contribute 1% or less to the total integrated
gamma-ray flux within both M31 and galactic centre targets. In contrast,
considerably more, 10-100%, of the flux can be attributed to DM when a
contracted Navarro-Frenk-White profile is assumed. This demonstrates how
sensitive DM contributions to gamma-ray emissions are to the possibility of
cored profiles in galaxies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Tensor Decompositions for Modeling Inverse Dynamics | Modeling inverse dynamics is crucial for accurate feedforward robot control.
The model computes the joint torques necessary to perform a desired movement.
The highly non-linear inverse function of the dynamical system can be
approximated using regression techniques. We propose as regression method a
tensor decomposition model that exploits the inherent three-way interaction of
positions x velocities x accelerations. Most work in tensor factorization has
addressed the decomposition of dense tensors. In this paper, we build upon the
decomposition of sparse tensors, with only small amounts of nonzero entries.
The decomposition of sparse tensors has successfully been used in relational
learning, e.g., the modeling of large knowledge graphs. Recently, the approach
has been extended to multi-class classification with discrete input variables.
Representing the data in high dimensional sparse tensors enables the
approximation of complex highly non-linear functions. In this paper we show how
the decomposition of sparse tensors can be applied to regression problems.
Furthermore, we extend the method to continuous inputs, by learning a mapping
from the continuous inputs to the latent representations of the tensor
decomposition, using basis functions. We evaluate our proposed model on a
dataset with trajectories from a seven degrees of freedom SARCOS robot arm. Our
experimental results show superior performance of the proposed functional
tensor model, compared to challenging state-of-the-art methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
Synthesis and Hydrogen Sorption Characteristics of Mechanically Alloyed Mg(NixMn1-x)2 Intermetallics | New ternary Mg-Ni-Mn intermetallics have been successfully synthesized by
High Energy Ball Milling (HEBM) and have been studied as possible materials for
efficient hydrogen storage applications. The microstructures of the as-cast and
milled alloys were characterized by means of X-ray Powder Diffraction (XRD) and
Scanning Electron Microscopy (SEM) both prior to and after the hydrogenation
process, while the hydrogen storage characteristics (P-c-T) and the kinetics
were measured by using a commercial and automatically controlled Sievert-type
apparatus. The hydrogenation and dehydrogenation measurements were performed at four different temperatures (150, 200, 250 and 300 °C) and the results showed that the kinetics of both the hydrogenation and dehydrogenation processes are very fast at operation temperatures of 250 and 300 °C, but for temperatures below 200 °C the hydrogenation process becomes very slow and the dehydrogenation process cannot
be achieved.
| 0 | 1 | 0 | 0 | 0 | 0 |
Enhancing Network Embedding with Auxiliary Information: An Explicit Matrix Factorization Perspective | Recent advances in the field of network embedding have shown that low-dimensional network representations play a critical role in network analysis. However, most of the existing principles of network embedding do not
incorporate auxiliary information such as content and labels of nodes flexibly.
In this paper, we take a matrix factorization perspective of network embedding,
and incorporate structure, content and label information of the network
simultaneously. For structure, we validate that the matrix we construct
preserves high-order proximities of the network. Label information can be
further integrated into the matrix via the process of random walk sampling to
enhance the quality of embedding in an unsupervised manner, i.e., without
leveraging downstream classifiers. In addition, we generalize the Skip-Gram
Negative Sampling model to integrate the content of the network in a matrix
factorization framework. As a consequence, network embedding can be learned in
a unified framework integrating network structure and node content as well as
label information simultaneously. We demonstrate the efficacy of the proposed
model with the tasks of semi-supervised node classification and link prediction
on a variety of real-world benchmark network datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
Incremental Principal Component Analysis: Exact implementation and continuity corrections | This paper describes some applications of an incremental implementation of
the principal component analysis (PCA). The algorithm updates the
transformation coefficients matrix on-line for each new sample, without the
need to keep all the samples in memory. The algorithm is formally equivalent to
the usual batch version, in the sense that given a sample set the
transformation coefficients at the end of the process are the same. The
implications of applying the PCA in real time are discussed with the help of
data analysis examples. In particular we focus on the problem of the continuity
of the PCs during an on-line analysis.
| 1 | 0 | 0 | 1 | 0 | 0 |
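The flavor of an exact incremental scheme can be conveyed with a running mean/scatter update in the style of Welford's algorithm, which reproduces batch PCA on all samples seen so far. This is a sketch of the idea only, not the paper's algorithm.

```python
import numpy as np

class IncrementalPCA:
    """Running mean/scatter update; principal components on demand."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.scatter = np.zeros((dim, dim))

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        # Keeps scatter = sum_i (x_i - mean)(x_i - mean)^T exactly.
        self.scatter += np.outer(delta, x - self.mean)

    def components(self):
        cov = self.scatter / max(self.n - 1, 1)
        w, v = np.linalg.eigh(cov)          # ascending eigenvalues
        return v[:, ::-1], w[::-1]          # descending order
```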
More investment in Research and Development for better Education in the future? | The question in this paper is whether R&D efforts affect education
performance in small classes. Merging two datasets collected from the PISA
studies and the World Development Indicators and using Bayesian network learning, we prove the existence of a statistical causal relationship between a country's investment in R&D and its education performance (PISA scores). We also prove that the effect of R&D on education is long term, as a country has
to invest at least 10 years before beginning to improve the level of young
pupils.
| 0 | 0 | 0 | 1 | 0 | 0 |
Discriminative Modeling of Social Influence for Prediction and Explanation in Event Cascades | The global dynamics of event cascades are often governed by the local
dynamics of peer influence. However, detecting social influence from
observational data is challenging, due to confounds like homophily and
practical issues like missing data. In this work, we propose a novel
discriminative method to detect influence from observational data. The core of
the approach is to train a ranking algorithm to predict the source of the next
event in a cascade, and compare its out-of-sample accuracy against a
competitive baseline which lacks access to features corresponding to social
influence. Using synthetically generated data, we provide empirical evidence
that this method correctly identifies influence in the presence of confounds,
and is robust to both missing data and misspecification --- unlike popular
alternatives. We also apply the method to two real-world datasets: (1) cascades
of co-sponsorship of legislation in the U.S. House of Representatives, on a
social network of shared campaign donors; (2) rumors about the Higgs boson
discovery, on a follower network of $10^5$ Twitter accounts. Our model
identifies the role of peer influence in these scenarios, and uses it to make
more accurate predictions about the future trajectory of cascades.
| 1 | 0 | 0 | 0 | 0 | 0 |
Online codes for analog signals | We revisit a classical scenario in communication theory: a source is
generating a waveform which we sample at regular intervals; we wish to
transform the signal in such a way as to minimize distortion in its
reconstruction, despite noise. The transformation must be online (also called
causal), in order to enable real-time signaling. The noise model we consider is
adversarial $\ell_1$-bounded; this is the "atomic norm" convex relaxation of
the standard adversary model in discrete-alphabet communications, namely
sparsity (low Hamming weight). We require that our encoding not increase the
power of the original signal.
In the "block coding" setting such encoding is possible due to the existence
of large almost-Euclidean sections in $\ell_1$ spaces (established in the work
of Dvoretzky, Milman, Kašin, and Figiel, Lindenstrauss and Milman).
Our main result is that an analogous result is achievable even online.
Equivalently, we show a "lower triangular" version of $\ell_1$ Dvoretzky
theorems. In terms of communication, the result has the following form: If the
signal is a stream of reals $x_1,\ldots$, one per unit time, which we encode
causally into $\rho$ (a constant) reals per unit time (forming altogether an
output stream $\mathcal{E}(x)$), and if the adversarial noise added to this
encoded stream up to time $s$ is a vector $\vec{y}$, then at time $s$ the
decoder's reconstruction of the input prefix $x_{[s]}$ is accurate in a
time-weighted $\ell_2$ norm, to within $s^{-1/2+\delta}$ (any $\delta>0$) times
the adversary's noise as measured in a time-weighted $\ell_1$ norm. The
time-weighted decoding norm forces increasingly accurate reconstruction of the
distant past, while the time-weighted noise norm permits only vanishing effect
from noise in the distant past.
Encoding is linear, and decoding is performed by an LP analogous to those
used in compressed sensing.
| 1 | 0 | 0 | 0 | 0 | 0 |
Statistical test for fractional Brownian motion based on detrending moving average algorithm | Motivated by contemporary and rich applications of anomalous diffusion
processes we propose a new statistical test for fractional Brownian motion,
which is one of the most popular models for anomalous diffusion systems. The
test is based on the detrending moving average (DMA) statistic and its probability distribution. Using the theory of Gaussian quadratic forms, we determine this distribution to be a generalized chi-squared distribution. The proposed test could be generalized
for statistical testing of any centered non-degenerate Gaussian process.
Finally, we examine the test via Monte Carlo simulations for two exemplary
scenarios of subdiffusive and superdiffusive dynamics.
| 0 | 0 | 0 | 1 | 0 | 0 |
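A minimal sketch of the detrending moving average statistic itself, for a trajectory y and window n, assuming the common backward-moving-average definition. The paper's contribution, the generalized chi-squared distribution theory of this statistic under fractional Brownian motion, is not reproduced here.

```python
import numpy as np

def dma_statistic(y, n):
    """Mean squared deviation of y from its backward moving average of window n."""
    y = np.asarray(y, dtype=float)
    kernel = np.ones(n) / n
    ma = np.convolve(y, kernel, mode="valid")  # ma[j] = mean(y[j:j+n])
    resid = y[n - 1:] - ma                     # align with each window's last point
    return np.mean(resid ** 2)
```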
Fast Radio Map Construction and Position Estimation via Direct Mapping for WLAN Indoor Localization System | The main limitation that constrains the fast and comprehensive application of
Wireless Local Area Network (WLAN) based indoor localization systems with
Received Signal Strength (RSS) positioning algorithms is the building of the
fingerprinting radio map, which is time-consuming, especially when the indoor environment is large and/or changes frequently. Different approaches
have been proposed to reduce workload, including fingerprinting deployment and
update efforts, but the performance degrades greatly when the workload is
reduced below a certain level. In this paper, we propose an indoor localization
scenario that applies metric learning and manifold alignment to realize direct
mapping localization (DML) using a low-resolution radio map with a single sample of RSS, reducing the fingerprinting workload by up to 87\% compared to previous work. The two proposed localization approaches, DML and $k$ nearest
neighbors based on reconstructed radio map (reKNN), were shown to achieve less
than 4.3\ m and 3.7\ m mean localization error respectively in a typical office
environment with an area of approximately 170\ m$^2$, while the unsupervised
localization with perturbation algorithm was shown to achieve 4.7\ m mean
localization error with 8 times more workload than the proposed methods. As for
the room-level localization application, both DML and reKNN can meet the
requirement with at most 9\ m of localization error which is enough to tell
apart different rooms with over 99\% accuracy.
| 1 | 0 | 0 | 1 | 0 | 0 |
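For reference, a minimal weighted k-nearest-neighbors fingerprinting step of the kind underlying the reKNN baseline is sketched below; the metric learning and manifold alignment that the paper uses to reconstruct the radio map are not shown, and the array shapes are assumptions.

```python
import numpy as np

def knn_localize(rss, radio_map, positions, k=4):
    """Match an RSS vector against radio-map fingerprints (one per row) and
    return the inverse-distance weighted average of the k closest positions."""
    d = np.linalg.norm(radio_map - rss, axis=1)  # distances in RSS space
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()
```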
Lagrangian solutions to the Vlasov-Poisson system with a point charge | We consider the Cauchy problem for the repulsive Vlasov-Poisson system in the
three dimensional space, where the initial datum is the sum of a diffuse
density, assumed to be bounded and integrable, and a point charge. Under some
decay assumptions for the diffuse density close to the point charge, under
bounds on the total energy, and assuming that the initial total diffuse charge
is strictly less than one, we prove existence of global Lagrangian solutions.
Our result extends the Eulerian theory of [16], proving that solutions are
transported by the flow trajectories. The proof is based on the ODE theory
developed in [8] in the setting of vector fields with anisotropic regularity,
where some components of the gradient of the vector field are given by singular integrals of a measure.
| 0 | 0 | 1 | 0 | 0 | 0 |
MITHRIL: Mining Sporadic Associations for Cache Prefetching | The growing pressure on cloud application scalability has accentuated storage
performance as a critical bottleneck. Although cache replacement algorithms have been extensively studied, cache prefetching (reducing latency by retrieving items before they are actually requested) remains an underexplored area. Existing approaches to history-based prefetching, in particular, provide
too few benefits for real systems for the resources they cost. We propose
MITHRIL, a prefetching layer that efficiently exploits historical patterns in
cache request associations. MITHRIL is inspired by sporadic association rule
mining and only relies on the timestamps of requests. Through evaluation of 135
block-storage traces, we show that MITHRIL is effective, giving an average 55% hit ratio increase over LRU and PROBABILITY GRAPH, and a 36% hit ratio gain over AMP, at reasonable cost. We further show that MITHRIL can supplement any
cache replacement algorithm and be readily integrated into existing systems.
Furthermore, we demonstrate the improvement comes from MITHRIL being able to
capture mid-frequency blocks.
| 1 | 0 | 0 | 0 | 0 | 0 |
On 2-level polytopes arising in combinatorial settings | 2-level polytopes naturally appear in several areas of pure and applied
mathematics, including combinatorial optimization, polyhedral combinatorics,
communication complexity, and statistics. In this paper, we present a study of
some 2-level polytopes arising in combinatorial settings. Our first
contribution is proving that v(P)*f(P) is upper bounded by d*2^(d+1), for a
large collection of families of such polytopes P. Here v(P) (resp. f(P)) is the
number of vertices (resp. facets) of P, and d is its dimension. Whether this
holds for all 2-level polytopes was asked in [Bohn et al., ESA 2015], and
experimental results from [Fiorini et al., ISCO 2016] showed it to be true up to
dimension 7. The key to most of our proofs is a deeper understanding of the
relations among those polytopes and their underlying combinatorial structures.
This leads to a number of results that we believe to be of independent
interest: a trade-off formula for the number of cliques and stable sets in a
graph; a description of stable matching polytopes as affine projections of
certain order polytopes; and a linear-size description of the base polytope of
matroids that are 2-level in terms of cuts of an associated tree.
| 1 | 0 | 1 | 0 | 0 | 0 |
Learned Optimizers that Scale and Generalize | Learning to learn has emerged as an important direction for achieving
artificial intelligence. Two of the primary barriers to its adoption are an
inability to scale to larger problems and a limited ability to generalize to
new tasks. We introduce a learned gradient descent optimizer that generalizes
well to new tasks, and which has significantly reduced memory and computation
overhead. We achieve this by introducing a novel hierarchical RNN architecture,
with minimal per-parameter overhead, augmented with additional architectural
features that mirror the known structure of optimization tasks. We also develop
a meta-training ensemble of small, diverse optimization tasks capturing common
properties of loss landscapes. The optimizer learns to outperform RMSProp/ADAM
on problems in this corpus. More importantly, it performs comparably or better
when applied to small convolutional neural networks, despite seeing no neural
networks in its meta-training set. Finally, it generalizes to train Inception
V3 and ResNet V2 architectures on the ImageNet dataset for thousands of steps,
optimization problems that are of a vastly different scale than those it was
trained on. We release an open source implementation of the meta-training
algorithm.
| 1 | 0 | 0 | 1 | 0 | 0 |
Toward Optimal Run Racing: Application to Deep Learning Calibration | This paper aims at one-shot learning of deep neural nets, where a highly
parallel setting is considered to address the algorithm calibration problem -
selecting the best neural architecture and learning hyper-parameter values
depending on the dataset at hand. The notoriously expensive calibration problem
is optimally reduced by detecting and early stopping non-optimal runs. The
theoretical contribution regards the optimality guarantees within the multiple
hypothesis testing framework. Experiments on the CIFAR-10, PTB and Wiki
benchmarks demonstrate the relevance of the approach with a principled and
consistent improvement on the state of the art with no extra hyper-parameter.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bernstein–von Mises theorems for statistical inverse problems I: Schrödinger equation | The inverse problem of determining the unknown potential $f>0$ in the partial
differential equation $$\frac{\Delta}{2} u - fu =0 \text{ on } \mathcal O
~~\text{s.t. } u = g \text { on } \partial \mathcal O,$$ where $\mathcal O$ is
a bounded $C^\infty$-domain in $\mathbb R^d$ and $g>0$ is a given function
prescribing boundary values, is considered. The data consist of the solution
$u$ corrupted by additive Gaussian noise. A nonparametric Bayesian prior for
the function $f$ is devised and a Bernstein–von Mises theorem is proved which
entails that the posterior distribution given the observations is approximated
in a suitable function space by an infinite-dimensional Gaussian measure that
has a `minimal' covariance structure in an information-theoretic sense. As a
consequence the posterior distribution performs valid and optimal frequentist
statistical inference on $f$ in the small noise limit.
| 0 | 0 | 1 | 1 | 0 | 0 |
Tailoring Architecture Centric Design Method with Rapid Prototyping | Many engineering processes exist in industry, textbooks and international standards. In practice, however, rarely are any of these processes followed consistently and literally; it is observed across industries that processes are altered based on the requirements of the projects. Two features commonly lacking from many engineering processes are: 1) the formal capacity to rapidly develop prototypes in the rudimentary stage of the project, and 2) the transitioning of requirements into architectural designs, including when and how to evaluate designs and how to use the throw-away prototypes throughout the system lifecycle. Prototypes are useful for eliciting requirements, generating
customer feedback and identifying, examining or mitigating risks in a project
where the product concept is at a cutting edge or not fully perceived. Apart
from the work that the product is intended to do, systemic properties like
availability, performance and modifiability matter as much as functionality.
Architects must even these concerns with the method they select to promote
these systemic properties and at the same time equip the stakeholders with the
desired functionality. Architectural design and prototyping is one of the key
ways to build the right product embedded with the desired systemic properties.
Once the product is built it can be almost impossible to retrofit the system
with the desired attributes. This paper customizes the Architecture Centric Design Method (ACDM) with rapid prototyping to achieve the above-mentioned goals and to reduce the number of iterations across the stages of ACDM.
| 1 | 0 | 0 | 0 | 0 | 0 |
Easy High-Dimensional Likelihood-Free Inference | We introduce a framework using Generative Adversarial Networks (GANs) for
likelihood-free inference (LFI) and Approximate Bayesian Computation (ABC) where we replace the black-box simulator model with an approximator network and generate a rich set of summary features in a data-driven fashion. On benchmark
data sets, our approach improves on others with respect to scalability, ability
to handle high dimensional data and complex probability distributions.
| 1 | 0 | 0 | 1 | 0 | 0 |
Narrow-line Laser Cooling by Adiabatic Transfer | We propose and demonstrate a novel laser cooling mechanism applicable to
particles with narrow-linewidth optical transitions. By sweeping the frequency
of counter-propagating laser beams in a sawtooth manner, we cause adiabatic
transfer back and forth between the ground state and a long-lived optically
excited state. The time-ordering of these adiabatic transfers is determined by
Doppler shifts, which ensures that the associated photon recoils are in the
opposite direction to the particle's motion. This ultimately leads to a robust
cooling mechanism capable of exerting large forces via a weak transition and
with reduced reliance on spontaneous emission. We present a simple intuitive
model for the resulting frictional force, and directly demonstrate its efficacy
for increasing the total phase-space density of an atomic ensemble. We rely on
both simulation and experimental studies using the 7.5~kHz linewidth $^1$S$_0$
to $^3$P$_1$ transition in $^{88}$Sr. The reduced reliance on spontaneous
emission may allow this adiabatic sweep method to be a useful tool for cooling
particles that lack closed cycling transitions, such as molecules.
| 0 | 1 | 0 | 0 | 0 | 0 |
Assessing the Effect of Stellar Companions from High-Resolution Imaging of Kepler Objects of Interest | We report on 176 close (<2") stellar companions detected with high-resolution
imaging near 170 hosts of Kepler Objects of Interest. These Kepler targets were
prioritized for imaging follow-up based on the presence of small planets, so
most of the KOIs in these systems (176 out of 204) have nominal radii <6 R_E.
Each KOI in our sample was observed in at least 2 filters with adaptive optics,
speckle imaging, lucky imaging, or HST. Multi-filter photometry provides color
information on the companions, allowing us to constrain their stellar
properties and assess the probability that the companions are physically bound.
We find that 60 -- 80% of companions within 1" are bound, and the bound
fraction is >90% for companions within 0.5"; the bound fraction decreases with
increasing angular separation. This picture is consistent with simulations of
the binary and background stellar populations in the Kepler field. We also
reassess the planet radii in these systems, converting the observed
differential magnitudes to a contamination in the Kepler bandpass and
calculating the planet radius correction factor, $X_R = R_p(\mathrm{true}) / R_p(\mathrm{single})$. Under the assumption that planets in bound binaries are equally
likely to orbit the primary or secondary, we find a mean radius correction
factor for planets in stellar multiples of $X_R = 1.65$. If stellar
multiplicity in the Kepler field is similar to the solar neighborhood, then
nearly half of all Kepler planets may have radii underestimated by an average
of 65%, unless vetted using high resolution imaging or spectroscopy.
| 0 | 1 | 0 | 0 | 0 | 0 |
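The radius correction for a planet orbiting the primary follows from flux dilution: the companion adds flux, the observed transit depth is diluted by that flux ratio, and depth scales as the squared radius ratio. A minimal sketch of that case (the secondary-host case additionally requires the stellar radius ratio):

```python
import numpy as np

def radius_correction_primary(delta_mag):
    """X_R = R_p(true) / R_p(single) for a planet around the primary, given the
    companion's magnitude difference in the observing bandpass."""
    flux_ratio = 10 ** (-0.4 * delta_mag)  # F_companion / F_primary
    return np.sqrt(1.0 + flux_ratio)

# e.g. an equal-brightness companion (delta_mag = 0) gives X_R = sqrt(2) ~ 1.41
```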
Semi-Supervised Learning via New Deep Network Inversion | We exploit a recently derived inversion scheme for arbitrary deep neural
networks to develop a new semi-supervised learning framework that applies to a
wide range of systems and problems. The approach outperforms current
state-of-the-art methods on MNIST reaching $99.14\%$ of test set accuracy while
using $5$ labeled examples per class. Experiments with one-dimensional signals
highlight the generality of the method. Importantly, our approach is simple,
efficient, and requires no change in the deep network architecture.
| 1 | 0 | 0 | 1 | 0 | 0 |
Using English as Pivot to Extract Persian-Italian Parallel Sentences from Non-Parallel Corpora | The effectiveness of a statistical machine translation system (SMT) depends heavily on the amount of parallel corpus used in the training phase. For
low-resource language pairs there are not enough parallel corpora to build an
accurate SMT. In this paper, a novel approach is presented to extract bilingual
Persian-Italian parallel sentences from a non-parallel (comparable) corpus. In
this study, English is used as the pivot language to compute the matching
scores between source and target sentences and candidate selection phase.
Additionally, a new monolingual sentence similarity metric, Normalized Google
Distance (NGD) is proposed to improve the matching process. Moreover, some
extensions of the baseline system are applied to improve the quality of
extracted sentences measured with BLEU. Experimental results show that using
the new pivot based extraction can increase the quality of bilingual corpus
significantly and consequently improves the performance of the Persian-Italian
SMT system.
| 1 | 0 | 0 | 0 | 0 | 0 |
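For reference, the classical Normalized Google Distance of Cilibrasi and Vitányi is computed from hit counts as below; the paper adapts it as a monolingual sentence-similarity metric, which this term-level sketch does not capture.

```python
import math

def ngd(fx, fy, fxy, n):
    """NGD from counts: fx, fy for each term alone, fxy for co-occurrence,
    n the total number of indexed documents (all counts assumed positive)."""
    lfx, lfy, lfxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lfx, lfy) - lfxy) / (math.log(n) - min(lfx, lfy))
```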
Intrinsic geometry and analysis of Finsler structures | In this short note, we prove that if $F$ is a weak upper semicontinuous
admissible Finsler structure on a domain in $\mathbb{R}^n$, $n\geq 2$, then the
intrinsic distance and differential structures coincide.
| 0 | 0 | 1 | 0 | 0 | 0 |
Combining Prediction of Human Decisions with ISMCTS in Imperfect Information Games | Monte Carlo Tree Search (MCTS) has been extended to many imperfect
information games. However, due to the added complexity that uncertainty
introduces, these adaptations have not reached the same level of practical
success as their perfect information counterparts. In this paper we consider
the development of agents that perform well against humans in imperfect
information games with partially observable actions. We introduce the
Semi-Determinized-MCTS (SDMCTS), a variant of the Information Set MCTS
algorithm (ISMCTS). More specifically, SDMCTS generates a predictive model of
the unobservable portion of the opponent's actions from historical behavioral
data. Next, SDMCTS performs simulations on an instance of the game where the
unobservable portion of the opponent's actions are determined. Thereby, it
facilitates the use of the predictive model in order to decrease uncertainty.
We present an implementation of the SDMCTS applied to the Cheat Game, a
well-known card game, with partially observable (and often deceptive) actions.
Results from experiments with 120 subjects playing a head-to-head Cheat Game
against our SDMCTS agents suggest that SDMCTS performs well against humans, and
its performance improves as the predictive model's accuracy increases.
| 1 | 0 | 0 | 0 | 0 | 0 |
The KLASH Proposal | We propose a search for galactic axions with a mass of about 0.2 microeV using a
large volume resonant cavity, about 50 m^3, cooled down to 4 K and immersed in
a moderate axial magnetic field of about 0.6 T generated inside the
superconducting magnet of the KLOE experiment located at the National
Laboratory of Frascati of INFN. This experiment, called KLASH (KLoe magnet for
Axion SearcH) in the following, has a potential sensitivity on the
axion-to-photon coupling, $g_{a\gamma\gamma}$, of about $6\times 10^{-17}$ GeV$^{-1}$, reaching the region
predicted by KSVZ and DFSZ models of QCD axions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dimensionality-Driven Learning with Noisy Labels | Datasets with significant proportions of noisy (incorrect) class labels
present challenges for training accurate Deep Neural Networks (DNNs). We
propose a new perspective for understanding DNN generalization for such
datasets, by investigating the dimensionality of the deep representation
subspace of training samples. We show that from a dimensionality perspective,
DNNs exhibit quite distinctive learning styles when trained with clean labels
versus when trained with a proportion of noisy labels. Based on this finding,
we develop a new dimensionality-driven learning strategy, which monitors the
dimensionality of subspaces during training and adapts the loss function
accordingly. We empirically demonstrate that our approach is highly tolerant to
significant proportions of noisy labels, and can effectively learn
low-dimensional local subspaces that capture the data distribution.
| 0 | 0 | 0 | 1 | 0 | 0 |
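The dimensionality measurements behind this line of work are typically local intrinsic dimensionality (LID) estimates. A minimal maximum-likelihood sketch in the style of the Levina-Bickel estimator is given below, assuming Euclidean distances to a reference batch; the paper's training-time monitoring and loss adaptation are not shown.

```python
import numpy as np

def lid_mle(x, batch, k=20):
    """MLE of the local intrinsic dimensionality of x w.r.t. a reference batch:
    LID = -1 / mean_i log(r_i / r_k), over the k nearest neighbor distances."""
    d = np.linalg.norm(batch - x, axis=1)
    d = np.sort(d[d > 0])[:k]          # k nearest, excluding the point itself
    return -1.0 / np.mean(np.log(d / d[-1]))
```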
Delving into adversarial attacks on deep policies | Adversarial examples have been shown to exist for a variety of deep learning
architectures. Deep reinforcement learning has shown promising results on
training agent policies directly on raw inputs such as image pixels. In this
paper we present a novel study into adversarial attacks on deep reinforcement
learning policies. We compare the effectiveness of the attacks using adversarial
examples vs. random noise. We present a novel method for reducing the number of
times adversarial examples need to be injected for a successful attack, based
on the value function. We further explore how re-training on random noise and
FGSM perturbations affects the resilience against adversarial examples.
| 1 | 0 | 0 | 1 | 0 | 0 |
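The FGSM perturbation mentioned above is a single gradient step: move the input in the sign direction of the loss gradient. A minimal PyTorch sketch follows; the paper's value-function-based timing of when to inject such perturbations is separate and not shown.

```python
import torch

def fgsm(model, loss_fn, x, y, eps=0.01):
    """Fast Gradient Sign Method: x' = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()
```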
On two functions arising in the study of the Euler and Carmichael quotients | We investigate two arithmetic functions naturally occurring in the study of
the Euler and Carmichael quotients. The functions are related to the frequency
of vanishing of the Euler and Carmichael quotients. We obtain several results
concerning the relations between these functions as well as their typical and
extreme values.
| 0 | 0 | 1 | 0 | 0 | 0 |
Lurking Variable Detection via Dimensional Analysis | Lurking variables represent hidden information, and preclude a full
understanding of phenomena of interest. Detection is usually based on
serendipity -- visual detection of unexplained, systematic variation. However,
these approaches are doomed to fail if the lurking variables do not vary. In
this article, we address these challenges by introducing formal hypothesis
tests for the presence of lurking variables, based on Dimensional Analysis.
These procedures utilize a modified form of the Buckingham Pi theorem to
provide structure for a suitable null hypothesis. We present analytic tools for
reasoning about lurking variables in physical phenomena, construct procedures
to handle cases of increasing complexity, and present examples of their
application to engineering problems. The results of this work enable
algorithm-driven lurking variable detection, complementing a traditionally
inspection-based approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
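The Buckingham Pi machinery at the heart of these tests can be made concrete: dimensionless (pi) groups are exponent vectors in the null space of the dimensional matrix. A minimal sketch with a pendulum example; the paper's modified form of the theorem and the hypothesis tests themselves are not reproduced.

```python
import numpy as np

def pi_groups(dim_matrix, tol=1e-10):
    """Basis of pi-group exponent vectors: the null space of the dimensional
    matrix (rows = base dimensions M, L, T, ...; columns = variables)."""
    _, s, vt = np.linalg.svd(dim_matrix)
    rank = int((s > tol).sum())
    return vt[rank:].T  # each column is one pi group

# Pendulum: columns [period T, length L, mass m, gravity g]; rows [M, L, T]
D = np.array([[0, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, -2]])
print(pi_groups(D))  # one pi group with exponents ~ (2, -1, 0, 1): T^2 g / L
```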
System calibration method for Fourier ptychographic microscopy | Fourier ptychographic microscopy (FPM) is a recently proposed quantitative
phase imaging technique with high resolution and wide field-of-view (FOV). In
current FPM imaging platforms, systematic error sources come from the
aberrations, LED intensity fluctuation, parameter imperfections and noise,
which will severely corrupt the reconstruction results with artifacts. Although these problems have been studied and specific methods have been proposed for each, no single method solves all of them. In real situations, however, the systematic error is a mixture of various sources, and it is difficult to distinguish one error source from another due to the similar artifacts they produce.
To this end, we report a system calibration procedure, termed SC-FPM, based on
the simulated annealing (SA) algorithm, LED intensity correction and adaptive
step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. Strong performance is achieved in both simulations and experiments. The reported
system calibration scheme improves the robustness of FPM and relaxes the
experiment conditions, which makes the FPM more pragmatic.
| 0 | 1 | 0 | 0 | 0 | 0 |
Categories for Dynamic Epistemic Logic | The primary goal of this paper is to recast the semantics of modal logic, and
dynamic epistemic logic (DEL) in particular, in category-theoretic terms. We
first review the category of relations and categories of Kripke frames, with
particular emphasis on the duality between relations and adjoint homomorphisms.
Using these categories, we then reformulate the semantics of DEL in a more
categorical and algebraic form. Several virtues of the new formulation will be
demonstrated: The DEL idea of updating a model into another is captured
naturally by the categorical perspective -- which emphasizes a family of
objects and structural relationships among them, as opposed to a single object
and structure on it. Also, the categorical semantics of DEL can be merged
straightforwardly with a standard categorical semantics for first-order logic,
providing a semantics for first-order DEL.
| 1 | 0 | 1 | 0 | 0 | 0 |
Accurate Pouring with an Autonomous Robot Using an RGB-D Camera | Robotic assistants in a home environment are expected to perform various
complex tasks for their users. One particularly challenging task is pouring
drinks into cups, which, for successful completion, requires the detection and
tracking of the liquid level during a pour to determine when to stop. In this
paper, we present a novel approach to autonomous pouring that tracks the liquid
level using an RGB-D camera and adapts the rate of pouring based on the liquid
level feedback. We thoroughly evaluate our system on various types of liquids
and under different conditions, conducting over 250 pours with a PR2 robot. The
results demonstrate that our approach is able to pour liquids to a target
height with an accuracy of a few millimeters.
| 1 | 0 | 0 | 0 | 0 | 0 |
Restriction of Odd Degree Characters of $\mathfrak{S}_n$ | Let $n$ and $k$ be natural numbers such that $2^k < n$. We study the
restriction to $\mathfrak{S}_{n-2^k}$ of odd-degree irreducible characters of
the symmetric group $\mathfrak{S}_n$. This analysis completes the study begun
in [Ayyer A., Prasad A., Spallone S., Sem. Lothar. Combin. 75 (2015), Art.
B75g, 13 pages] and recently developed in [Isaacs I.M., Navarro G., Olsson
J.B., Tiep P.H., J. Algebra 478 (2017), 271-282].
| 0 | 0 | 1 | 0 | 0 | 0 |
CSGNet: Neural Shape Parser for Constructive Solid Geometry | We present a neural architecture that takes as input a 2D or 3D shape and
outputs a program that generates the shape. The instructions in our program are
based on constructive solid geometry principles, i.e., a set of boolean
operations on shape primitives defined recursively. Bottom-up techniques for
this shape parsing task rely on primitive detection and are inherently slow
since the search space over possible primitive combinations is large. In
contrast, our model uses a recurrent neural network that parses the input shape
in a top-down manner, which is significantly faster and yields a compact and
easy-to-interpret sequence of modeling instructions. Our model is also more
effective as a shape detector compared to existing state-of-the-art detection
techniques. We finally demonstrate that our network can be trained on novel
datasets without ground-truth program annotations through policy gradient
techniques.
| 1 | 0 | 0 | 0 | 0 | 0 |
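For context, the constructive solid geometry semantics that the parser targets amounts to recursive boolean operations on primitive occupancy masks. A minimal 2D sketch, with an illustrative primitive set and grid resolution:

```python
import numpy as np

def disk(cx, cy, r):
    """A disk primitive as a function from a coordinate grid to a boolean mask."""
    return lambda g: (g[0] - cx) ** 2 + (g[1] - cy) ** 2 <= r ** 2

def csg_eval(node, grid):
    """Evaluate a CSG tree: leaves are primitives, internal nodes are
    ('union' | 'intersect' | 'subtract', left, right)."""
    if callable(node):
        return node(grid)
    op, left, right = node
    a, b = csg_eval(left, grid), csg_eval(right, grid)
    return {"union": a | b, "intersect": a & b, "subtract": a & ~b}[op]

# Example: an annulus (disk minus concentric disk) on a 64x64 grid
grid = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
shape = csg_eval(("subtract", disk(0, 0, 0.8), disk(0, 0, 0.4)), grid)
```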
Composite Weyl nodes stabilized by screw symmetry with and without time reversal | We classify the band degeneracies in 3D crystals with screw symmetry $n_m$
and broken $\mathcal P*\mathcal T$ symmetry, where $\mathcal P$ stands for
spatial inversion and $\mathcal T$ for time reversal. The generic degeneracies
along symmetry lines are Weyl nodes: Chiral contact points between pairs of
bands. They can be single nodes with a chiral charge of magnitude $|\chi|=1$ or
composite nodes with $|\chi|=2$ or $3$, and the possible $\chi$ values only
depend on the order $n$ of the axis, not on the pitch $m/n$ of the screw.
Double Weyl nodes require $n=4$ or 6, and triple nodes require $n=6$. In all
cases the bands split linearly along the axis, and for composite nodes the
splitting is quadratic on the orthogonal plane. This is true for triple as well
as double nodes, due to the presence in the effective two-band Hamiltonian of a
nonchiral quadratic term that masks the chiral cubic dispersion. If $\mathcal
T$ symmetry is present and $\mathcal P$ is broken there may exist on some
symmetry lines Weyl nodes pinned to $\mathcal T$-invariant momenta, which in
some cases are unavoidable. In the absence of other symmetries their
classification depends on $n$, $m$, and the type of $\mathcal T$ symmetry. With
spinless $\mathcal T$ such $\mathcal T$-invariant Weyl nodes are always double
nodes, while with spinful $\mathcal T$ they can be single or triple nodes.
$\mathcal T$-invariant triple nodes can occur not only on 6-fold axes but also
on 3-fold ones, and their in-plane band splitting is cubic, not quadratic as in
the case of generic triple nodes. These rules are illustrated by means of
first-principles calculations for hcp cobalt, a $\mathcal T$-broken, $\mathcal
P$-invariant crystal with $6_3$ symmetry, and for trigonal tellurium and
hexagonal NbSi$_2$, which are $\mathcal T$-invariant, $\mathcal P$-broken
crystals with 3-fold and 6-fold screw symmetry respectively.
| 0 | 1 | 0 | 0 | 0 | 0 |
Deep Neural Networks as 0-1 Mixed Integer Linear Programs: A Feasibility Study | Deep Neural Networks (DNNs) are very popular these days, and are the subject
of a very intense investigation. A DNN is made of layers of internal units (or
neurons), each of which computes an affine combination of the output of the
units in the previous layer, applies a nonlinear operator, and outputs the
corresponding value (also known as activation). A commonly-used nonlinear
operator is the so-called rectified linear unit (ReLU), whose output is just
the maximum between its input value and zero. In this (and other similar cases
like max pooling, where the max operation involves more than one input value),
one can model the DNN as a 0-1 Mixed Integer Linear Program (0-1 MILP) where
the continuous variables correspond to the output values of each unit, and a
binary variable is associated with each ReLU to model its yes/no nature. In
this paper we discuss the peculiarity of this kind of 0-1 MILP models, and
describe an effective bound-tightening technique intended to ease its solution.
We also present possible applications of the 0-1 MILP model arising in feature
visualization and in the construction of adversarial examples. Preliminary
computational results are reported, aimed at investigating (on small DNNs) the
computational performance of a state-of-the-art MILP solver when applied to a
known test case, namely, hand-written digit recognition.
| 1 | 0 | 0 | 0 | 0 | 0 |
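To make the encoding above concrete, here is the standard big-M formulation of a single ReLU unit $y = \max(x, 0)$; the bound constants $M^-$ and $M^+$ (valid bounds on the pre-activation $x$) are an assumption of this sketch, and presumably the kind of constants the paper's bound-tightening technique aims to shrink.

```latex
% Big-M MILP encoding of y = max(x, 0), assuming valid bounds -M^- \le x \le M^+:
y \ge x, \qquad y \ge 0, \qquad
y \le x + M^{-}(1 - z), \qquad y \le M^{+} z, \qquad z \in \{0, 1\}
% z = 1 forces y = x (unit active); z = 0 forces y = 0 (unit inactive).
```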
Directionality Fields generated by a Local Hilbert Transform | We propose a new approach based on a local Hilbert transform to design
non-Hermitian potentials generating arbitrary vector fields of directionality,
p(r), with desired shapes and topologies. We derive a local Hilbert transform
to systematically build such potentials, by modifying background potentials
(being either regular or random, extended or localized). In particular, we
explore directionality fields, for instance in the form of a focus
to create sinks for probe fields (which could help to increase absorption at
the sink), or to generate vortices in the probe fields. Physically, the
proposed directionality fields provide a flexible new mechanism for dynamic
shaping of, and precise control over, probe fields, leading to novel effects in wave
dynamics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Bulk Eigenvalue Correlation Statistics of Random Biregular Bipartite Graphs | This paper is the second of three chapters of the author's undergraduate
thesis. Here, we consider the random matrix ensemble given by $(d_b,
d_w)$-regular graphs on $M$ black vertices and $N$ white vertices, where $d_b
\in [N^{\gamma}, N^{2/3 - \gamma}]$ for any $\gamma > 0$. We simultaneously
prove that the bulk eigenvalue correlation statistics for both normalized
adjacency matrices and their corresponding covariance matrices are stable for
short times. Combined with an ergodicity analysis of the Dyson Brownian motion
in another paper, this proves universality of bulk eigenvalue correlation
statistics, matching normalized adjacency matrices with the GOE and the
corresponding covariance matrices with the Gaussian Wishart Ensemble.
| 0 | 0 | 1 | 1 | 0 | 0 |
A local weighted Axler-Zheng theorem in $\mathbb{C}^n$ | The well-known Axler-Zheng theorem characterizes compactness of finite sums
of finite products of Toeplitz operators on the unit disk in terms of the
Berezin transform of these operators. Subsequently this theorem was generalized
to other domains and appeared in different forms, including domains in
$\mathbb{C}^n$ on which the $\overline{\partial}$-Neumann operator $N$ is
compact. In this work we remove the assumption on $N$, and we study weighted
Bergman spaces on smooth bounded pseudoconvex domains. We prove a local version
of the Axler-Zheng theorem characterizing compactness of Toeplitz operators in
the algebra generated by symbols continuous up to the boundary in terms of the
behavior of the Berezin transform at strongly pseudoconvex points. We employ a
Forelli-Rudin type inflation method to handle the weights.
| 0 | 0 | 1 | 0 | 0 | 0 |
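For context, the Berezin transform referenced above is standardly defined through the normalized reproducing kernel of the (weighted) Bergman space; the compactness criteria are then phrased in terms of its boundary behavior.

```latex
\widetilde{T}(z) = \langle T k_z, k_z \rangle, \qquad
k_z = \frac{K(\cdot, z)}{\lVert K(\cdot, z) \rVert}
```

where $K$ is the reproducing (Bergman) kernel of the space in question.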
Spatial modulation of Joule losses to increase the normal zone propagation velocity in (RE)BaCuO tapes | This paper presents a simple approach to increase the normal zone propagation
velocity in (RE)BaCuO thin films grown on a flexible metallic substrate, also
called superconducting tapes. The key idea behind this approach is to use a
specific geometry of the silver thermal stabilizer that surrounds the
superconducting tape. More specifically, a very thin layer of silver stabilizer
is deposited on top of the superconductor layer, typically less than 100 nm,
while the remaining stabilizer (still silver) is deposited on the substrate
side. Normal zone propagation velocities up to 170 cm/s have been measured
experimentally, corresponding to a stabilizer thickness of 20 nm on top of the
superconductor layer. This is one order of magnitude faster than the speed
measured on actual commercial tapes. Our results clearly demonstrate that a
very thin stabilizer on top of the superconductor layer leads to high normal
zone propagation velocities. The experimental values are in good agreement with
predictions obtained from finite element simulations. Furthermore, the
propagation of the normal zone during the quench was recorded in situ and in
real time using a high-speed camera. Due to high Joule losses generated on both
edges of the tape sample, a "U-shaped" profile could be observed at the
boundaries between the superconducting and the normal zones, which matches very
closely the profile predicted by the simulations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Vortex Thermometry for Turbulent Two-Dimensional Fluids | We introduce a new method of statistical analysis to characterise the
dynamics of turbulent fluids in two dimensions. We establish that, in
equilibrium, the vortex distributions can be uniquely connected to the
temperature of the vortex gas, and apply this vortex thermometry to
characterise simulations of decaying superfluid turbulence. We confirm the
hypothesis of vortex evaporative heating leading to Onsager vortices proposed
in Phys. Rev. Lett. 113, 165302 (2014), and find previously unidentified vortex
power-law distributions that emerge from the dynamics.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Hurwitz-type theorem for the regular Coulomb wave function via Hankel determinants | We derive a closed formula for the determinant of the Hankel matrix whose
entries are given by sums of negative powers of the zeros of the regular
Coulomb wave function. This new identity applied together with results of
Grommer and Chebotarev allows us to prove a Hurwitz-type theorem about the
zeros of the regular Coulomb wave function. As a particular case, we obtain a
new proof of the classical Hurwitz's theorem from the theory of Bessel
functions that is based on algebraic arguments. In addition, several Hankel
determinants with entries given by the Rayleigh function and Bernoulli numbers
are also evaluated.
| 0 | 0 | 1 | 0 | 0 | 0 |
An Empirical Analysis of Proximal Policy Optimization with Kronecker-factored Natural Gradients | In this technical report, we consider an approach that combines the PPO
objective and K-FAC natural gradient optimization, which we call PPOKFAC.
We perform a range of empirical analysis on various aspects of the algorithm,
such as sample complexity, training speed, and sensitivity to batch size and
training epochs. We observe that PPOKFAC is able to outperform PPO in terms of
sample complexity and speed in a range of MuJoCo environments, while being
scalable in terms of batch size. In spite of this, it seems that adding more
epochs is not necessarily helpful for sample efficiency, and PPOKFAC seems to
be worse than its A2C counterpart, ACKTR.
| 0 | 0 | 0 | 1 | 0 | 0 |
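For reference, the PPO clipped surrogate objective being combined with K-FAC above, in its standard form (Schulman et al.), with $\hat{A}_t$ an advantage estimate and $\epsilon$ the clipping parameter:

```latex
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)},
\qquad
L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t\Bigl[
  \min\bigl( r_t(\theta)\,\hat{A}_t,\;
  \operatorname{clip}(r_t(\theta),\, 1 - \epsilon,\, 1 + \epsilon)\,\hat{A}_t \bigr)\Bigr]
```

K-FAC then approximates the Fisher information matrix by Kronecker-factored blocks to precondition the gradient of this objective.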
On the density of sets avoiding parallelohedron distance 1 | The maximal density of a measurable subset of $\mathbb{R}^n$ avoiding Euclidean
distance 1 is unknown except in the trivial case of dimension 1. In this paper,
we consider the case of a distance associated to a polytope that tiles space,
where it is likely that the sets avoiding distance 1 are of maximal density
$2^{-n}$, as conjectured by Bachoc and Robins. We prove that this is true for
$n = 2$, and for the Voronoï regions of the lattices $A_n$, $n \ge 2$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Adversarial Deep Structured Nets for Mass Segmentation from Mammograms | Mass segmentation provides effective morphological features which are
important for mass diagnosis. In this work, we propose a novel end-to-end
network for mammographic mass segmentation which employs a fully convolutional
network (FCN) to model a potential function, followed by a CRF to perform
structured learning. Because the mass distribution varies greatly with pixel
position, the FCN is combined with a positional prior. Further, we employ
adversarial training to eliminate over-fitting due to the small sizes of
mammogram datasets. A multi-scale FCN is employed to improve the segmentation
performance. Experimental results on two public datasets, INbreast and
DDSM-BCRP, demonstrate that our end-to-end network achieves better performance
than state-of-the-art approaches.
\footnote{this https URL}
| 1 | 0 | 0 | 0 | 0 | 0 |
The downward directed grounds hypothesis and very large cardinals | A transitive model $M$ of ZFC is called a ground if the universe $V$ is a set
forcing extension of $M$. We show that the grounds of $V$ are downward
set-directed. Consequently, we establish some fundamental theorems on the
forcing method and the set-theoretic geology. For instance, (1) the mantle, the
intersection of all grounds, must be a model of ZFC. (2) $V$ has only set many
grounds if and only if the mantle is a ground. We also show that if the
universe has some very large cardinal, then the mantle must be a ground.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Distributed Control Framework of Multiple Unmanned Aerial Vehicles for Dynamic Wildfire Tracking | Wild-land fire fighting is a hazardous job. A key task for firefighters is to
observe the "fire front" to chart the progress of the fire and areas that will
likely spread next. Lack of information of the fire front causes many
accidents. Using Unmanned Aerial Vehicles (UAVs) to cover wildfire is promising
because it can replace humans in hazardous fire tracking and significantly
reduce operation costs. In this paper we propose a distributed control
framework designed for a team of UAVs that can closely monitor a wildfire in
open space, and precisely track its development. The UAV team, designed for
flexible deployment, can effectively avoid in-flight collisions and cooperate
well with neighbors. They can maintain a certain height level to the ground for
safe flight above the fire. Experiments are conducted to demonstrate the
capabilities of the UAV team in covering a spreading wildfire.
| 1 | 0 | 0 | 0 | 0 | 0 |
Rayleigh-Brillouin light scattering spectroscopy of nitrous oxide (N$_2$O) | High signal-to-noise and high-resolution light scattering spectra are
measured for nitrous oxide (N$_2$O) gas at an incident wavelength of 403.00 nm,
at 90$^\circ$ scattering, at room temperature and at gas pressures in the range
$0.5-4$ bar. The resulting Rayleigh-Brillouin light scattering spectra are
compared to a number of models describing in an approximate manner the
collisional dynamics and energy transfer in a gaseous medium of this
polyatomic molecular species. The Tenti-S6 model, based on macroscopic gas
transport coefficients, reproduces the scattering profiles in the entire
pressure range with less than 2\% deviation, at a similar level as does the
alternative kinetic Grad's 6-moment model, which is based on the internal
collisional relaxation as a decisive parameter. A hydrodynamic model fails to
reproduce experimental spectra for the low pressures of 0.5-1 bar, but yields
very good agreement ($< 1$\%) in the pressure range $2-4$ bar. While these
three models have different physical bases, the derived internal molecular
relaxation can in all three cases be described in terms of a bulk viscosity of $\eta_b
\sim (6 \pm 2) \times 10^{-5}$ Pa$\cdot$s. A 'rough-sphere' model, previously
shown to be effective to describe light scattering in SF$_6$ gas, is not found
to be suitable, likely in view of the non-sphericity and asymmetry of the N-N-O
structured linear polyatomic molecule.
| 0 | 1 | 0 | 0 | 0 | 0 |
Competition between Chaotic and Non-Chaotic Phases in a Quadratically Coupled Sachdev-Ye-Kitaev Model | The Sachdev-Ye-Kitaev (SYK) model is a concrete solvable model to study
non-Fermi liquid properties, holographic duality and maximally chaotic
behavior. In this work, we consider a generalization of the SYK model that
contains two SYK models with different numbers of Majorana modes coupled by
quadratic terms. This model is also solvable, and the solution shows a
zero-temperature quantum phase transition between two non-Fermi liquid chaotic
phases. This phase transition is driven by tuning the ratio of two mode
numbers, and a Fermi liquid non-chaotic phase sits at the critical point with
equal mode number. At finite temperature, the Fermi liquid phase expands to a
finite regime. More intriguingly, a different non-Fermi liquid phase emerges at
finite temperature. We characterize the phase diagram in terms of the spectral
function, the Lyapunov exponent and the entropy. Our results illustrate a
concrete example of quantum phase transition and critical regime between two
non-Fermi liquid phases.
| 0 | 1 | 0 | 0 | 0 | 0 |
Symmetry Realization via a Dynamical Inverse Higgs Mechanism | The Ward identities associated with spontaneously broken symmetries can be
saturated by Goldstone bosons. However, when space-time symmetries are broken,
the number of Goldstone bosons necessary to non-linearly realize the symmetry
can be less than the number of broken generators. The loss of Goldstones may be
due to a redundancy or the generation of a gap. This phenomenon is called an
Inverse Higgs Mechanism (IHM). However, there are cases when a Goldstone boson
associated with a broken generator does not appear in the low energy theory
despite the absence of an associated IHM. In this paper we will
show that in such cases the relevant broken symmetry can be realized, without
the aid of an associated Goldstone, if there exists a proper set of operator
constraints, which we call a Dynamical Inverse Higgs Mechanism (DIHM). We
consider the spontaneous breaking of boosts, rotations and conformal
transformations in the context of Fermi liquids, finding three possible paths
to symmetry realization: pure Goldstones, no Goldstones and DIHM, or some
mixture thereof. We show that in the two dimensional degenerate electron system
the DIHM route is the only consistent way to realize spontaneously broken
boosts and dilatations, while in three dimensions these symmetries could just
as well be realized via the inclusion of non-derivatively coupled Goldstone
bosons. We present the action, including the leading order
non-linearities, for the rotational Goldstone (angulon), and discuss the
constraint associated with the possible DIHM that would need to be imposed to
remove it from the spectrum. Finally we discuss the conditions under which
Goldstone bosons are non-derivatively coupled, a necessary condition for the
existence of a Dynamical Inverse Higgs Constraint (DIHC), generalizing the
results of Watanabe and Vishwanath.
| 0 | 1 | 0 | 0 | 0 | 0 |
Growth of strontium ruthenate films by hybrid molecular beam epitaxy | We report on the growth of epitaxial Sr2RuO4 films using a hybrid molecular
beam epitaxy approach in which a volatile precursor containing RuO4 is used to
supply ruthenium and oxygen. The use of the precursor overcomes a number of
issues encountered in traditional MBE that uses elemental metal sources.
Phase-pure, epitaxial thin films of Sr2RuO4 are obtained. At high substrate
temperatures, growth proceeds in a layer-by-layer mode with intensity
oscillations observed in reflection high-energy electron diffraction. Films are
of high structural quality, as documented by x-ray diffraction, atomic force
microscopy, and transmission electron microscopy. The method should be suitable
for the growth of other complex oxides containing ruthenium, opening up
opportunities to investigate thin films that host rich exotic ground states.
| 0 | 1 | 0 | 0 | 0 | 0 |
Possible Evidence for the Stochastic Acceleration of Secondary Antiprotons by Supernova Remnants | The antiproton-to-proton ratio in the cosmic-ray spectrum is a sensitive
probe of new physics. Using recent measurements of the cosmic-ray antiproton
and proton fluxes in the energy range of 1-1000 GeV, we study the contribution
to the $\bar{p}/p$ ratio from secondary antiprotons that are produced and
subsequently accelerated within individual supernova remnants. We consider
several well-motivated models for cosmic-ray propagation in the interstellar
medium and marginalize our results over the uncertainties related to the
antiproton production cross section and the time-, charge-, and
energy-dependent effects of solar modulation. We find that the increase in the
$\bar{p}/p$ ratio observed at rigidities above $\sim$ 100 GV cannot be
accounted for within the context of conventional cosmic-ray propagation models,
but is consistent with scenarios in which cosmic-ray antiprotons are produced
and subsequently accelerated by shocks within a given supernova remnant. In
light of this, the acceleration of secondary cosmic rays in supernova remnants
is predicted to substantially contribute to the cosmic-ray positron spectrum,
accounting for a significant fraction of the observed positron excess.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Host Galaxy and Redshift of the Repeating Fast Radio Burst FRB 121102 | The precise localization of the repeating fast radio burst (FRB 121102) has
provided the first unambiguous association (chance coincidence probability
$p\lesssim3\times10^{-4}$) of an FRB with an optical and persistent radio
counterpart. We report on optical imaging and spectroscopy of the counterpart
and find that it is an extended ($0.6^{\prime\prime}-0.8^{\prime\prime}$)
object displaying prominent Balmer and [OIII] emission lines. Based on the
spectrum and emission line ratios, we classify the counterpart as a
low-metallicity, star-forming, $m_{r^\prime} = 25.1$ AB mag dwarf galaxy at a
redshift of $z=0.19273(8)$, corresponding to a luminosity distance of 972 Mpc.
From the angular size, the redshift, and luminosity, we estimate the host
galaxy to have a diameter $\lesssim4$ kpc and a stellar mass of
$M_*\sim4-7\times 10^{7}\,M_\odot$, assuming a mass-to-light ratio between 2 and
3$\,M_\odot\,L_\odot^{-1}$. Based on the H$\alpha$ flux, we estimate the star
formation rate of the host to be $0.4\,M_\odot\,\mathrm{yr^{-1}}$ and a
substantial host dispersion measure depth $\lesssim 324\,\mathrm{pc\,cm^{-3}}$.
The net dispersion measure contribution of the host galaxy to FRB 121102 is
likely to be lower than this value depending on geometrical factors. We show
that the persistent radio source at FRB 121102's location reported by Marcote
et al (2017) is offset from the galaxy's center of light by $\sim$200 mas and
the host galaxy does not show optical signatures for AGN activity. If FRB
121102 is typical of the wider FRB population and if future interferometric
localizations preferentially find them in dwarf galaxies with low metallicities
and prominent emission lines, they would share such a preference with long
gamma ray bursts and superluminous supernovae.
| 0 | 1 | 0 | 0 | 0 | 0 |
Thicket Density | Thicket density is a new measure of the complexity of a set system, having
the same relationship to stable formulas that VC density has to NIP formulas.
It satisfies a Sauer-Shelah type dichotomy that has applications in both model
theory and the theory of algorithms.
| 0 | 0 | 1 | 0 | 0 | 0 |
Uniformizations of stable $(γ,n)$-gonal Riemann surfaces | A $(\gamma,n)$-gonal pair is a pair $(S,f)$, where $S$ is a closed Riemann
surface and $f:S \to R$ is a degree $n$ holomorphic map onto a closed Riemann
surface $R$ of genus $\gamma$. If the signature of $(S,f)$ is of hyperbolic
type, then there is a pair $(\Gamma,G)$, called a uniformization of $(S,f)$,
where $G$ is a Fuchsian group acting on the unit disc ${\mathbb D}$ containing
$\Gamma$ as an index $n$ subgroup, so that $f$ is induced by the inclusion of
$\Gamma <G$. The uniformization is uniquely determined by $(S,f)$, up to
conjugation by holomorphic automorphisms of ${\mathbb D}$, and it allows one to
provide natural complex orbifold structures on the Hurwitz spaces parametrizing
(twisted) isomorphic classes of pairs topologically equivalent to $(S,f)$. In
order to produce certain compactifications of these Hurwitz spaces, one needs
to consider the so called stable $(\gamma,n)$-gonal pairs, which are natural
geometrical deformations of $(\gamma,n)$-gonal pairs. Due to the above, it
seems interesting to search for uniformizations of stable $(\gamma,n)$-gonal
pairs, in terms of a certain class of Kleinian groups. In this paper we review
such uniformizations by using noded Fuchsian groups, which are (geometric)
limits of quasiconformal deformations of Fuchsian groups, and which provide
uniformizations of stable Riemann orbifolds. These uniformizations allow us to
obtain a compactification of the Hurwitz spaces with a complex orbifold
structure, these being quotients of the augmented Teichmüller space of $G$ by
a suitable finite index subgroup of its modular group.
| 0 | 0 | 1 | 0 | 0 | 0 |
If you are not paying for it, you are the product: How much do advertisers pay to reach you? | Online advertising is progressively moving towards a programmatic model in
which ads are matched to actual interests of individuals collected as they
browse the web. Leaving the huge debate around privacy aside, a very important
question in this area, for which little is known, is: How much do advertisers
pay to reach an individual? In this study, we develop a first of its kind
methodology for computing exactly that -- the price paid for a web user by the
ad ecosystem -- and we do that in real time. Our approach is based on tapping
into the Real Time Bidding (RTB) protocol to collect cleartext and encrypted
prices for winning bids paid by advertisers in order to place targeted ads. Our
main technical contribution is a method for tallying winning bids even when
they are encrypted. We achieve this by training a model using as ground truth
prices obtained by running our own "probe" ad-campaigns. We implement our
methodology as a browser extension and a back-end server that provides it
with fresh models for encrypted bids. We validate our methodology using a one
year long trace of 1600 mobile users and demonstrate that it can estimate a
user's advertising worth with more than 82% accuracy.
| 1 | 0 | 0 | 0 | 0 | 0 |
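A minimal sketch of the idea of fitting a price model on ground-truth prices from probe campaigns and using it to estimate encrypted winning bids; every feature, number, and name below is purely hypothetical, not the paper's schema.

```python
import numpy as np

# Hypothetical bid-context features:
# [hour_of_day, is_video_ad, publisher_rank, user_interest_score]
X_probe = np.array([[ 9., 0., 1., 0.2],
                    [21., 1., 3., 0.9],
                    [14., 0., 2., 0.5]])
y_probe = np.array([0.8, 3.1, 1.4])  # cleartext prices from "probe" campaigns

# Fit an ordinary least-squares linear model with an intercept column.
X1 = np.hstack([X_probe, np.ones((len(X_probe), 1))])
coef, *_ = np.linalg.lstsq(X1, y_probe, rcond=None)

# Estimate the hidden price behind an encrypted winning bid from its context.
x_new = np.array([20., 1., 2., 0.7, 1.])
print(float(x_new @ coef))
```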
Circularly polarized vacuum field in three-dimensional chiral photonic crystals probed by quantum dot emission | The quantum nature of light-matter interactions in a circularly polarized
vacuum field was probed by spontaneous emission from quantum dots in
three-dimensional chiral photonic crystals. Due to the circularly polarized
eigenmodes along the helical axis in the GaAs-based mirror-asymmetric
structures we studied, we observed highly circularly polarized emission from
the quantum dots. Both spectroscopic and time-resolved measurements confirmed
that the obtained circularly polarized light was influenced by a large
difference in the photonic density of states between the orthogonal components
of the circular polarization in the vacuum field.
| 0 | 1 | 0 | 0 | 0 | 0 |
Critical behavior of quasi-two-dimensional semiconducting ferromagnet CrGeTe$_3$ | The critical properties of the single-crystalline semiconducting ferromagnet
CrGeTe$_3$ were investigated by bulk dc magnetization around the paramagnetic
to ferromagnetic phase transition. Critical exponents $\beta = 0.200\pm0.003$
with critical temperature $T_c = 62.65\pm0.07$ K and $\gamma = 1.28\pm0.03$
with $T_c = 62.75\pm0.06$ K are obtained by the Kouvel-Fisher method whereas
$\delta = 7.96\pm0.01$ is obtained by the critical isotherm analysis at $T_c =
62.7$ K. These critical exponents obey the Widom scaling relation $\delta =
1+\gamma/\beta$, indicating self-consistency of the obtained values. With these
critical exponents the isotherm $M(H)$ curves below and above the critical
temperatures collapse into two independent universal branches, obeying the
single scaling equation $m = f_\pm(h)$, where $m$ and $h$ are renormalized
magnetization and field, respectively. The determined exponents match well with
those calculated from the results of the renormalization group approach for a
two-dimensional Ising system coupled with long-range interaction between spins
decaying as $J(r)\approx r^{-(d+\sigma)}$ with $\sigma=1.52$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Scale-free Monte Carlo method for calculating the critical exponent $γ$ of self-avoiding walks | We implement a scale-free version of the pivot algorithm and use it to sample
pairs of three-dimensional self-avoiding walks, for the purpose of efficiently
calculating an observable that corresponds to the probability that pairs of
self-avoiding walks remain self-avoiding when they are concatenated. We study
the properties of this Markov chain, and then use it to find the critical
exponent $\gamma$ for self-avoiding walks to unprecedented accuracy. Our final
estimate for $\gamma$ is $1.15695300(95)$.
| 0 | 1 | 1 | 0 | 0 | 0 |
The quadratic M-convexity testing problem | M-convex functions, which are a generalization of valuated matroids, play a
central role in discrete convex analysis. Quadratic M-convex functions
constitute a basic and important subclass of M-convex functions, which has a
close relationship with phylogenetics as well as valued constraint satisfaction
problems. In this paper, we consider the quadratic M-convexity testing problem
(QMCTP), which is the problem of deciding whether a given quadratic function on
$\{0,1\}^n$ is M-convex. We show that QMCTP is co-NP-complete in general, but
is polynomial-time solvable under a natural assumption. Furthermore, we propose
an $O(n^2)$-time algorithm for solving QMCTP in the polynomial-time solvable
case.
| 0 | 0 | 1 | 0 | 0 | 0 |
Distributed Bayesian Matrix Factorization with Limited Communication | Bayesian matrix factorization (BMF) is a powerful tool for producing low-rank
representations of matrices and for predicting missing values and providing
confidence intervals. Scaling up the posterior inference for massive-scale
matrices is challenging and requires distributing both data and computation
over many workers, making communication the main computational bottleneck.
Embarrassingly parallel inference would remove the communication needed, by
using completely independent computations on different data subsets, but it
suffers from the inherent unidentifiability of BMF solutions. We introduce a
hierarchical decomposition of the joint posterior distribution, which couples
the subset inferences, allowing for embarrassingly parallel computations in a
sequence of at most three stages. Using an efficient approximate
implementation, we show improvements empirically on both real and simulated
data. Our distributed approach is able to achieve a speed-up of almost an order
of magnitude over the full posterior, with a negligible effect on predictive
accuracy. Our method outperforms state-of-the-art embarrassingly parallel MCMC
methods in accuracy, and achieves results competitive to other available
distributed and parallel implementations of BMF.
| 1 | 0 | 0 | 1 | 0 | 0 |
Optimized Certificate Revocation List Distribution for Secure V2X Communications | The successful deployment of safe and trustworthy Connected and Autonomous
Vehicles (CAVs) will highly depend on the ability to devise robust and
effective security solutions to resist sophisticated cyber attacks and patch up
critical vulnerabilities. Pseudonym Public Key Infrastructure (PPKI) is a
promising approach to secure vehicular networks as well as ensure data and
location privacy, concealing the vehicles' real identities. Nevertheless,
pseudonym distribution and management affect PPKI scalability due to the
significant number of digital certificates required by a single vehicle. In
this paper, we focus on the certificate revocation process and propose a
versatile and low-complexity framework to facilitate the distribution of the
Certificate Revocation Lists (CRL) issued by the Certification Authority (CA).
CRL compression is achieved through optimized Bloom filters, which guarantee a
considerable overhead reduction with a configurable rate of false positives.
Our results show that the distribution of compressed CRLs can significantly
enhance the system scalability without increasing the complexity of the
revocation process.
| 1 | 0 | 0 | 0 | 0 | 0 |
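A minimal sketch of the CRL-compression idea above, using a textbook Bloom filter with double hashing; the sizing formulas below are the standard ones, not necessarily the paper's optimized parameters. Revoked serials always test positive (no false negatives), while the false-positive rate is configurable.

```python
import hashlib
import math

class BloomFilter:
    def __init__(self, n_items, fp_rate):
        # Standard sizing for a target false-positive rate.
        self.m = math.ceil(-n_items * math.log(fp_rate) / math.log(2) ** 2)
        self.k = max(1, round(self.m / n_items * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        # k bit indices via double hashing on a single SHA-256 digest.
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# Compress a toy CRL of revoked certificate serial numbers.
crl = BloomFilter(n_items=10_000, fp_rate=0.01)
for serial in ("0x1a2b", "0x3c4d"):
    crl.add(serial)
assert "0x1a2b" in crl  # revoked certificates are always flagged
print("0x9999" in crl)  # almost always False; True with probability ~fp_rate
```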
Optical response of highly reflective film used in the water Cherenkov muon veto of the XENON1T dark matter experiment | The XENON1T experiment is the most recent stage of the XENON Dark Matter
Search, aiming for the direct detection of Weakly Interacting Massive Particles
(WIMPs). To reach its projected sensitivity, the background has to be reduced
by two orders of magnitude compared to its predecessor XENON100. This requires
a water Cherenkov muon veto surrounding the XENON1T TPC, both to shield
external backgrounds and to tag muon-induced energetic neutrons through
detection of a passing muon or the secondary shower induced by a muon
interacting in the surrounding rock. The muon veto is instrumented with $84$
$8"$ PMTs with high quantum efficiency (QE) in the Cherenkov regime and the
walls of the watertank are clad with the highly reflective DF2000MA foil by 3M.
Here, we present a study of the reflective properties of this foil, as well as
the measurement of its wavelength shifting (WLS) properties. Further, we
present the impact of reflectance and WLS on the detection efficiency of the
muon veto, using a Monte Carlo simulation carried out with Geant4. The
measurements yield a specular reflectance of $\approx100\%$ for wavelengths
larger than $400\,$nm, while $\approx90\%$ of the incoming light below
$370\,$nm is absorbed by the foil. Approximately $3-7.5\%$ of the light hitting
the foil within the wavelength range $250\,$nm $\leq \lambda \leq 390\,$nm is
used for the WLS process. The intensity of the emission spectrum of the WLS
light is slightly dependent on the absorbed wavelength and shows the shape of a
rotational-vibrational fluorescence spectrum, peaking at around $\lambda
\approx 420\,$nm. When the reflectance values in the Monte Carlo simulation
originally used for the muon veto design are adjusted to the measured ones, the
veto detection efficiency remains unchanged. Including the wavelength shifting in
the Monte Carlo simulation leads to an increase of the efficiency of
approximately $0.5\%$.
| 0 | 1 | 0 | 0 | 0 | 0 |
On spectral partitioning of signed graphs | We argue that the standard graph Laplacian is preferable for spectral
partitioning of signed graphs compared to the signed Laplacian. Simple examples
demonstrate that partitioning based on signs of components of the leading
eigenvectors of the signed Laplacian may be meaningless, in contrast to
partitioning based on the Fiedler vector of the standard graph Laplacian for
signed graphs. We observe that negative eigenvalues are beneficial for spectral
partitioning of signed graphs, making the Fiedler vector easier to compute.
| 1 | 0 | 1 | 1 | 0 | 0 |
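A minimal numpy sketch of the recommendation above, on a toy signed graph of our own choosing. With the standard Laplacian $L = D - W$ the row sums vanish, so the constant vector keeps eigenvalue 0, while a negative eigenvalue appears whose (non-constant) eigenvector delivers the meaningful bipartition.

```python
import numpy as np

# Toy signed graph: vertices 0 and 1 are friendly (+1), both hostile to 2 (-1).
W = np.array([[ 0.,  1., -1.],
              [ 1.,  0., -1.],
              [-1., -1.,  0.]])

# Standard graph Laplacian L = D - W, with D the diagonal of signed row sums.
L = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = np.linalg.eigh(L)  # ascending eigenvalues

# The smallest eigenvalue is negative here, so its eigenvector (rather than
# the constant vector at eigenvalue 0) plays the role of the Fiedler vector.
fiedler = eigvecs[:, 0]
partition = fiedler >= 0
print(eigvals)     # approximately [-3, 0, 1] for this toy graph
print(partition)   # separates {0, 1} from {2} (up to a global sign)
```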
Sensor Fusion for Public Space Utilization Monitoring in a Smart City | Public space utilization is crucial for urban developers to understand how
efficiently a place is being occupied in order to improve existing or future
infrastructures. In a smart cities approach, implementing public space
monitoring with Internet-of-Things (IoT) sensors appears to be a viable
solution. However, the choice of sensors is a challenging problem, often
linked with scalability, coverage, energy consumption, accuracy, and privacy.
To get the most from low-cost sensors with the aforementioned design goals in
mind, we propose data processing modules for capturing public space utilization
with a Renewable Wireless Sensor Network (RWSN) platform using pyroelectric
infrared (PIR) and analog sound sensors. We first propose a calibration process
to remove false alarms of the PIR sensor due to the impact of weather and
environment. We then demonstrate how the sound sensor data can be processed to
provide various insights into a public space. Lastly, we fuse both sensors and
study the utilization of a particular public space based on one month of data
to unveil its usage.
| 1 | 0 | 0 | 1 | 0 | 0 |
Symplectic integrators for second-order linear non-autonomous equations | Two families of symplectic methods specially designed for second-order
time-dependent linear systems are presented. Both are obtained from the Magnus
expansion of the corresponding first-order equation, but otherwise they differ
in significant aspects. The first family is addressed to problems with low to
moderate dimension, whereas the second is more appropriate when the dimension
is large, in particular when the system corresponds to a linear wave equation
previously discretised in space. Several numerical experiments illustrate the
main features of the new schemes.
| 0 | 0 | 1 | 0 | 0 | 0 |
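For reference, a second-order linear system $y'' = M(t)\,y$ can be recast as a first-order system $Y' = A(t)\,Y$ with $Y = (y, y')^{\mathsf T}$, and the Magnus expansion then represents the solution as $Y(t) = \exp(\Omega(t))\,Y(0)$, with leading terms

```latex
\Omega(t) = \int_0^t A(s_1)\,ds_1
  + \frac{1}{2} \int_0^t \int_0^{s_1} \bigl[ A(s_1), A(s_2) \bigr]\, ds_2\, ds_1
  + \cdots
```

Truncating the series and approximating the integrals by quadrature at different orders is what yields concrete schemes of this kind.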
Deep Reinforcement Learning: Framework, Applications, and Embedded Implementations | The recent breakthroughs of deep reinforcement learning (DRL) technique in
Alpha Go and playing Atari have set a good example in handling large state and
actions spaces of complicated control problems. The DRL technique is comprised
of (i) an offline deep neural network (DNN) construction phase, which derives
the correlation between each state-action pair of the system and its value
function, and (ii) an online deep Q-learning phase, which adaptively derives
the optimal action and updates value estimates. In this paper, we first present
the general DRL framework, which can be widely utilized in many applications
with different optimization objectives. This is followed by the introduction of
three specific applications: the cloud computing resource allocation problem,
the residential smart grid task scheduling problem, and the building HVAC
system optimal control problem. The effectiveness of the DRL technique in these
three cyber-physical applications has been validated. Finally, this paper
investigates stochastic computing-based hardware implementations of the DRL
framework, which achieve significant improvements in area efficiency and power
consumption compared with binary-based implementation counterparts.
| 1 | 0 | 0 | 0 | 0 | 0 |
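For reference, the value update underlying the online deep Q-learning phase described above, in its tabular form; the deep variant replaces the table with a DNN approximator $Q(s, a; \theta)$.

```latex
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \bigl[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \bigr]
```

Here $\alpha$ is the learning rate, $\gamma$ the discount factor, and $r_t$ the observed reward.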
Experimental observation of node-line-like surface states in LaBi | In a Dirac nodal line semimetal, the bulk conduction and valence bands touch
at extended lines in the Brillouin zone. To date, most of the theoretically
predicted and experimentally discovered nodal lines derive from the bulk bands
of two- and three-dimensional materials. Here, based on combined angle-resolved
photoemission spectroscopy measurements and first-principles calculations, we
report the discovery of node-line-like surface states on the (001) surface of
LaBi. These bands derive from the topological surface states of LaBi and bridge
the band gap opened by spin-orbit coupling and band inversion. Our
first-principles calculations reveal that these "nodal lines" have a tiny gap,
which is beyond typical experimental resolution. These results may provide
important information to understand the extraordinary physical properties of
LaBi, such as the extremely large magnetoresistance and resistivity plateau.
| 0 | 1 | 0 | 0 | 0 | 0 |
Manin's conjecture for a class of singular cubic hypersurfaces | Let $n$ be a positive multiple of $4$. We establish an asymptotic formula for
the number of rational points of bounded height on singular cubic hypersurfaces
$S_n$ defined by $$ x^3=(y_1^2 + \cdots + y_n^2)z . $$ This result is new in
two aspects: first, it can be viewed as a modest start on the study of density
of rational points on those singular cubic hypersurfaces which are not covered
by the classical theorems of Davenport or Heath-Brown; second, it proves
Manin's conjecture for singular cubic hypersurfaces $S_n$ defined above.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Flexible Approach to Automated RNN Architecture Generation | The process of designing neural architectures requires expert knowledge and
extensive trial and error. While automated architecture search may simplify
these requirements, the recurrent neural network (RNN) architectures generated
by existing methods are limited in both flexibility and components. We propose
a domain-specific language (DSL) for use in automated architecture search which
can produce novel RNNs of arbitrary depth and width. The DSL is flexible enough
to define standard architectures such as the Gated Recurrent Unit and Long
Short Term Memory and allows the introduction of non-standard RNN components
such as trigonometric curves and layer normalization. Using two different
candidate generation techniques, random search with a ranking function and
reinforcement learning, we explore the novel architectures produced by the RNN
DSL for language modeling and machine translation domains. The resulting
architectures do not follow human intuition yet perform well on their targeted
tasks, suggesting the space of usable RNN architectures is far larger than
previously assumed.
| 1 | 0 | 0 | 1 | 0 | 0 |
Convexification of Neural Graph | Traditionally, most complex intelligence architectures are extremely
non-convex, which makes them ill-suited to convex optimization. However,
this paper decomposes complex structures into three types of nodes: operators,
algorithms and functions. Propagating iteratively from node to node along
edges, we prove that "regarding the tree-structured neural graph, it is nearly
convex in each variable, when the other variables are fixed." In fact, the
non-convex properties stem from circles and functions, which could be
transformed to be convex with our proposed \textit{\textbf{scale mechanism}}.
Experimentally, we justify our theoretical analysis by two practical
applications.
| 0 | 0 | 0 | 1 | 0 | 0 |
Pseudogaps in strongly interacting Fermi gases | A central challenge in modern condensed matter physics is developing the
tools for understanding nontrivial yet unordered states of matter. One
important idea to emerge in this context is that of a "pseudogap": the fact
that under appropriate circumstances the normal state displays a suppression of
the single particle spectral density near the Fermi level, reminiscent of the
gaps seen in ordered states of matter. While these concepts arose in a solid
state context, they are now being explored in cold gases. This article reviews
the current experimental and theoretical understanding of the normal state of
strongly interacting Fermi gases, with particular focus on the phenomenology
which is traditionally associated with the pseudogap.
| 0 | 1 | 0 | 0 | 0 | 0 |
Resolution-Exact Planner for Thick Non-Crossing 2-Link Robots | We consider the path planning problem for a 2-link robot amidst polygonal
obstacles. Our robot is parametrizable by the lengths $\ell_1, \ell_2>0$ of its
two links, the thickness $\tau \ge 0$ of the links, and an angle $\kappa$ that
constrains the angle between the 2 links to be strictly greater than $\kappa$.
The case $\tau>0$ and $\kappa \ge 0$ corresponds to "thick non-crossing"
robots. This results in a novel 4DOF configuration space ${\mathbb R}^2\times
({\mathbb T}\setminus\Delta(\kappa))$ where ${\mathbb T}$ is the torus and
$\Delta(\kappa)$ the diagonal band of width $\kappa$. We design a
resolution-exact planner for this robot using the framework of Soft Subdivision
Search (SSS). First, we provide an analysis of the space of forbidden angles,
leading to a soft predicate for classifying configuration boxes. We further
exploit the T/R splitting technique which was previously introduced for
self-crossing thin 2-link robots. Our open-source implementation in Core
Library achieves real-time performance for a suite of combinatorially
non-trivial obstacle sets. Experimentally, our algorithm is significantly
better than any of the state-of-the-art sampling algorithms we looked at, in timing
and in success rate.
| 1 | 0 | 0 | 0 | 0 | 0 |
Easing Embedding Learning by Comprehensive Transcription of Heterogeneous Information Networks | Heterogeneous information networks (HINs) are ubiquitous in real-world
applications. In the meantime, network embedding has emerged as a convenient
tool to mine and learn from networked data. As a result, it is of interest to
develop HIN embedding methods. However, the heterogeneity in HINs introduces
not only rich information but also potentially incompatible semantics, which
poses special challenges to embedding learning in HINs. With the intention to
preserve the rich yet potentially incompatible information in HIN embedding, we
propose to study the problem of comprehensive transcription of heterogeneous
information networks. The comprehensive transcription of HINs also provides an
easy-to-use approach to unleash the power of HINs, since it requires no
additional supervision, expertise, or feature engineering. To cope with the
challenges in the comprehensive transcription of HINs, we propose the HEER
algorithm, which embeds HINs via edge representations that are further coupled
with properly-learned heterogeneous metrics. To corroborate the efficacy of
HEER, we conducted experiments on two large-scale real-world datasets with an
edge reconstruction task and multiple case studies. Experiment results
demonstrate the effectiveness of the proposed HEER model and the utility of
edge representations and heterogeneous metrics. The code and data are available
at this https URL.
| 1 | 0 | 0 | 0 | 0 | 0 |
Generic Dynamical Phase Transition in One-Dimensional Bulk-Driven Lattice Gases with Exclusion | Dynamical phase transitions are crucial features of the fluctuations of
statistical systems, corresponding to boundaries between qualitatively
different mechanisms of maintaining unlikely values of dynamical observables
over long periods of time. They manifest themselves in the form of
non-analyticities in the large deviation function of those observables. In this
paper, we look at bulk-driven exclusion processes with open boundaries. It is
known that the standard asymmetric simple exclusion process exhibits a
dynamical phase transition in the large deviations of the current of particles
flowing through it. That phase transition has been described thanks to specific
calculation methods relying on the model being exactly solvable, but more
general methods have also been used to describe the extreme large deviations of
that current, far from the phase transition. We extend those methods to a large
class of models based on the ASEP, where we add arbitrary spatial
inhomogeneities in the rates and short-range potentials between the particles.
We show that, as for the regular ASEP, the large deviation function of the
current scales differently with the size of the system if one considers very
high or very low currents, pointing to the existence of a dynamical phase
transition between those two regimes: high current large deviations are
extensive in the system size, and the typical states associated to them are
Coulomb gases, which are correlated; low current large deviations do not
depend on the system size, and the typical states associated to them are
anti-shocks, consistently with a hydrodynamic behaviour. Finally, we illustrate
our results numerically on a simple example, and we interpret the transition in
terms of the current pushing beyond its maximal hydrodynamic value, as well as
relate it to the appearance of Tracy-Widom distributions in the relaxation
statistics of such models.
| 0 | 1 | 0 | 0 | 0 | 0 |
Semianalytical calculation of the zonal-flow oscillation frequency in stellarators | Due to their capability to reduce turbulent transport in magnetized plasmas,
understanding the dynamics of zonal flows is an important problem in the fusion
programme. Since the pioneering work by Rosenbluth and Hinton in axisymmetric
tokamaks, it is known that studying the linear and collisionless relaxation of
zonal flow perturbations gives valuable information and physical insight.
Recently, the problem has been investigated in stellarators and it has been
found that in these devices the relaxation process exhibits a characteristic
feature: a damped oscillation. The frequency of this oscillation might be a
relevant parameter in the regulation of turbulent transport, and therefore its
efficient and accurate calculation is important. Although an analytical
expression can be derived for the frequency, its numerical evaluation is not
simple and has not been exploited systematically so far. Here, a numerical
method for its evaluation is considered, and the results are compared with
those obtained by calculating the frequency from gyrokinetic simulations. This
"semianalytical" approach for the determination of the zonal-flow frequency
proves accurate and faster than the one based on gyrokinetic simulations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Geometric phase of a moving dipole under a magnetic field at a distance | We predict a geometric quantum phase shift of a moving electric dipole in the
presence of an external magnetic field at a distance. On the basis of the
Lorentz-covariant field interaction approach, we show that a geometric phase
appears under the condition that the dipole is moving in the field-free region,
which is distinct from the topological He-McKellar-Wilkens phase generated by a
direct overlap of the dipole and the field. We discuss the experimental
feasibility of detecting this phase with atomic interferometry and argue that
detection of this phase would result in a deeper understanding of the locality
in quantum electromagnetic interaction.
| 0 | 1 | 0 | 0 | 0 | 0 |
A refined count of Coxeter element factorizations | For well-generated complex reflection groups, Chapuy and Stump gave a simple
product for a generating function counting reflection factorizations of a
Coxeter element by their length. This is refined here to record the number of
reflections used from each orbit of hyperplanes. The proof is case-by-case via
the classification of well-generated groups. It implies a new expression for
the Coxeter number, expressed via data coming from a hyperplane orbit; a
case-free proof of this due to J. Michel is included.
| 0 | 0 | 1 | 0 | 0 | 0 |
Tuning the magnetism of the top-layer FeAs on BaFe$_{2}$As$_{2}$(001): First-principles study | The magnetic properties of BaFe$_{2}$As$_{2}$(001) surface have been studied
by using first-principles electronic structure calculations. We find that for
the As-terminated surface the magnetic ground state of the top-layer FeAs is in
the staggered dimer antiferromagnetic (AFM) order, while for the Ba-terminated
surface the collinear (single stripe) AFM order is the most stable. When a
certain coverage of Ba or K atoms is deposited onto the As-terminated surface, the
calculated energy differences among different AFM orders for the top-layer FeAs
on BaFe$_{2}$As$_{2}$(001) can be much reduced, indicating enhanced spin
fluctuations. To identify the novel staggered dimer AFM order for the As
termination, we have simulated the scanning tunneling microscopy (STM) image
for this state, which shows a different $\sqrt{2}\times\sqrt{2}$ pattern from
the case of half Ba coverage. Our results suggest: i) the magnetic properties
of the top-layer FeAs on BaFe$_{2}$As$_{2}$(001) can be tuned effectively by
surface doping; ii) both the surface termination and the AFM order in the
top-layer FeAs can affect the STM image of BaFe$_{2}$As$_{2}$(001).
| 0 | 1 | 0 | 0 | 0 | 0 |
Cubature methods to solve BSDEs: Error expansion and complexity control | We obtain an explicit error expansion for the solution of Backward Stochastic
Differential Equations (BSDEs) using the cubature on Wiener spaces method. The
result is proved under a mild strengthening of the assumptions needed for the
application of the cubature method. The explicit expansion can then be used to
construct implementable higher order approximations via Richardson-Romberg
extrapolation. To allow for an effective efficiency improvement of the
extrapolated algorithm, we introduce an additional projection on sparse grids,
and study the resulting complexity reduction. Numerical examples are provided
to illustrate our results.
| 0 | 0 | 1 | 0 | 0 | 0 |
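For context, a generic Richardson-Romberg combination of the kind such an error expansion enables: if the leading error order $\alpha$ is known, two runs at step sizes $h$ and $h/2$ cancel it. The exponents actually available follow from the paper's expansion, not from this sketch.

```latex
Y_h = Y + C\,h^{\alpha} + o(h^{\alpha})
\quad \Longrightarrow \quad
\widehat{Y} = \frac{2^{\alpha}\, Y_{h/2} - Y_h}{2^{\alpha} - 1} = Y + o(h^{\alpha})
```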
An Earth-mass Planet in a 1-AU Orbit around an Ultracool Dwarf | We combine $Spitzer$ and ground-based KMTNet microlensing observations to
identify and precisely measure an Earth-mass ($1.43^{+0.45}_{-0.32} M_\oplus$)
planet OGLE-2016-BLG-1195Lb at $1.16^{+0.16}_{-0.13}$ AU orbiting a
$0.078^{+0.016}_{-0.012} M_\odot$ ultracool dwarf. This is the lowest-mass
microlensing planet to date. At $3.91^{+0.42}_{-0.46}$ kpc, it is the third
consecutive case among the $Spitzer$ "Galactic distribution" planets toward the
Galactic bulge that lies in the Galactic disk as opposed to the bulge itself,
hinting at a skewed distribution of planets. Together with previous
microlensing discoveries, the seven Earth-size planets orbiting the ultracool
dwarf TRAPPIST-1, and the detection of disks around young brown dwarfs,
OGLE-2016-BLG-1195Lb suggests that such planets might be common around
ultracool dwarfs. It therefore sheds light on the formation of both ultracool
dwarfs and planetary systems at the limit of low-mass protoplanetary disks.
| 0 | 1 | 0 | 0 | 0 | 0 |
Distributed Statistical Estimation and Rates of Convergence in Normal Approximation | This paper presents a class of new algorithms for distributed statistical
estimation that exploit divide-and-conquer approach. We show that one of the
key benefits of the divide-and-conquer strategy is robustness, an important
characteristic for large distributed systems. We establish connections between
performance of these distributed algorithms and the rates of convergence in
normal approximation, and prove non-asymptotic deviation guarantees, as well
as limit theorems, for the resulting estimators. Our techniques are illustrated
through several examples: in particular, we obtain new results for the
median-of-means estimator, as well as provide performance guarantees for
distributed maximum likelihood estimation.
| 0 | 0 | 1 | 1 | 0 | 0 |
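A minimal sketch of the median-of-means estimator mentioned above; the block count and the heavy-tailed test data are our own illustration.

```python
import numpy as np

def median_of_means(x, k):
    """Split the sample into k blocks, average each block, and return the
    median of the block means; robust to heavy tails and outliers."""
    blocks = np.array_split(np.asarray(x, dtype=float), k)
    return float(np.median([b.mean() for b in blocks]))

rng = np.random.default_rng(0)
sample = rng.standard_t(df=2, size=10_000)  # heavy-tailed, true mean 0
print(median_of_means(sample, k=20))
```

In a distributed setting each worker can compute its local block mean, so only one number per worker needs to be communicated.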
Connecting HL Tau to the Observed Exoplanet Sample | The Atacama Large Millimeter/submilimeter Array (ALMA) recently revealed a
set of nearly concentric gaps in the protoplanetary disk surrounding the young
star HL Tau. If these are carved by forming gas giants, this provides the first
set of orbital initial conditions for planets as they emerge from their birth
disks. Using N-body integrations, we have followed the evolution of the system
for 5 Gyr to explore the possible outcomes. We find that HL Tau initial
conditions scaled down to the size of typically observed exoplanet orbits
naturally produce several populations in the observed exoplanet sample. First,
for a plausible range of planetary masses, we can match the observed
eccentricity distribution of dynamically excited radial velocity giant planets
with eccentricities $>$ 0.2. Second, we roughly obtain the observed rate of hot
Jupiters around FGK stars. Finally, we obtain a high rate of planetary
ejections, $\approx 2$ per HL Tau-like system, but the small fraction of
stars observed to host giant planets makes it hard to match the rate of
free-floating planets inferred from microlensing observations. In view of
upcoming GAIA results, we also provide predictions for the expected mutual
inclination distribution, which is significantly broader than the absolute
inclination distributions typically considered by previous studies.
| 0 | 1 | 0 | 0 | 0 | 0 |
How Should a Robot Assess Risk? Towards an Axiomatic Theory of Risk in Robotics | Endowing robots with the capability of assessing risk and making risk-aware
decisions is widely considered a key step toward ensuring safety for robots
operating under uncertainty. But, how should a robot quantify risk? A natural
and common approach is to consider the framework whereby costs are assigned to
stochastic outcomes - an assignment captured by a cost random variable.
Quantifying risk then corresponds to evaluating a risk metric, i.e., a mapping
from the cost random variable to a real number. Yet, the question of what
constitutes a "good" risk metric has received little attention within the
robotics community. The goal of this paper is to explore and partially address
this question by advocating axioms that risk metrics in robotics applications
should satisfy in order to be employed as rational assessments of risk. We
discuss general representation theorems that precisely characterize the class
of metrics that satisfy these axioms (referred to as distortion risk metrics),
and provide instantiations that can be used in applications. We further discuss
pitfalls of commonly used risk metrics in robotics, and discuss additional
properties that one must consider in sequential decision making tasks. Our hope
is that the ideas presented here will lead to a foundational framework for
quantifying risk (and hence safety) in robotics applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
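As a concrete instance, conditional value-at-risk (CVaR) is a standard example of a distortion risk metric; below is a minimal sample-based sketch, with hypothetical cost samples of our own choosing.

```python
import numpy as np

def cvar(costs, alpha):
    """Sample conditional value-at-risk: the average of the worst
    (1 - alpha) fraction of cost outcomes."""
    costs = np.sort(np.asarray(costs, dtype=float))
    k = min(int(np.floor(alpha * len(costs))), len(costs) - 1)
    return float(costs[k:].mean())

outcomes = np.array([0.0, 0.1, 0.2, 5.0, 10.0])  # hypothetical cost samples
print(cvar(outcomes, alpha=0.8))  # mean of the worst 20% of outcomes: 10.0
```

Sweeping $\alpha$ from 0 to 1 interpolates between the risk-neutral expectation and the worst-case cost.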