title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Mixed Bohr radius in several variables | Let $K(B_{\ell_p^n},B_{\ell_q^n}) $ be the $n$-dimensional $(p,q)$-Bohr
radius for holomorphic functions on $\mathbb C^n$. That is,
$K(B_{\ell_p^n},B_{\ell_q^n}) $ denotes the greatest constant $r\geq 0$ such
that for every entire function $f(z)=\sum_{\alpha} c_{\alpha} z^{\alpha}$ in
$n$-complex variables, we have the following (mixed) Bohr-type inequality
$$\sup_{z \in r \cdot B_{\ell_q^n}} \sum_{\alpha} | c_{\alpha} z^{\alpha} |
\leq \sup_{z \in B_{\ell_p^n}} | f(z) |,$$ where $B_{\ell_r^n}$ denotes the
closed unit ball of the $n$-dimensional sequence space $\ell_r^n$.
For every $1 \leq p, q \leq \infty$, we exhibit the exact asymptotic growth
of the $(p,q)$-Bohr radius as $n$ (the number of variables) goes to infinity.
| 0 | 0 | 1 | 0 | 0 | 0 |
Adversarial Examples: Attacks and Defenses for Deep Learning | With rapid progress and significant successes in a wide spectrum of
applications, deep learning is being applied in many safety-critical
environments. However, deep neural networks have been recently found vulnerable
to well-designed input samples, called adversarial examples. Adversarial
examples are imperceptible to humans but can easily fool deep neural networks at
the testing/deployment stage. This vulnerability has become one of the major
risks of applying deep neural networks in safety-critical environments, and
attacks on and defenses against adversarial examples have therefore drawn great
attention. In this paper, we review recent findings on adversarial
examples for deep neural networks, summarize the methods for generating
adversarial examples, and propose a taxonomy of these methods. Under the
taxonomy, applications for adversarial examples are investigated. We further
elaborate on countermeasures for adversarial examples and explore the
challenges and the potential solutions.
| 1 | 0 | 0 | 1 | 0 | 0 |
Auto-Keras: Efficient Neural Architecture Search with Network Morphism | Neural architecture search (NAS) has been proposed to automatically tune deep
neural networks, but existing search algorithms usually suffer from expensive
computational cost. Network morphism, which keeps the functionality of a neural
network while changing its neural architecture, could be helpful for NAS by
enabling a more efficient training during the search. In this paper, we propose
a novel framework enabling Bayesian optimization to guide the network morphism
for efficient neural architecture search by introducing a neural network kernel
and a tree-structured acquisition function optimization algorithm, which more
efficiently explores the search space. Intensive experiments have been done to
demonstrate the superior performance of the developed framework over the
state-of-the-art methods. Moreover, we build an open-source AutoML system on
our method, namely Auto-Keras. The system runs in parallel on CPU and GPU, with
an adaptive search strategy for different GPU memory limits.
| 0 | 0 | 0 | 1 | 0 | 0 |
Distributed Representation of Subgraphs | Network embeddings have become very popular in learning effective feature
representations of networks. Motivated by the recent successes of embeddings in
natural language processing, researchers have tried to find network embeddings
in order to exploit machine learning algorithms for mining tasks like node
classification and edge prediction. However, most of the work focuses on
finding distributed representations of nodes, which are inherently ill-suited
to tasks such as community detection which are intuitively dependent on
subgraphs.
Here, we propose sub2vec, an unsupervised scalable algorithm to learn feature
representations of arbitrary subgraphs. We provide means to characterize
similarities between subgraphs, give a theoretical analysis of sub2vec, and
demonstrate that it preserves the so-called local proximity. We also highlight
the usability of sub2vec by leveraging it for network mining tasks, like
community detection. We show that sub2vec achieves significant gains over
state-of-the-art methods and node-embedding methods. In particular, sub2vec
offers an approach to generate a richer vocabulary of features of subgraphs to
support representation and reasoning.
| 1 | 0 | 0 | 1 | 0 | 0 |
The right tool for the right question --- beyond the encoding versus decoding dichotomy | There are two major questions that neuroimaging studies attempt to answer:
First, how are sensory stimuli represented in the brain (which we term the
stimulus-based setting)? And, second, how does the brain generate cognition
(termed the response-based setting)? There has been a lively debate in the
neuroimaging community whether encoding and decoding models can provide
insights into these questions. In this commentary, we construct two simple and
analytically tractable examples to demonstrate that while an encoding model
analysis helps with the former, neither model is appropriate to satisfactorily
answer the latter question. Consequently, we argue that if we want to
understand how the brain generates cognition, we need to move beyond the
encoding versus decoding dichotomy and instead discuss and develop tools that
are specifically tailored to our endeavour.
| 0 | 0 | 0 | 1 | 0 | 0 |
Recognising Axionic Dark Matter by Compton and de-Broglie Scale Modulation of Pulsar Timing | Light Axionic Dark Matter, motivated by string theory, is increasingly
favored for the "no-WIMP era". Galaxy formation is suppressed below a Jeans
scale of $\simeq 10^8 M_\odot$ by setting the axion mass to $m_B \sim
10^{-22}\,{\rm eV}$, and the large dark cores of dwarf galaxies are explained as
solitons on the de-Broglie scale. This is persuasive, but detection of the
inherent scalar field oscillation at the Compton frequency, $\omega_B= (2.5{\rm
\, months})^{-1}(m_B/10^{-22}\,{\rm eV})$, would be definitive. By evolving the coupled
Schrödinger-Poisson equation for a Bose-Einstein condensate, we predict the
dark matter is fully modulated by de-Broglie interference, with a dense soliton
core of size $\simeq 150\,$pc at the Galactic center. The oscillating field
pressure induces General Relativistic time dilation in proportion to the local
dark matter density and pulsars within this dense core have detectably large
timing residuals of $\simeq 400\,{\rm ns}/(m_B/10^{-22}\,{\rm eV})$. This is encouraging, as
many new pulsars should be discovered near the Galactic center with planned
radio surveys. More generally, over the whole Galaxy, differences in dark
matter density between pairs of pulsars imprint a pairwise Galactocentric
signature that can be distinguished from an isotropic gravitational wave
background.
| 0 | 1 | 0 | 0 | 0 | 0 |
Core of communities in bipartite networks | We use the information present in a bipartite network to detect cores of
communities of each set of the bipartite system. Cores of communities are found
by investigating statistically validated projected networks obtained using
information present in the bipartite network. Cores of communities are highly
informative and robust with respect to the presence of errors or missing
entries in the bipartite network. We assess the statistical robustness of cores
by investigating an artificial benchmark network, the co-authorship network,
and the actor-movie network. The accuracy and precision of the partition
obtained with respect to the reference partition are measured in terms of the
adjusted Rand index and the adjusted Wallace index, respectively. The
detection of cores is highly precise although the accuracy of the methodology
can be limited in some cases.
| 1 | 1 | 0 | 0 | 0 | 0 |
Low-Rank Hidden State Embeddings for Viterbi Sequence Labeling | In textual information extraction and other sequence labeling tasks it is now
common to use recurrent neural networks (such as LSTM) to form rich embedded
representations of long-term input co-occurrence patterns. Representation of
output co-occurrence patterns is typically limited to a hand-designed graphical
model, such as a linear-chain CRF representing short-term Markov dependencies
among successive labels. This paper presents a method that learns embedded
representations of latent output structure in sequence data. Our model takes
the form of a finite-state machine with a large number of latent states per
label (a latent variable CRF), where the state-transition matrix is
factorized---effectively forming an embedded representation of
state-transitions capable of enforcing long-term label dependencies, while
supporting exact Viterbi inference over output labels. We demonstrate accuracy
improvements and interpretable latent structure in a synthetic but complex task
based on CoNLL named entity recognition.
| 1 | 0 | 0 | 0 | 0 | 0 |
Zeroth-Order Online Alternating Direction Method of Multipliers: Convergence Analysis and Applications | In this paper, we design and analyze a new zeroth-order online algorithm,
namely, the zeroth-order online alternating direction method of multipliers
(ZOO-ADMM), which enjoys the dual advantages of gradient-free operation and of
employing ADMM to accommodate complex structured regularizers. Compared to
the first-order gradient-based online algorithm, we show that ZOO-ADMM requires
$\sqrt{m}$ times more iterations, leading to a convergence rate of
$O(\sqrt{m}/\sqrt{T})$, where $m$ is the number of optimization variables, and
$T$ is the number of iterations. To accelerate ZOO-ADMM, we propose two
minibatch strategies: gradient sample averaging and observation averaging,
resulting in an improved convergence rate of $O(\sqrt{1+q^{-1}m}/\sqrt{T})$,
where $q$ is the minibatch size. In addition to the convergence analysis, we
also demonstrate the applicability of ZOO-ADMM to problems in signal processing,
statistics, and machine learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
Stationary solutions for stochastic damped Navier-Stokes equations in $\mathbb R^d$ | We consider the stochastic damped Navier-Stokes equations in $\mathbb R^d$
($d=2,3$), assuming as in our previous work [4] that the covariance of the
noise is not too regular, so Itô calculus cannot be applied in the space of
finite energy vector fields. We prove the existence of an invariant measure
when $d=2$ and of a stationary solution when $d=3$.
| 0 | 0 | 1 | 0 | 0 | 0 |
On compact packings of the plane with circles of three radii | A compact circle-packing $P$ of the Euclidean plane is a set of circles which
bound mutually disjoint open discs with the property that, for every circle
$S\in P$, there exists a maximal indexed set $\{A_{0},\ldots,A_{n-1}\}\subseteq
P$ so that, for every $i\in\{0,\ldots,n-1\}$, the circle $A_{i}$ is tangent to
both circles $S$ and $A_{(i+1)\bmod n}$.
We show that there exist at most $11462$ pairs $(r,s)$ with $0<s<r<1$ for
which there exists a compact circle-packing of the plane consisting of circles
with radii $s$, $r$ and $1$.
We discuss computing the exact values of such $0<s<r<1$ as roots of
polynomials and exhibit a selection of compact circle-packings consisting of
circles of three radii. We also discuss the apparent infeasibility of computing
all these values on contemporary consumer hardware.
| 0 | 0 | 1 | 0 | 0 | 0 |
Next Basket Prediction using Recurring Sequential Patterns | Nowadays, a hot challenge for supermarket chains is to offer personalized
services for their customers. Next basket prediction, i.e., supplying the
customer a shopping list for the next purchase according to her current needs,
is one of these services. Current approaches are not able to capture
simultaneously the different factors influencing the customer's decision
process: the co-occurrence, sequentiality, periodicity and recurrence of the
purchased items. To this aim, we define a pattern, the Temporal Annotated
Recurring Sequence (TARS), able to capture all these factors simultaneously and
adaptively. We
define the method to extract TARS and develop a predictor for next basket named
TBP (TARS Based Predictor) that, on top of TARS, is able to understand the
level of the customer's stocks and recommend the set of most necessary items.
By adopting TBP, supermarket chains could craft tailored suggestions for each
individual customer, which in turn could effectively speed up their shopping
sessions. Extensive experimentation shows that TARS are able to explain
the customer purchase behavior, and that TBP outperforms the state-of-the-art
competitors.
| 1 | 0 | 0 | 0 | 0 | 0 |
Solving nonlinear circuits with pulsed excitation by multirate partial differential equations | In this paper the concept of Multirate Partial Differential Equations (MPDEs)
is applied to obtain an efficient solution for nonlinear low-frequency
electrical circuits with pulsed excitation. The MPDEs are solved by a Galerkin
approach and a conventional time discretization. Nonlinearities are efficiently
accounted for by neglecting the high-frequency components (ripples) of the
state variables and using only their envelope for the evaluation. It is shown
that the impact of this approximation on the solution becomes increasingly
negligible with rising frequency and leads to significant performance gains.
| 1 | 0 | 0 | 0 | 0 | 0 |
Optimal modification of the LRT for the equality of two high-dimensional covariance matrices | This paper considers the optimal modification of the likelihood ratio test
(LRT) for the equality of two high-dimensional covariance matrices. The
classical LRT is not well defined when the dimension is larger than or equal
to one of the sample sizes. In this paper, an optimally modified test that
works well in cases where the dimension may be larger than the sample sizes is
proposed. In addition, the test is established under the weakest conditions on
the moments and the dimensions of the samples. We also present weakly
consistent estimators of the fourth moments, which are necessary for the
proposed test, when they are not equal to 3. From the simulation results and
real data analysis, we find that the performances of the proposed statistics
are robust against affine transformations.
| 0 | 0 | 1 | 1 | 0 | 0 |
Feature uncertainty bounding schemes for large robust nonlinear SVM classifiers | We consider the binary classification problem when data are large and subject
to unknown but bounded uncertainties. We address the problem by formulating the
nonlinear support vector machine training problem with robust optimization. To
do so, we analyze and propose two bounding schemes for uncertainties associated
with random approximate features in low-dimensional spaces. The proposed
techniques are based on Random Fourier Features and the Nyström methods. The
resulting formulations can be solved with efficient stochastic approximation
techniques such as stochastic (sub)-gradient, stochastic proximal gradient
techniques or their variants.
| 1 | 0 | 0 | 1 | 0 | 0 |
Impact of carrier localization on recombination in InGaN quantum wells and the efficiency of nitride light-emitting diodes: insights from theory and numerical simulations | We examine the effect of carrier localization due to random alloy
fluctuations on the radiative and Auger recombination rates in InGaN quantum
wells as a function of alloy composition, crystal orientation, carrier density,
and temperature. Our results show that alloy fluctuations reduce individual
transition matrix elements by the separate localization of electrons and holes,
but this effect is overcompensated by the additional transitions enabled by
translational symmetry breaking and the resulting lack of momentum
conservation. Hence, we find that localization increases both radiative and
Auger recombination rates, but that Auger recombination rates increase by one
order of magnitude more than radiative rates. Furthermore, we demonstrate that
localization has an overall detrimental effect on the efficiency-droop and
green-gap problems of InGaN LEDs.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines | Information in neural networks is represented as weighted connections, or
synapses, between neurons. This poses a problem as the primary computational
bottleneck for neural networks is the vector-matrix multiply when inputs are
multiplied by the neural network weights. Conventional processing architectures
are not well suited for simulating neural networks, often requiring large
amounts of energy and time. Additionally, synapses in biological neural
networks are not binary connections, but exhibit a nonlinear response function
as neurotransmitters are emitted and diffuse between neurons. Inspired by
neuroscience principles, we present a digital neuromorphic architecture, the
Spiking Temporal Processing Unit (STPU), capable of modeling arbitrary complex
synaptic response functions without requiring additional hardware components.
We consider the paradigm of spiking neurons with temporally coded information
as opposed to non-spiking rate coded neurons used in most neural networks. In
this paradigm we examine liquid state machines applied to speech recognition
and show how a liquid state machine with temporal dynamics maps onto the
STPU, demonstrating the flexibility and efficiency of the STPU for instantiating
neural algorithms.
| 1 | 0 | 0 | 1 | 0 | 0 |
Volume of representations and mapping degree | Given a connected real Lie group and a contractible homogeneous proper
$G$--space $X$ furnished with a $G$--invariant volume form, a real valued
volume can be assigned to any representation $\rho\colon \pi_1(M)\to G$ for any
oriented closed smooth manifold $M$ of the same dimension as $X$. Suppose that
$G$ contains a closed and cocompact semisimple subgroup, it is shown in this
paper that the set of volumes is finite for any given $M$. From a perspective
of model geometries, examples are investigated and applications with mapping
degrees are discussed.
| 0 | 0 | 1 | 0 | 0 | 0 |
Transiting Planets with LSST III: Detection Rate per Year of Operation | The Large Synoptic Survey Telescope (LSST) will generate light curves for
approximately 1 billion stars. Our previous work has demonstrated that, by the
end of the LSST 10 year mission, large numbers of transiting exoplanetary
systems could be recovered using the LSST "deep drilling" cadence. Here we
extend our previous work to examine how the recoverability of transiting
planets over a range of orbital periods and radii evolves per year of LSST
operation. As specific example systems we consider hot Jupiters orbiting
solar-type stars and hot Neptunes orbiting K-dwarfs at distances from Earth of
several kpc, as well as super-Earths orbiting nearby low-mass M-dwarfs. The
detection fraction of transiting planets increases steadily with the
accumulation of data over time, generally becoming large (greater than 10
percent) after 4-6 years of operation. However, we also find that short-period
(shorter than 2 days)
hot Jupiters orbiting G-dwarfs and hot Neptunes orbiting K-dwarfs can already
be discovered within the first 1 - 2 years of LSST operation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Summing coincidence in rare event gamma-ray measurements under an ultra-low background environment | A Monte Carlo method based on the GEANT4 toolkit has been developed to
correct the full-energy peak (FEP) efficiencies of a high purity germanium
(HPGe) detector equipped with a low background shielding system, and moreover
to evaluate them numerically using summing peaks. It is found that the FEP
efficiencies of $^{60}$Co, $^{133}$Ba and $^{152}$Eu can be improved up to 18\%
by taking the calculated true summing \mbox{coincidence} factors (TSCFs)
correction into account. Counts of summing coincidence $\gamma$ peaks in the
spectrum of $^{152}$Eu can be well reproduced using the corrected efficiency
curve within an accuracy of 3\%.
| 0 | 1 | 0 | 0 | 0 | 0 |
Free constructions and coproducts of d-frames | A general theory of presentations for d-frames does not yet exist. We review
the difficulties and give sufficient conditions for when they can be overcome.
As an application we prove that the category of d-frames is closed under
coproducts.
| 1 | 0 | 1 | 0 | 0 | 0 |
Trace Expressiveness of Timed and Probabilistic Automata | Automata expressiveness is an essential feature in understanding which of the
formalisms available should be chosen for modelling a particular problem.
Probabilistic and stochastic automata are suitable for modelling systems
exhibiting probabilistic behavior and their expressiveness has been studied
relative to non-probabilistic transition systems and Markov chains. In this
paper, we consider previous formalisms of Timed, Probabilistic and Stochastic
Timed Automata; we present our new model of Timed Automata with Polynomial
Delay; we introduce a measure of expressiveness for automata that we call trace
expressiveness; and we characterize the expressiveness of these models relative
to each other under this new measure.
| 1 | 0 | 0 | 0 | 0 | 0 |
Open data, open review and open dialogue in making social sciences plausible | Nowadays, protecting trust in social sciences also means engaging in open
community dialogue, which helps to safeguard robustness and improve efficiency
of research methods. The combination of open data, open review and open
dialogue may sound simple but implementation in the real world will not be
straightforward. However, in view of Begley and Ellis's (2012) statement that,
"the scientific process demands the highest standards of quality, ethics and
rigour," they are worth implementing. More importantly, they are feasible to
work on and likely will help to restore plausibility to social sciences
research. Therefore, I feel it likely that the triplet of open data, open
review and open dialogue will gradually emerge to become policy requirements
regardless of the research funding source.
| 0 | 0 | 0 | 1 | 0 | 0 |
The isoperimetric problem in the 2-dimensional Finsler space forms with k = 0. II | This paper is a continuation of the second author's previous work. We
investigate the isoperimetric problem in the 2-dimensional Finsler space form
$(F_B, B^2(1))$ with $k=0$ by using the Holmes-Thompson area, and prove that
the circle centered at the origin locally maximizes the area in the
isoperimetric problem.
| 0 | 0 | 1 | 0 | 0 | 0 |
Estimating Buildings' Parameters over Time Including Prior Knowledge | Modeling buildings' heat dynamics is a complex process which depends on
various factors including weather, building thermal capacity, insulation
preservation, and residents' behavior. Gray-box models offer a causal inference
of those dynamics expressed in few parameters specific to built environments.
These parameters can provide compelling insights into the characteristics of
building artifacts and have various applications such as forecasting HVAC
usage, indoor temperature control, monitoring of built environments, etc. In
this paper, we present a systematic study of modeling buildings' thermal
characteristics and thus derive the parameters of built conditions with a
Bayesian approach. We build a Bayesian state-space model that can adapt and
incorporate buildings' thermal equations and propose a generalized solution
that can easily adapt prior knowledge regarding the parameters. We show that a
faster approximate approach using variational inference for parameter
estimation can provide similar parameters as that of a more time-consuming
Markov Chain Monte Carlo (MCMC) approach. We perform extensive evaluations on
two datasets to understand the generative process and show that the Bayesian
approach is more interpretable. We further study the effects of prior selection
for the model parameters and transfer learning, where we learn parameters from
one season and use them to fit the model in the other. We perform extensive
evaluations on controlled and real data traces to enumerate buildings'
parameters within a 95% credible interval.
| 1 | 0 | 0 | 1 | 0 | 0 |
Decentralized Connectivity-Preserving Deployment of Large-Scale Robot Swarms | We present a decentralized and scalable approach for deployment of a robot
swarm. Our approach tackles scenarios in which the swarm must reach multiple
spatially distributed targets, and enforce the constraint that the robot
network cannot be split. The basic idea behind our work is to construct a
logical tree topology over the physical network formed by the robots. The
logical tree acts as a backbone used by robots to enforce connectivity
constraints. We study and compare two algorithms to form the logical tree:
outwards and inwards. These algorithms differ in the order in which the robots
join the tree: the outwards algorithm starts at the tree root and grows towards
the targets, while the inwards algorithm proceeds in the opposite manner. Both
algorithms perform periodic reconfiguration, to prevent suboptimal topologies
from halting the growth of the tree. Our contributions are (i) The formulation
of the two algorithms; (ii) A comparison of the algorithms in extensive
physics-based simulations; (iii) A validation of our findings through
real-robot experiments.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bell's Inequality and Entanglement in Qubits | We propose an alternative evaluation of quantum entanglement by measuring the
maximum violation of the Bell's inequality without performing a partial trace
operation. This proposal is demonstrated by bridging the maximum violation of
the Bell's inequality and the concurrence of a pure state in an $n$-qubit
system, in which one subsystem only contains one qubit and the state is a
linear combination of two product states. We apply this relation to the ground
states of four qubits in the Wen-Plaquette model and show that they are
maximally entangled. A topological entanglement entropy of the Wen-Plaquette
model could be obtained by relating the upper bound of the maximum violation of
Bell's inequality to the concurrences of a pure state with respect to
different bipartitions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Stochastic Conjugate Gradient Algorithm with Variance Reduction | Conjugate gradient (CG) methods are a class of important methods for solving
linear equations and nonlinear optimization problems. In this paper, we propose
a new stochastic CG algorithm with variance reduction and we prove its linear
convergence with the Fletcher and Reeves method for strongly convex and smooth
functions. We experimentally demonstrate that the CG with variance reduction
algorithm converges faster than its counterparts for four learning models,
which may be convex, nonconvex or nonsmooth. In addition, its area under the
curve performance on six large-scale data sets is comparable to that of the
LIBLINEAR solver for the L2-regularized L2-loss, but with a significant
improvement in computational efficiency.
| 1 | 0 | 0 | 1 | 0 | 0 |
Weak in the NEES?: Auto-tuning Kalman Filters with Bayesian Optimization | Kalman filters are routinely used for many data fusion applications including
navigation, tracking, and simultaneous localization and mapping problems.
However, significant time and effort are frequently required to tune various
Kalman filter model parameters, e.g. process noise covariance, pre-whitening
filter models for non-white noise, etc. Conventional optimization techniques
for tuning can get stuck in poor local minima and can be expensive to implement
with real sensor data. To address these issues, a new "black box" Bayesian
optimization strategy is developed for automatically tuning Kalman filters. In
this approach, performance is characterized by one of two stochastic objective
functions: normalized estimation error squared (NEES) when ground truth state
models are available, or the normalized innovation error squared (NIS) when
only sensor data is available. By intelligently sampling the parameter space to
both learn and exploit a nonparametric Gaussian process surrogate function for
the NEES/NIS costs, Bayesian optimization can efficiently identify multiple
local minima and provide uncertainty quantification on its results.
| 1 | 0 | 0 | 1 | 0 | 0 |
Ultra-wide plasmonic tuning of semiconductor metasurface resonators on epsilon near zero media | Fully reconfigurable metasurfaces would enable new classes of optical devices
that provide unprecedented control of electromagnetic beamforms. The principal
challenge for achieving reconfigurability is the need to generate large
tunability of subwavelength, low-Q metasurface resonators. Here, we demonstrate
large refractive index tuning can be efficiently facilitated at mid-infrared
wavelengths using novel temperature-dependent control over free-carrier
refraction. In doped InSb we demonstrate a nearly two-fold increase in the
electron effective mass, leading to a positive refractive index shift
($\Delta n > 1.5$) far greater than conventional thermo-optic effects. In undoped
films we demonstrate a more than 10-fold change in the thermal free-carrier
concentration, producing a near-unity negative refractive index shift.
Exploiting both effects within a single resonator system, intrinsic InSb wires
on a heavily doped (epsilon near zero) InSb substrate, we demonstrate
dynamically tunable Mie resonances. The observed larger-than-linewidth
resonance shifts ($\Delta\lambda > 1.5\,\mu$m) suggest new avenues for highly
tunable and reconfigurable mid-infrared semiconductor metasurfaces.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Method for Analysis of Patient Speech in Dialogue for Dementia Detection | We present an approach to automatic detection of Alzheimer's type dementia
based on characteristics of spontaneous spoken language dialogue consisting of
interviews recorded in natural settings. The proposed method employs additive
logistic regression (a machine learning boosting method) on content-free
features extracted from dialogical interaction to build a predictive model. The
model training data consisted of 21 dialogues between patients with Alzheimer's
and interviewers, and 17 dialogues between patients with other health
conditions and interviewers. Features analysed included speech rate,
turn-taking patterns and other speech parameters. Despite relying solely on
content-free features, our method obtains an overall accuracy of 86.5\%, a result
comparable to those of state-of-the-art methods that employ more complex
lexical, syntactic and semantic features. While further investigation is
needed, the fact that we were able to obtain promising results using only
features that can be easily extracted from spontaneous dialogues suggests the
possibility of designing non-invasive and low-cost mental health monitoring
tools for use at scale.
| 1 | 0 | 0 | 0 | 0 | 0 |
Resilient Transmission Grid Design: AC Relaxation vs. DC approximation | As illustrated in recent years (Superstorm Sandy, the Northeast Ice Storm of
1998, etc.), extreme weather events pose an enormous threat to the electric
power transmission systems and the associated socio-economic systems that
depend on reliable delivery of electric power. Besides inevitable malfunction
of power grid components, deliberate malicious attacks can pose high risks to
the service. These threats motivate the need for approaches and methods that
improve the resilience of power systems. In this paper, we develop a model and
tractable methods for optimizing the upgrade of transmission systems through a
combination of hardening existing components, adding redundant lines, switches,
generators, and FACTS and phase-shifting devices. While many of these
controllable components are included in traditional design (expansion planning)
problems, we uniquely assess their benefits from a resiliency point of view.
Perhaps more importantly, we evaluate the suitability of using
state-of-the-art AC power flow relaxations versus the common DC approximation
in resilience improvement studies. The resiliency model and algorithms are
tested on a modified version of the RTS-96 (single area) system.
| 1 | 0 | 1 | 0 | 0 | 0 |
VSE++: Improving Visual-Semantic Embeddings with Hard Negatives | We present a new technique for learning visual-semantic embeddings for
cross-modal retrieval. Inspired by hard negative mining, the use of hard
negatives in structured prediction, and ranking loss functions, we introduce a
simple change to common loss functions used for multi-modal embeddings. That,
combined with fine-tuning and use of augmented data, yields significant gains
in retrieval performance. We showcase our approach, VSE++, on MS-COCO and
Flickr30K datasets, using ablation studies and comparisons with existing
methods. On MS-COCO our approach outperforms state-of-the-art methods by 8.8%
in caption retrieval and 11.3% in image retrieval (at R@1).
| 1 | 0 | 0 | 0 | 0 | 0 |
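The "simple change" to the loss can be made concrete: for a similarity matrix over an image-caption batch, the classic loss sums hinge terms over all negatives, while the VSE++ variant keeps only the hardest negative per anchor. The matrix below is a toy example:

```python
import numpy as np

# Sketch of the VSE++ idea on a toy similarity matrix S, where S[i, j] is
# the similarity of image i and caption j and the diagonal holds the
# positive (matching) pairs.
def hinge_losses(S, margin=0.2):
    pos = np.diag(S)
    cost_c = np.maximum(0, margin + S - pos[:, None])   # image -> captions
    cost_i = np.maximum(0, margin + S - pos[None, :])   # caption -> images
    mask = 1.0 - np.eye(len(S))                         # exclude positives
    sum_loss = ((cost_c + cost_i) * mask).sum()         # classic sum-of-hinges
    max_loss = (cost_c * mask).max(1).sum() + (cost_i * mask).max(0).sum()  # VSE++ max-of-hinges
    return sum_loss, max_loss

S = np.array([[0.50, 0.60, 0.55],
              [0.40, 0.50, 0.30],
              [0.20, 0.10, 0.50]])
sum_loss, max_loss = hinge_losses(S)
print(sum_loss, max_loss)   # the MH loss counts only the hardest negative per anchor
```

Because every hinge term is nonnegative, the max-of-hinges loss is never larger than the sum-of-hinges loss, but its gradient concentrates on the hardest negatives.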
The cooling-off effect of price limits in the Chinese stock markets | In this paper, we investigate the cooling-off effect (opposite to the magnet
effect) from two aspects. Firstly, from the viewpoint of dynamics, we study the
existence of the cooling-off effect by following the dynamical evolution of
some financial variables over a period of time before the stock price hits its
limit. Secondly, from the probability perspective, we investigate, with the
logit model, the existence of the cooling-off effect through analyzing the
high-frequency data of all A-share common stocks traded on the Shanghai Stock
Exchange and the Shenzhen Stock Exchange from 2000 to 2011 and inspecting the
trading period from the opening phase prior to the moment that the stock price
hits its limits. A comparison is made of the properties between up-limit hits
and down-limit hits, and possible differences between bullish and bearish
market states are also examined by dividing the whole period into three
alternating bullish periods and three bearish periods. We find that the
cooling-off effect emerges for both up-limit hits and down-limit hits, and the
cooling-off effect of the down-limit hits is stronger than that of the up-limit
hits. The difference in the cooling-off effect between the bullish and
bearish periods is quite modest. Moreover, we examine the sub-optimal orders
effect, and infer that the professional individual investors and institutional
investors play a positive role in the cooling-off effects. All these findings
indicate that the price limit trading rule exerts a positive effect on
maintaining the stability of the Chinese stock markets.
| 0 | 0 | 0 | 0 | 0 | 1 |
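The logit model used in the probability analysis can be fitted by the standard iteratively reweighted least squares (Newton) scheme; the snippet below does this on synthetic data (the covariate and coefficients are invented, not the Shanghai/Shenzhen tick data):

```python
import numpy as np

# Minimal logit-model fit by iteratively reweighted least squares: relate a
# pre-hit variable (e.g. a volatility proxy) to the odds of a limit hit.
rng = np.random.default_rng(2)
x = rng.normal(size=500)
X = np.c_[np.ones_like(x), x]                 # intercept + one covariate
true_beta = np.array([-1.0, 2.0])
p_true = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p_true)                   # 1 = the price hits its limit

beta = np.zeros(2)
for _ in range(25):                           # Newton / IRLS iterations
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1 - p)                           # per-observation weights
    H = (X.T * W) @ X                         # Hessian of the log-likelihood
    beta += np.linalg.solve(H, X.T @ (y - p))

print("estimated coefficients:", beta)
```

The recovered coefficients approach the generating values as the sample grows, which is all the cooling-off analysis needs from the logit machinery.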
When Anderson localization makes quantum particles move backward | We unveil a novel and unexpected manifestation of Anderson localization of
matter wave packets that carry a finite average velocity: after an initial
ballistic motion, the packet center-of-mass experiences a retroreflection and
slowly returns to its initial position. We describe this effect both
numerically and analytically in dimension 1, and show that it is destroyed by
weak particle interactions which act as a decoherence process. The
retroreflection is also present in higher dimensions, provided the dynamics is
Anderson localized.
| 0 | 1 | 0 | 0 | 0 | 0 |
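A minimal numerical sketch of the 1D setting (toy parameters, exact evolution through the eigenbasis of a disordered tight-binding Hamiltonian): launch a Gaussian packet with finite momentum and track its centre of mass, which first moves ballistically and then freezes; the retroreflection appears as a slow drift back over long times.

```python
import numpy as np

# 1D Anderson model: H = diagonal disorder + nearest-neighbour hopping.
rng = np.random.default_rng(3)
N, W, k0 = 400, 2.0, 1.0                 # sites, disorder strength, momentum
x = np.arange(N)
H = (np.diag(W * (rng.random(N) - 0.5))
     - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1))
E, V = np.linalg.eigh(H)

psi0 = np.exp(-((x - N / 4) ** 2) / (2 * 10 ** 2) + 1j * k0 * x)
psi0 /= np.linalg.norm(psi0)
c = V.conj().T @ psi0                    # expansion in the eigenbasis

def com(t):
    psi = V @ (np.exp(-1j * E * t) * c)  # exact time evolution
    return float(x @ np.abs(psi) ** 2)   # centre of mass

times = [0, 5, 50, 500]
centers = [com(t) for t in times]
print(dict(zip(times, centers)))
```

The short-time centre of mass moves in the direction of the initial momentum; at long times it saturates near a disorder-dependent position, and averaging over disorder realizations reveals the slow return discussed above.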
On the origin of super-diffusive behavior in a class of non-equilibrium systems | Experiments and simulations have established that dynamics in a class of
living and abiotic systems that are far from equilibrium exhibit
super-diffusive behavior at long times, which in some cases (for example an evolving
tumor) is preceded by slow glass-like dynamics. By using the evolution of a
collection of tumor cells, driven by mechanical forces and subject to cell
birth and apoptosis, as a case study we show theoretically that on short time
scales the mean square displacement is sub-diffusive due to jamming, whereas at
long times it is super-diffusive. The results, obtained using the stochastic
quantization method, which is needed because of the absence of a
fluctuation-dissipation theorem (FDT), show that the super-diffusive behavior
is universal and impervious to the nature of cell-cell interactions.
Surprisingly, the theory also quantitatively accounts for the non-trivial
dynamics observed in simulations of a model soap foam characterized by creation
and destruction of spherical bubbles, which suggests that the two
non-equilibrium systems belong to the same universality class. The theoretical
prediction for the super-diffusion exponent is in excellent agreement with
simulations for collective motion of tumor cells and dynamics associated with
soap bubbles.
| 0 | 0 | 0 | 0 | 1 | 0 |
Statistical analysis of the first passage path ensemble of jump processes | The transition mechanism of jump processes between two different subsets in
state space reveals important dynamical information of the processes and
therefore has attracted considerable attention in the past years. In this
paper, we study the first passage path ensemble of both discrete-time and
continuous-time jump processes on a finite state space. The main approach is to
divide each first passage path into nonreactive and reactive segments and to
study them separately. The analysis can be applied to jump processes which are
non-ergodic, as well as continuous-time jump processes where the waiting time
distributions are non-exponential. In the particular case that the jump
processes are both Markovian and ergodic, our analysis elucidates the relations
between the study of the first passage paths and the study of the transition
paths in transition path theory. We provide algorithms to numerically compute
statistics of the first passage path ensemble. The computational complexity of
these algorithms scales with the complexity of solving a linear system, for
which efficient methods are available. Several examples demonstrate the wide
applicability of the derived results across research areas.
| 0 | 0 | 1 | 0 | 0 | 0 |
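The linear-system viewpoint mentioned above can be made concrete for the simplest statistic: for a discrete-time Markov jump process, the mean first-passage times into a target set B solve (I - Q) t = 1, where Q is the transition matrix restricted to states outside B. The three-state chain below is invented for illustration:

```python
import numpy as np

# Mean first-passage times into B = {2} for a toy 3-state Markov chain,
# obtained by solving one linear system over the complement of B.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
B = [2]                                          # target set
A = [0, 1]                                       # states outside B
Q = P[np.ix_(A, A)]                              # restricted transition matrix
t = np.linalg.solve(np.eye(len(A)) - Q, np.ones(len(A)))
print("mean first passage times from states 0, 1 into {2}:", t)
```

The cost is that of a single linear solve in the number of non-target states, matching the complexity statement in the abstract.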
The SLUGGS Survey: Dark matter fractions at large radii and assembly epochs of early-type galaxies from globular cluster kinematics | We use globular cluster kinematics data, primarily from the SLUGGS survey, to
measure the dark matter fraction ($f_{\rm DM}$) and the average dark matter
density ($\left< \rho_{\rm DM} \right>$) within the inner 5 effective radii
($R_{\rm e}$) for 32 nearby early--type galaxies (ETGs) with stellar mass log
$(M_*/\rm M_\odot)$ ranging from $10.1$ to $11.8$. We compare our results with
a simple galaxy model based on scaling relations as well as with cosmological
hydrodynamical simulations where the dark matter profile has been modified
through various physical processes.
We find a high $f_{\rm DM}$ ($\geq0.6$) within 5~$R_{\rm e}$ in most of our
sample, which we interpret as a signature of a late mass assembly history that
is largely devoid of gas-rich major mergers. However, around log $(M_*/M_\odot)
\sim 11$, there is a wide range of $f_{\rm DM}$ which may be challenging to
explain with any single cosmological model. We find tentative evidence that
lenticulars (S0s), unlike ellipticals, have mass distributions that are similar
to spiral galaxies, with decreasing $f_{\rm DM}$ within 5~$R_{\rm e}$ as galaxy
luminosity increases. However, we do not find any difference between the
$\left< \rho_{\rm DM} \right>$ of S0s and ellipticals in our sample, despite
the differences in their stellar populations. We have also used $\left<
\rho_{\rm DM} \right>$ to infer the epoch of halo assembly ($z{\sim}2-4$). By
comparing the age of their central stars with the inferred epoch of halo
formation, we are able to gain more insight into their mass assembly histories.
Our results suggest a fundamental difference in the dominant late-phase mass
assembly channel between lenticulars and elliptical galaxies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Wave packet dynamics of Bogoliubov quasiparticles: quantum metric effects | We study the dynamics of the Bogoliubov wave packet in superconductors and
calculate the supercurrent carried by the wave packet. We discover an anomalous
contribution to the supercurrent, related to the quantum metric of the Bloch
wave function. This anomalous contribution is most important for flat or
quasiflat bands, as exemplified by the attractive Hubbard models on the Creutz
ladder and sawtooth lattice. Our theoretical framework is general and can be
used to study a wide variety of phenomena, such as spin transport and exciton
transport.
| 0 | 1 | 0 | 0 | 0 | 0 |
The search for superheavy elements: Historical and philosophical perspectives | The heaviest of the transuranic elements known as superheavy elements (SHE)
are produced in nuclear reactions in a few specialized laboratories located in
Germany, the U.S., Russia, and Japan. The history of this branch of physical
science provides several case studies of interest to the philosophy and
sociology of modern science. The story of SHE illuminates the crucial notion of
what constitutes a chemical element, what the criteria are for discovering a
new element, and how an element is assigned a name. The story also casts light
on the sometimes uneasy relationship between physics and chemistry. It is far
from obvious that elements with Z > 110 exist in the same sense as oxygen or
sodium exists. The answers are not given by nature but by international
commissions responsible for the criteria and evaluation of discovery claims.
The works of these commissions and of SHE research in general have often been
controversial.
| 0 | 1 | 0 | 0 | 0 | 0 |
MuLoG, or How to apply Gaussian denoisers to multi-channel SAR speckle reduction? | Speckle reduction is a longstanding topic in synthetic aperture radar (SAR)
imaging. Since most current and planned SAR imaging satellites operate in
polarimetric, interferometric or tomographic modes, SAR images are
multi-channel and speckle reduction techniques must jointly process all
channels to recover polarimetric and interferometric information. The
distinctive nature of SAR signal (complex-valued, corrupted by multiplicative
fluctuations) calls for the development of specialized methods for speckle
reduction. Image denoising is a very active topic in image processing with a
wide variety of approaches and many denoising algorithms available, almost
always designed for additive Gaussian noise suppression. This paper proposes a
general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising),
to include such Gaussian denoisers within a multi-channel SAR speckle reduction
technique. A new family of speckle reduction algorithms can thus be obtained,
benefiting from the ongoing progress in Gaussian denoising, and offering
several speckle reduction results often displaying method-specific artifacts
that can be dismissed by comparison between results.
| 0 | 0 | 1 | 1 | 0 | 0 |
Radon Transform for Sheaves | We define the Radon transform functor for sheaves and prove that it is an
equivalence after suitable microlocal localizations. As a result, the sheaf
category associated to a Legendrian is invariant under the Radon transform. We
also manage to place the Radon transform and other transforms in microlocal
sheaf theory altogether in a diagram.
| 0 | 0 | 1 | 0 | 0 | 0 |
A type theory for synthetic $\infty$-categories | We propose foundations for a synthetic theory of $(\infty,1)$-categories
within homotopy type theory. We axiomatize a directed interval type, then
define higher simplices from it and use them to probe the internal categorical
structures of arbitrary types. We define Segal types, in which binary
composites exist uniquely up to homotopy; this automatically ensures
composition is coherently associative and unital at all dimensions. We define
Rezk types, in which the categorical isomorphisms are additionally equivalent
to the type-theoretic identities - a "local univalence" condition. And we
define covariant fibrations, which are type families varying functorially over
a Segal type, and prove a "dependent Yoneda lemma" that can be viewed as a
directed form of the usual elimination rule for identity types. We conclude by
studying homotopically correct adjunctions between Segal types, and showing
that for a functor between Rezk types to have an adjoint is a mere proposition.
To make the bookkeeping in such proofs manageable, we use a three-layered
type theory with shapes, whose contexts are extended by polytopes within
directed cubes, which can be abstracted over using "extension types" that
generalize the path-types of cubical type theory. In an appendix, we describe
the motivating semantics in the Reedy model structure on bisimplicial sets, in
which our Segal and Rezk types correspond to Segal spaces and complete Segal
spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
Strongly Hierarchical Factorization Machines and ANOVA Kernel Regression | High-order parametric models that include terms for feature interactions are
applied to various data mining tasks, where ground truth depends on
interactions of features. However, with sparse data, the high-dimensional
parameters for feature interactions often face three issues: expensive
computation, difficulty in parameter estimation and lack of structure. Previous
work has proposed approaches which can partially resolve the three issues. In
particular, models with factorized parameters (e.g. Factorization Machines) and
sparse learning algorithms (e.g. FTRL-Proximal) can tackle the first two issues
but fail to address the third. Regarding unstructured parameters,
constraints or complicated regularization terms are applied such that
hierarchical structures can be imposed. However, these methods make the
optimization problem more challenging. In this work, we propose Strongly
Hierarchical Factorization Machines and ANOVA kernel regression where all the
three issues can be addressed without making the optimization problem more
difficult. Experimental results show the proposed models significantly
outperform the state-of-the-art in two data mining tasks: cold-start user
response time prediction and stock volatility prediction.
| 1 | 0 | 0 | 0 | 0 | 0 |
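The factorized-parameter trick that makes such models cheap is the well-known degree-2 Factorization Machine / ANOVA-kernel identity, which evaluates all pairwise interactions in linear time (this is the standard FM building block, not the paper's full strongly hierarchical model):

```python
import numpy as np

# Degree-2 FM term: sum_{i<j} <v_i, v_j> x_i x_j computed in O(d*k)
# via 0.5 * (||V^T x||^2 - sum_{i,f} v_{if}^2 x_i^2).
def fm_pairwise(x, V):
    """Second-order FM term; V is the d x k factor matrix."""
    s = V.T @ x
    return 0.5 * float(s @ s - ((V ** 2).T @ (x ** 2)).sum())

rng = np.random.default_rng(5)
d, k = 6, 3
x = rng.normal(size=d)
V = rng.normal(size=(d, k))

# brute-force check against the explicit double sum over pairs
brute = sum(V[i] @ V[j] * x[i] * x[j] for i in range(d) for j in range(i + 1, d))
fast = fm_pairwise(x, V)
print(brute, fast)
```

The hierarchical structure discussed in the abstract is then imposed on top of parameterizations like this one.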
On the relationship of Mathematics to the real world | In this article, I discuss the relationship of mathematics to the physical
world, and to other spheres of human knowledge. In particular, I argue that
Mathematics is created by human beings, and the number $\pi$ cannot be said to
have existed $100,000$ years ago, using the conventional meaning of the word
`exist'.
| 0 | 1 | 1 | 0 | 0 | 0 |
$\mathsf{LLF}_{\cal P}$: a logical framework for modeling external evidence, side conditions, and proof irrelevance using monads | We extend the constructive dependent type theory of the Logical Framework
$\mathsf{LF}$ with monadic, dependent type constructors indexed with predicates
over judgements, called Locks. These monads capture various possible proof
attitudes in establishing the judgment of the object logic encoded by an
$\mathsf{LF}$ type. Standard examples are factoring-out the verification of a
constraint or delegating it to an external oracle, or supplying some
non-apodictic epistemic evidence, or simply discarding the proof witness of a
precondition deeming it irrelevant. This new framework, called Lax Logical
Framework, $\mathsf{LLF}_{\cal P}$, is a conservative extension of
$\mathsf{LF}$, and hence it is the appropriate metalanguage for dealing
formally with side-conditions in rules or external evidence in logical systems.
$\mathsf{LLF}_{\cal P}$ arises once the monadic nature of the lock
type-constructor, ${\cal L}^{\cal P}_{M,\sigma}[\cdot]$, introduced by the
authors in a series of papers, together with Marina Lenisa, is fully exploited.
The nature of the lock monads makes it possible to utilize the very Lock destructor,
${\cal U}^{\cal P}_{M,\sigma}[\cdot]$, in place of Moggi's monadic $let_T$,
thus simplifying the equational theory. The rules for ${\cal U}^{\cal
P}_{M,\sigma}[\cdot]$ also permit the removal of the monad once the constraint
is satisfied. We derive the meta-theory of $\mathsf{LLF}_{\cal P}$ by a novel
indirect method based on the encoding of $\mathsf{LLF}_{\cal P}$ in
$\mathsf{LF}$. We discuss encodings in $\mathsf{LLF}_{\cal P}$ of call-by-value
$\lambda$-calculi, Hoare's Logic, and Fitch-Prawitz Naive Set Theory.
| 1 | 0 | 0 | 0 | 0 | 0 |
Laplace equation for the Dirac, Euler and the harmonic oscillator | In this article, we give the explicit solutions to the Laplace equations
associated to the Dirac operator, Euler operator and the harmonic oscillator in
$\mathbb{R}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Quantifying the Reality Gap in Robotic Manipulation Tasks | We quantify the accuracy of various simulators compared to a real world
robotic reaching and interaction task. Simulators are used in robotics to
design solutions for real world hardware without the need for physical access.
The `reality gap' prevents solutions developed or learnt in simulation from
performing well, or at all, when transferred to real-world hardware. Making
use of a Kinova robotic manipulator and a motion capture system, we record a
ground truth enabling comparisons with various simulators, and present
quantitative data for various manipulation-oriented robotic tasks. We show the
relative strengths and weaknesses of numerous contemporary simulators,
highlighting areas of significant discrepancy, and assisting researchers in the
field in their selection of appropriate simulators for their use cases. All
code and parameter listings are publicly available from:
this https URL .
| 1 | 0 | 0 | 0 | 0 | 0 |
On the character degrees of a Sylow $p$-subgroup of a finite Chevalley group $G(p^f)$ over a bad prime | Let $q$ be a power of a prime $p$ and let $U(q)$ be a Sylow $p$-subgroup of a
finite Chevalley group $G(q)$ defined over the field with $q$ elements. We
first give a parametrization of the set $\text{Irr}(U(q))$ of irreducible
characters of $U(q)$ when $G(q)$ is of type $\mathrm{G}_2$. This is uniform for
primes $p \ge 5$, while the bad primes $p=2$ and $p=3$ have to be considered
separately. We then use this result and the contribution of several authors to
show a general result, namely that if $G(q)$ is any finite Chevalley group with
$p$ a bad prime, then there exists a character $\chi \in \text{Irr}(U(q))$ such
that $\chi(1)=q^n/p$ for some $n \in \mathbb{Z}_{\geq 0}$. In particular, for
each $G(q)$ and every bad prime $p$, we construct a family of characters of
such degree as inflation followed by an induction of linear characters of an
abelian subquotient $V(q)$ of $U(q)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Zero-Delay Rate Distortion via Filtering for Vector-Valued Gaussian Sources | We deal with zero-delay source coding of a vector-valued Gauss-Markov source
subject to a mean-squared error (MSE) fidelity criterion characterized by the
operational zero-delay vector-valued Gaussian rate distortion function (RDF).
We address this problem by considering the nonanticipative RDF (NRDF) which is
a lower bound to the causal optimal performance theoretically attainable (OPTA)
function and operational zero-delay RDF. We recall the realization that
corresponds to the optimal "test-channel" of the Gaussian NRDF, when
considering a vector Gauss-Markov source subject to a MSE distortion in the
finite time horizon. Then, we introduce sufficient conditions to show existence
of solution for this problem in the infinite time horizon. For the asymptotic
regime, we use the asymptotic characterization of the Gaussian NRDF to provide
a new equivalent realization scheme with feedback which is characterized by a
resource allocation (reverse-waterfilling) problem across the dimension of the
vector source. We leverage the new realization to derive a predictive coding
scheme via lattice quantization with subtractive dither and joint memoryless
entropy coding. This coding scheme offers an upper bound to the operational
zero-delay vector-valued Gaussian RDF. When we use scalar quantization, then
for "r" active dimensions of the vector Gauss-Markov source the gap between the
obtained lower and theoretical upper bounds is less than or equal to 0.254r + 1
bits/vector. We further show that when we use vector quantization and assume
infinite-dimensional Gauss-Markov sources, the previous gap becomes
negligible, i.e., the Gaussian NRDF approximates the operational
zero-delay Gaussian RDF. We also extend our results to vector-valued Gaussian
sources of any finite memory under mild conditions. Our theoretical framework
is demonstrated with illustrative numerical experiments.
| 1 | 0 | 0 | 0 | 0 | 0 |
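The reverse-waterfilling allocation underlying the resource-allocation step is the classic parallel-Gaussian computation: pick a water level $\theta$ so that the per-dimension distortions $D_i = \min(\theta, \sigma_i^2)$ sum to the target $D$; the rate is then $\tfrac{1}{2}\sum_i \log(\sigma_i^2/D_i)$, and the "active" dimensions are those with $\sigma_i^2 > \theta$. The variances below are illustrative:

```python
import numpy as np

# Classic reverse-waterfilling for a parallel Gaussian source, solved by
# bisection on the water level theta.
def reverse_waterfill(var, D, iters=100):
    lo, hi = 0.0, float(var.max())
    for _ in range(iters):
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, var).sum() > D:
            hi = theta
        else:
            lo = theta
    Di = np.minimum(theta, var)              # per-dimension distortions
    rate = 0.5 * np.sum(np.log(var / Di))    # nats per vector
    return theta, Di, rate

var = np.array([4.0, 2.0, 1.0, 0.25])        # source variances per dimension
theta, Di, rate = reverse_waterfill(var, D=1.5)
print(theta, Di, rate)                       # theta = 5/12; last dimension inactive
```

Here three of the four dimensions are active, matching the role the "r active dimensions" play in the gap bound of the abstract.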
Damping of gravitational waves by matter | We develop a unified description, via the Boltzmann equation, of damping of
gravitational waves by matter, incorporating collisions. We identify two
physically distinct damping mechanisms -- collisional and Landau damping. We
first consider damping in flat spacetime, and then generalize the results to
allow for cosmological expansion. In the first regime, maximal collisional
damping of a gravitational wave, independent of the details of the collisions
in the matter, is, as we show, significant only when its wavelength is
comparable to the size of the horizon. Thus damping by intergalactic or
interstellar matter for all but primordial gravitational radiation can be
neglected. Although collisions in matter lead to a shear viscosity, they also
act to erase anisotropic stresses, thus suppressing the damping of
gravitational waves. Damping of primordial gravitational waves remains
possible. We generalize Weinberg's calculation of gravitational wave damping,
now including collisions and particles of finite mass, and interpret the
collisionless limit in terms of Landau damping. While Landau damping of
gravitational waves cannot occur in flat spacetime, the expansion of the
universe allows such damping by spreading the frequency of a gravitational wave
of given wavevector.
| 0 | 1 | 0 | 0 | 0 | 0 |
Calibrations for minimal networks in a covering space setting | In this paper we define a notion of calibration for an equivalent approach to
the classical Steiner problem in a covering space setting and we give some
explicit examples. Moreover we introduce the notion of calibration in families:
the idea is to divide the set of competitors in a suitable way, defining an
appropriate (and weaker) notion of calibration. Then, calibrating the candidate
minimizers in each family and comparing their perimeter, it is possible to find
the minimizers of the minimization problem. Thanks to this procedure we prove
the minimality of the Steiner configurations spanning the vertices of a regular
hexagon and of a regular pentagon.
| 0 | 0 | 1 | 0 | 0 | 0 |
Universal 3D Wearable Fingerprint Targets: Advancing Fingerprint Reader Evaluations | We present the design and manufacturing of high fidelity universal 3D
fingerprint targets, which can be imaged on a variety of fingerprint sensing
technologies, namely capacitive, contact-optical, and contactless-optical.
Universal 3D fingerprint targets enable, for the first time, not only a
repeatable and controlled evaluation of fingerprint readers, but also the
ability to conduct fingerprint reader interoperability studies. Fingerprint
reader interoperability refers to how robust fingerprint recognition systems
are to variations in the images acquired by different types of fingerprint
readers. To build universal 3D fingerprint targets, we adopt a molding and
casting framework consisting of (i) digital mapping of fingerprint images to a
negative mold, (ii) CAD modeling a scaffolding system to hold the negative
mold, (iii) fabricating the mold and scaffolding system with a high resolution
3D printer, (iv) producing or mixing a material with similar electrical,
optical, and mechanical properties to that of the human finger, and (v)
fabricating a 3D fingerprint target using controlled casting. Our experiments
conducted with PIV and Appendix F certified optical (contact and contactless)
and capacitive fingerprint readers demonstrate the usefulness of universal 3D
fingerprint targets for controlled and repeatable fingerprint reader
evaluations and also fingerprint reader interoperability studies.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Velocity of the Propagating Wave for Spatially Coupled Systems with Applications to LDPC Codes | We consider the dynamics of message passing for spatially coupled codes and,
in particular, the set of density evolution equations that tracks the profile
of decoding errors along the spatial direction of coupling. It is known that,
for suitable boundary conditions and after a transient phase, the error profile
exhibits a "solitonic behavior". Namely, a uniquely-shaped wavelike solution
develops, that propagates with constant velocity. Under this assumption we
derive an analytical formula for the velocity in the framework of a continuum
limit of the spatially coupled system. The general formalism is developed for
spatially coupled low-density parity-check codes on general binary memoryless
symmetric channels which form the main system of interest in this work. We
apply the formula for special channels and illustrate that it matches the
direct numerical evaluation of the velocity for a wide range of noise values. A
possible application of the velocity formula to the evaluation of finite size
scaling law parameters is also discussed. We conduct a similar analysis for
general scalar systems and illustrate the findings with applications to
compressive sensing and generalized low-density parity-check codes on the
binary erasure or binary symmetric channels.
| 1 | 1 | 1 | 0 | 0 | 0 |
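The propagating-wave picture can be reproduced with a toy density-evolution recursion on the BEC for a spatially coupled regular (dv = 3, dc = 6) ensemble; the symmetric uniform coupling window and all parameters below are simplifications for illustration, not the paper's continuum-limit analysis:

```python
import numpy as np

# Coupled density evolution on the BEC: iterate the erasure profile,
# locate the decoding front, and estimate the wave velocity from its
# displacement per iteration.
dv, dc, w, eps, L = 3, 6, 3, 0.46, 200
ker = np.ones(w) / w
x = np.full(L, eps)

def avg(v):
    # uniform coupling window (np.convolve zero-pads at the edges)
    return np.convolve(v, ker, mode="same")

def de_step(x):
    f = 1 - (1 - avg(x)) ** (dc - 1)   # check-node (erasure) update
    y = eps * avg(f) ** (dv - 1)       # variable-node update
    y[:w] = 0.0                        # known bits terminate the left edge
    y[-w:] = eps                       # keep the right edge undecoded
    return y

def front(x):
    return int(np.sum(x < eps / 2))    # width of the decoded region

for _ in range(50):                    # transient phase: the wave shape forms
    x = de_step(x)
p0 = front(x)
for _ in range(100):
    x = de_step(x)
p1 = front(x)
velocity = (p1 - p0) / 100             # sections per iteration
print("estimated wave velocity:", velocity)
```

With eps chosen between the BP and MAP thresholds of the underlying ensemble, the uncoupled interior cannot decode by itself, yet the terminated boundary launches a front that travels at an essentially constant speed, which is the quantity the paper's formula predicts analytically.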
Voevodsky's conjecture for cubic fourfolds and Gushel-Mukai fourfolds via noncommutative K3 surfaces | In the first part of this paper we will prove Voevodsky's nilpotence
conjecture for smooth cubic fourfolds and ordinary generic Gushel-Mukai
fourfolds. Then, making use of noncommutative motives, we will prove
Voevodsky's nilpotence conjecture for generic Gushel-Mukai fourfolds containing
a $\tau$-plane $\mathbb{G}(2,3)$ and for ordinary Gushel-Mukai fourfolds containing a
quintic del Pezzo surface.
| 0 | 0 | 1 | 0 | 0 | 0 |
Projection-Free Bandit Convex Optimization | In this paper, we propose the first computationally efficient projection-free
algorithm for bandit convex optimization (BCO). We show that our algorithm
achieves a sublinear regret of $O(nT^{4/5})$ (where $T$ is the horizon and $n$
is the dimension) for any bounded convex functions with uniformly bounded
gradients. We also evaluate the performance of our algorithm against baselines
on both synthetic and real data sets for quadratic programming, portfolio
selection and matrix completion problems.
| 0 | 0 | 0 | 1 | 0 | 0 |
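A hedged sketch of the two ingredients, shown separately on a toy quadratic (this is not the paper's exact algorithm): (i) a one-point gradient estimator that uses only bandit, i.e. function-value, feedback, and (ii) a Frank-Wolfe style update that calls a linear-minimization oracle over the feasible set instead of ever projecting:

```python
import numpy as np

rng = np.random.default_rng(7)
n, delta, R = 4, 0.05, 1.0
f = lambda x: float(np.sum((x - 0.3) ** 2))   # loss, unknown to the learner
grad = lambda x: 2 * (x - 0.3)                # ground truth, for checking only

x = np.zeros(n)

# (i) one-point estimator: E[(n/delta) f(x + delta*u) u] ~ grad f(x)
U = rng.normal(size=(200_000, n))
U /= np.linalg.norm(U, axis=1, keepdims=True)   # u uniform on the unit sphere
fx = np.sum((x + delta * U - 0.3) ** 2, axis=1)
g_est = (n / delta) * np.mean(fx[:, None] * U, axis=0)

# (ii) projection-free step: linear minimization over the R-ball, then a
# convex combination -- the iterate stays feasible with no projection
v = -R * g_est / np.linalg.norm(g_est)
x_new = 0.9 * x + 0.1 * v
print(g_est, grad(x), f(x), f(x_new))
```

The linear-minimization oracle over a ball is trivial; the appeal of projection-free methods is that the same oracle stays cheap for sets (e.g. nuclear-norm balls in matrix completion) where projection is expensive.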
The Recommendation System to SNS Community for Tourists by Using Altruistic Behaviors | We have already developed a recommendation system for sightseeing
information on SNS using a smartphone-based user participatory sensing system.
The system posts information attractive to tourists to a specified Facebook
page through our smartphone application. Facebook users who are interested in
sightseeing can then gather through the information space from far and near.
However, the activities in such an SNS community are sustained only by a few
specific people, called hubs. We propose a method of vitalizing tourist
behaviors by giving a stimulus to these people. We developed a simulation
system for a multi-agent system with altruistic behaviors inspired by army
ants: an army ant takes feeding actions with altruistic behaviors, suppressing
selfish behavior toward an object shared by many users. In this paper, we
introduce altruistic behaviors, determined through simulation, to vitalize the
SNS community. The efficiency of the revitalization process is investigated
through experimental simulation results.
| 1 | 0 | 0 | 0 | 0 | 0 |
Constructive Stabilization and Pole Placement by Arbitrary Decentralized Architectures | A seminal result in decentralized control is the development of fixed modes
by Wang and Davison in 1973 - that plant modes which cannot be moved with a
static decentralized controller cannot be moved by a dynamic one either, and
that the other modes which can be moved can be shifted to any chosen location
with arbitrary precision. These results were developed for perfectly
decentralized, or block diagonal, information structure, where each control
input may only depend on a single corresponding measurement. Furthermore, the
results were claimed after a preliminary step was demonstrated, omitting a
rigorous induction for each of these results, and the remaining task is
nontrivial.
In this paper, we consider fixed modes for arbitrary information structures,
where certain control inputs may depend on some measurements but not others. We
provide a comprehensive proof that the modes which cannot be altered by a
static controller with the given structure cannot be moved by a dynamic one
either, and that the modes which can be altered by a static controller with the
given structure can be moved by a dynamic one to any chosen location with
arbitrary precision, thus generalizing and solidifying Wang and Davison's
results.
This shows that a system can be stabilized by a linear time-invariant
controller with the given information structure as long as all of the modes
which are fixed with respect to that structure are in the left half-plane; an
algorithm for synthesizing such a stabilizing decentralized controller is then
distilled from the proof.
| 1 | 0 | 0 | 0 | 0 | 0 |
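The fixed-mode test at the heart of this story admits a simple numerical sketch: a mode of A is fixed with respect to an information structure if it remains an eigenvalue of A + BKC for every static gain K with that structure, and generically a few random structured gains suffice to detect this. The system below is a toy example (here the fixed mode arises because one state is neither sensed nor actuated; the same test detects structure-induced fixed modes):

```python
import numpy as np

rng = np.random.default_rng(8)

def fixed_modes(A, B, C, mask, trials=5, tol=1e-6):
    """Eigenvalues of A surviving A + B K C for random K with pattern `mask`."""
    modes = list(np.linalg.eigvals(A))
    for _ in range(trials):
        K = mask * rng.normal(size=mask.shape)
        eigs = np.linalg.eigvals(A + B @ K @ C)
        modes = [m for m in modes if np.min(np.abs(eigs - m)) < tol]
    return modes

A = np.diag([1.0, -2.0, 3.0])
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])   # inputs act on states 1, 3
C = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])     # outputs sense states 1, 3
mask = np.eye(2)            # decentralized: channel i may only use output i
fm = fixed_modes(A, B, C, mask)
print("fixed modes:", fm)   # only the mode at -2 survives
```

By the result discussed above, the movable modes (here 1 and 3) can then be shifted with arbitrary precision by a dynamic controller respecting the same structure, while the fixed mode at -2 cannot.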
Hyperpolarizability and operational magic wavelength in an optical lattice clock | Optical clocks benefit from tight atomic confinement enabling extended
interrogation times as well as Doppler- and recoil-free operation. However,
these benefits come at the cost of frequency shifts that, if not properly
controlled, may degrade clock accuracy. Numerous theoretical studies have
predicted optical lattice clock frequency shifts that scale nonlinearly with
trap depth. To experimentally observe and constrain these shifts in an
$^{171}$Yb optical lattice clock, we construct a lattice enhancement cavity
that exaggerates the light shifts. We observe an atomic temperature that is
proportional to the optical trap depth, fundamentally altering the scaling of
trap-induced light shifts and simplifying their parametrization. We identify an
"operational" magic wavelength where frequency shifts are insensitive to
changes in trap depth. These measurements and scaling analysis constitute an
essential systematic characterization for clock operation at the $10^{-18}$
level and beyond.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Unified Approach to Configuration-based Dynamic Analysis of Quadcopters for Optimal Stability | A special type of rotary-wing Unmanned Aerial Vehicles (UAV), called
Quadcopter have prevailed to the civilian use for the past decade. They have
gained significant amount of attention within the UAV community for their
redundancy and ease of control, despite the fact that they fall under an
under-actuated system category. They come in a variety of configurations. The
"+" and "x" configurations were introduced first. Literature pertinent to these
two configurations is vast. However, in this paper, we define 6 additional
possible configurations for a Quadcopter that can be built under either "+" or
"x" setup. These configurations can be achieved by changing the angle that the
axis of rotation for rotors make with the main body, i.e., fuselage. This would
also change the location of the COM with respect to the propellers which can
add to the overall stability. A comprehensive dynamic model for all these
configurations is developed for the first time. The overall stability for these
configurations are addressed. In particular, it is shown that one configuration
can lead to the most statically-stable platform by adopting damping motion in
Roll/Pitch/Yaw, which is described for the first time to the best of our
knowledge.
| 1 | 0 | 0 | 0 | 0 | 0 |
Improving Vision-based Self-positioning in Intelligent Transportation Systems via Integrated Lane and Vehicle Detection | Traffic congestion is a widespread problem. Dynamic traffic routing systems
and congestion pricing are gaining importance in recent research. Lane prediction and vehicle density estimation are important components of such
systems. We introduce a novel problem of vehicle self-positioning which
involves predicting the number of lanes on the road and vehicle's position in
those lanes using videos captured by a dashboard camera. We propose an
integrated closed-loop approach where we use the presence of vehicles to aid
the task of self-positioning and vice-versa. To incorporate multiple factors
and high-level semantic knowledge into the solution, we formulate this problem
as a Bayesian framework. In the framework, the number of lanes, the vehicle's
position in those lanes and the presence of other vehicles are considered as
parameters. We also propose a bounding box selection scheme to reduce the
number of false detections and increase the computational efficiency. We show
that the number of box proposals decreases by a factor of 6 using the selection
approach. It also results in a large reduction in the number of false detections.
The entire approach is tested on real-world videos and is found to give
acceptable results.
| 1 | 0 | 0 | 0 | 0 | 0 |
BPS Algebras, Genus Zero, and the Heterotic Monster | In this note, we expand on some technical issues raised in \cite{PPV} by the
authors, as well as providing a friendly introduction to and summary of our
previous work. We construct a set of heterotic string compactifications to 0+1
dimensions intimately related to the Monstrous moonshine module of Frenkel,
Lepowsky, and Meurman (and orbifolds thereof). Using this model, we review our
physical interpretation of the genus zero property of Monstrous moonshine.
Furthermore, we show that the space of (second-quantized) BPS-states forms a
module over the Monstrous Lie algebras $\mathfrak{m}_g$---some of the first and
most prominent examples of Generalized Kac-Moody algebras---constructed by
Borcherds and Carnahan. In particular, we clarify the structure of the module
present in the second-quantized string theory. We also sketch a proof of our
methods in the language of vertex operator algebras, for the interested
mathematician.
| 0 | 0 | 1 | 0 | 0 | 0 |
Exploiting Nontrivial Connectivity for Automatic Speech Recognition | Nontrivial connectivity has allowed the training of very deep networks by
addressing the problem of vanishing gradients and offering a more efficient
method of reusing parameters. In this paper we make a comparison between
residual networks, densely-connected networks and highway networks on an image
classification task. Next, we show that these methodologies can easily be
deployed in automatic speech recognition and provide significant improvements
to existing models.
| 1 | 0 | 0 | 1 | 0 | 0 |
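The skip connections compared in the abstract above can be illustrated with a minimal sketch, contrasting a plain rectifier layer with a residual one (plain NumPy; the layer sizes and the single-matrix parameterization are illustrative assumptions, not the paper's actual ASR or image models):

```python
# Minimal sketch: plain vs. residual (skip-connection) layer.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def plain_layer(x, W):
    # conventional layer: the output depends only on the transformed input
    return relu(x @ W)

def residual_layer(x, W):
    # residual connection: the input is added back, easing gradient flow
    # through deep stacks and reusing earlier representations
    return relu(x @ W) + x

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))        # batch of 4 feature vectors
W = rng.normal(size=(8, 8)) * 0.1  # small random weights
y_res = residual_layer(x, W)
```

Note that with all-zero weights the residual layer reduces to the identity, which is what makes very deep stacks of such layers trainable in practice.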
A unified treatment of multiple testing with prior knowledge using the p-filter | There is a significant literature on methods for incorporating knowledge into
multiple testing procedures so as to improve their power and precision. Some
common forms of prior knowledge include (a) beliefs about which hypotheses are
null, modeled by non-uniform prior weights; (b) differing importances of
hypotheses, modeled by differing penalties for false discoveries; (c) multiple
arbitrary partitions of the hypotheses into (possibly overlapping) groups; and
(d) knowledge of independence, positive or arbitrary dependence between
hypotheses or groups, suggesting the use of more aggressive or conservative
procedures. We present a unified algorithmic framework called p-filter for
global null testing and false discovery rate (FDR) control that allows the
scientist to incorporate all four types of prior knowledge (a)-(d)
simultaneously, recovering a variety of known algorithms as special cases.
| 0 | 0 | 1 | 1 | 0 | 0 |
The flip Markov chain for connected regular graphs | Mahlmann and Schindelhauer (2005) defined a Markov chain which they called
$k$-Flipper, and showed that it is irreducible on the set of all connected
regular graphs of a given degree (at least 3). We study the 1-Flipper chain,
which we call the flip chain, and prove that the flip chain converges rapidly
to the uniform distribution over connected $2r$-regular graphs with $n$
vertices, where $n\geq 8$ and $r = r(n)\geq 2$. Formally, we prove that the
distribution of the flip chain will be within $\varepsilon$ of uniform in total
variation distance after $\text{poly}(n,r,\log(\varepsilon^{-1}))$ steps. This
polynomial upper bound on the mixing time is given explicitly, and improves
markedly on a previous bound given by Feder et al. (2006). We achieve this
improvement by using a direct two-stage canonical path construction, which we
define in a general setting.
This work has applications to decentralised networks based on random regular
connected graphs of even degree, as a self-stabilising protocol in which nodes
spontaneously perform random flips in order to repair the network.
| 1 | 0 | 1 | 0 | 0 | 0 |
FPGA-based ORB Feature Extraction for Real-Time Visual SLAM | Simultaneous Localization And Mapping (SLAM) is the problem of constructing
or updating a map of an unknown environment while simultaneously keeping track
of an agent's location within it. How to enable SLAM robustly and durably on
mobile, or even IoT-grade, devices is the main challenge faced by the industry
today. The main problems we need to address are: 1.) how to accelerate the SLAM
pipeline to meet real-time requirements; and 2.) how to reduce SLAM energy
consumption to extend battery life. After delving into the problem, we found
out that feature extraction is indeed the bottleneck of performance and energy
consumption. Hence, in this paper, we design, implement, and evaluate a
hardware ORB feature extractor and show that our design strikes a good balance between performance and energy consumption compared with ARM Krait and Intel Core i5 implementations.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the Transformation of Latent Space in Autoencoders | Noting the importance of the latent variables in inference and learning, we
propose a novel framework for autoencoders based on the homeomorphic
transformation of latent variables --- which could reduce the distance between
vectors in the transformed space, while preserving the topological properties
of the original space --- and investigate the effect of the transformation in
both learning generative models and denoising corrupted data. The results of
our experiments show that the proposed model can work as both a generative
model and a denoising model with improved performance due to the transformation
compared to conventional variational and denoising autoencoders.
| 1 | 0 | 0 | 1 | 0 | 0 |
Log-Convexity of Weighted Area Integral Means of $H^p$ Functions on the Upper Half-plane | In the present work, weighted area integral means $M_{p,\varphi}(f;{\mathrm
{Im}}z)$ are studied and it is proved that the function $y\to \log
M_{p,\varphi}(f;y)$ is convex in the case when $f$ belongs to a Hardy space on
the upper half-plane.
| 0 | 0 | 1 | 0 | 0 | 0 |
From Nodal Chain Semimetal To Weyl Semimetal in HfC | Based on first-principles calculations and effective model analysis, we
propose that the WC-type HfC, in the absence of spin-orbit coupling (SOC), can
host a three-dimensional nodal chain semimetal state. In contrast to the previously reported material IrF4 [T. Bzdusek et al., Nature 538, 75 (2016)], the nodal
chain here is protected by mirror reflection symmetries of a simple space
group, while in IrF4 the nonsymmorphic space group with a glide plane is a
necessity. Moreover, in the presence of SOC, the nodal chain in WC-type HfC
evolves into Weyl points. In the Brillouin zone, a total of 30 pairs of Weyl
points in three types are obtained through the first-principles calculations.
Besides, the surface states and the pattern of the surface Fermi arcs
connecting these Weyl points are studied, which may be measured by future
experiments.
| 0 | 1 | 0 | 0 | 0 | 0 |
Protein and hydration-water dynamics are decoupled: A new model connecting dynamics and biochemical function is required | Water plays a major role in bio-systems, greatly contributing to determine
their structure, stability and even function. It is well known, for instance,
that proteins require a minimum amount of water to be functionally active.
Since the biological functions of proteins involve changes of conformation, and
sometimes chemical reactions, it is natural to expect a connection of these
functions with dynamical properties of the coupled system of proteins and their
hydration water. However, despite many years of intensive research, the
detailed nature of protein - hydration water interactions, and their effect on
the biochemical activity of proteins through peculiar dynamical effects, is
still partly unknown. In particular, models proposed so far fail to explain
the full set of experimental data. The well-accepted 'protein dynamical
transition' scenario is based on perfect coupling between the dynamics of
proteins and of their hydration water, which has never been confuted
experimentally. We present high-energy resolution elastic neutron scattering
measurements of the atomistic dynamics of the model protein, lysozyme, in water
that were carried out on the IN16B spectrometer at the Institut Laue-Langevin
in Grenoble, France. These show for the first time that the dynamics of
proteins and of their hydration water are actually decoupled. This important
result militates against the well-accepted scenario, and requires a new model
to link protein dynamics to the dynamics of its hydration water, and, in turn,
to biochemical function.
| 0 | 1 | 0 | 0 | 0 | 0 |
Lakshmibai-Seshadri paths for hyperbolic Kac-Moody algebras of rank $2$ | Let $\mathfrak{g}$ be a hyperbolic Kac-Moody algebra of rank $2$, and set
$\lambda: = \Lambda_1 - \Lambda_2$, where $\Lambda_1, \Lambda_2$ are the
fundamental weights for $\mathfrak{g}$; note that $\lambda$ is neither dominant
nor antidominant. Let $\mathbb{B}(\lambda)$ be the crystal of all
Lakshmibai-Seshadri paths of shape $\lambda$. We prove that (the crystal graph
of) $\mathbb{B}(\lambda)$ is connected. Furthermore, we give an explicit
description of Lakshmibai-Seshadri paths of shape $\lambda$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Beyond Worst-case: A Probabilistic Analysis of Affine Policies in Dynamic Optimization | Affine policies (or control) are widely used as a solution approach in
dynamic optimization where computing an optimal adjustable solution is usually
intractable. While the worst-case performance of affine policies can be quite poor, the empirical performance is observed to be near-optimal for
a large class of problem instances. For instance, in the two-stage dynamic
robust optimization problem with linear covering constraints and uncertain
right hand side, the worst-case approximation bound for affine policies is
$O(\sqrt m)$ that is also tight (see Bertsimas and Goyal (2012)), whereas
observed empirical performance is near-optimal. In this paper, we aim to
address this stark-contrast between the worst-case and the empirical
performance of affine policies. In particular, we show that affine policies
give a good approximation for the two-stage adjustable robust optimization
problem with high probability on random instances where the constraint
coefficients are generated i.i.d. from a large class of distributions; thereby,
providing a theoretical justification of the observed empirical performance. On
the other hand, we also present a distribution such that the performance bound
for affine policies on instances generated according to that distribution is
$\Omega(\sqrt m)$ with high probability; however, the constraint coefficients
are not i.i.d. This demonstrates that the empirical performance of affine
policies can depend on the generative model for instances.
| 0 | 0 | 1 | 0 | 0 | 0 |
Hausdorff operators on modulation and Wiener amalgam spaces | We give the sharp conditions for boundedness of Hausdorff operators on
certain modulation and Wiener amalgam spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
From Deep to Shallow: Transformations of Deep Rectifier Networks | In this paper, we introduce transformations of deep rectifier networks,
enabling the conversion of deep rectifier networks into shallow rectifier
networks. We subsequently prove that any rectifier net of any depth can be
represented by a maximum of a finite number of functions, each of which can be realized by a shallow network with a single hidden layer. The transformations of both deep
rectifier nets and deep residual nets are conducted to demonstrate the
advantages of the residual nets over the conventional neural nets and the
advantages of the deep neural nets over the shallow neural nets. In summary,
for two rectifier nets with different depths but with the same total number of
hidden units, the corresponding single hidden layer representation of the
deeper net is much more complex than the corresponding single hidden layer representation of the shallower net. Similarly, for a residual net and a
conventional rectifier net with the same structure except for the skip
connections in the residual net, the corresponding single hidden layer
representation of the residual net is much more complex than the corresponding
single hidden layer representation of the conventional net.
| 1 | 0 | 0 | 1 | 0 | 0 |
Second-order Convolutional Neural Networks | Convolutional Neural Networks (CNNs) have been successfully applied to many
computer vision tasks, such as image classification. By performing linear
combinations and element-wise nonlinear operations, these networks can be
thought of as extracting solely first-order information from an input image. In
the past, however, second-order statistics computed from handcrafted features,
e.g., covariances, have proven highly effective in diverse recognition tasks.
In this paper, we introduce a novel class of CNNs that exploit second-order
statistics. To this end, we design a series of new layers that (i) extract a
covariance matrix from convolutional activations, (ii) compute a parametric,
second-order transformation of a matrix, and (iii) perform a parametric
vectorization of a matrix. These operations can be assembled to form a
Covariance Descriptor Unit (CDU), which replaces the fully-connected layers of
standard CNNs. Our experiments demonstrate the benefits of our new
architecture, which outperforms first-order CNNs while relying on up to 90%
fewer parameters.
| 1 | 0 | 0 | 0 | 0 | 0 |
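Layer (i) of the abstract above, extracting a covariance matrix from convolutional activations, can be sketched as follows (a NumPy illustration assuming a C x H x W activation layout, with the H*W spatial positions treated as observations; the parametric transformation (ii) and vectorization (iii) layers are omitted):

```python
# Sketch of a covariance layer over convolutional activations.
import numpy as np

def activation_covariance(feat):
    # feat: (C, H, W) block of activations for one input
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)              # each column: one spatial position
    x = x - x.mean(axis=1, keepdims=True)   # center over spatial positions
    return (x @ x.T) / (h * w - 1)          # (C, C) covariance descriptor

rng = np.random.default_rng(0)
cov = activation_covariance(rng.normal(size=(16, 8, 8)))
```

The result is a symmetric positive semi-definite C x C matrix, which is what makes the subsequent parametric second-order transformation well defined.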
On the quasi-sure superhedging duality with frictions | We prove the superhedging duality for a discrete-time financial market with
proportional transaction costs under portfolio constraints and model
uncertainty. Frictions are modeled through solvency cones as in the original
model of [Kabanov, Y., Hedging and liquidation under transaction costs in
currency markets. Fin. Stoch., 3(2):237-248, 1999] adapted to the quasi-sure
setup of [Bouchard, B. and Nutz, M., Arbitrage and duality in nondominated
discrete-time models. Ann. Appl. Probab., 25(2):823-859, 2015]. Our results
hold under the condition of No Strict Arbitrage and under the efficient
friction hypothesis.
| 0 | 0 | 0 | 0 | 0 | 1 |
The Gaia-ESO Survey: low-alpha element stars in the Galactic Bulge | We take advantage of the Gaia-ESO Survey iDR4 bulge data to search for
abundance anomalies that could shed light on the composite nature of the Milky
Way bulge. The alpha-elements (Mg, Si, and whenever available, Ca) abundances,
and their trends with Fe abundances have been analysed for a total of 776 bulge
stars. In addition, the aluminum abundances and their ratio to Fe and Mg have
also been examined. Our analysis reveals the existence of low-alpha element
abundance stars with respect to the standard bulge sequence in the [alpha/Fe]
vs. [Fe/H] plane. Eighteen objects present deviations in [alpha/Fe] ranging from 2.1
to 5.3 sigma with respect to the median standard value. Those stars do not show
Mg-Al anti-correlation patterns. Incidentally, this sign of the existence of
multiple stellar populations is reported firmly for the first time for the
bulge globular cluster NGC 6522. The identified low-alpha abundance stars have
chemical patterns compatible with those of the thin disc. Their link with the accretion of massive dwarf galaxies seems unlikely, as larger deviations in alpha abundance and Al would be expected. The vision of a composite bulge nature and a complex formation process is reinforced by our results. The approach used, a multi-method and model-driven analysis of high-resolution data, seems crucial to revealing this complexity.
| 0 | 1 | 0 | 0 | 0 | 0 |
On Kedlaya type inequalities for weighted means | In 2016 we proved that for every symmetric, repetition invariant and Jensen
concave mean $\mathscr{M}$ the Kedlaya-type inequality $$
\mathscr{A}\big(x_1,\mathscr{M}(x_1,x_2),\ldots,\mathscr{M}(x_1,\ldots,x_n)\big)\le
\mathscr{M} \big(x_1,
\mathscr{A}(x_1,x_2),\ldots,\mathscr{A}(x_1,\ldots,x_n)\big) $$ holds for an
arbitrary $(x_n)$ ($\mathscr{A}$ stands for the arithmetic mean). We are going
to prove the weighted counterpart of this inequality. More precisely, if
$(x_n)$ is a vector with corresponding (non-normalized) weights $(\lambda_n)$
and $\mathscr{M}_{i=1}^n(x_i,\lambda_i)$ denotes the weighted mean then, under
analogous conditions on $\mathscr{M}$, the inequality $$ \mathscr{A}_{i=1}^n
\big(\mathscr{M}_{j=1}^i (x_j,\lambda_j),\:\lambda_i\big) \le
\mathscr{M}_{i=1}^n \big(\mathscr{A}_{j=1}^i (x_j,\lambda_j),\:\lambda_i\big)
$$ holds for every $(x_n)$ and $(\lambda_n)$ such that the sequence
$(\frac{\lambda_k}{\lambda_1+\cdots+\lambda_k})$ is decreasing.
| 0 | 0 | 1 | 0 | 0 | 0 |
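For a concrete instance of the unweighted inequality above, take $\mathscr{M}$ to be the geometric mean (symmetric, repetition invariant and Jensen concave). The statement can then be checked numerically (a sanity-check sketch in Python; this illustrates the inequality, it is not a proof):

```python
# Numerical check of the Kedlaya-type inequality with M = geometric mean:
# A(x1, G(x1,x2), ..., G(x1..xn)) <= G(x1, A(x1,x2), ..., A(x1..xn)).
import math
import random

def arith(xs):
    return sum(xs) / len(xs)

def geom(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def kedlaya_holds(xs, tol=1e-12):
    # prefix means: the i-th entry uses x1, ..., xi
    lhs = arith([geom(xs[:i]) for i in range(1, len(xs) + 1)])
    rhs = geom([arith(xs[:i]) for i in range(1, len(xs) + 1)])
    return lhs <= rhs + tol

random.seed(1)
trials = [[random.uniform(0.1, 10.0) for _ in range(8)] for _ in range(200)]
```

For example, with x = (1, 4) the left side is (1 + 2)/2 = 1.5 and the right side is sqrt(1 * 2.5) which is about 1.58, so the inequality holds.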
Mesh-free Semi-Lagrangian Methods for Transport on a Sphere Using Radial Basis Functions | We present three new semi-Lagrangian methods based on radial basis function
(RBF) interpolation for numerically simulating transport on a sphere. The
methods are mesh-free and are formulated entirely in Cartesian coordinates,
thus avoiding any irregular clustering of nodes at artificial boundaries on the
sphere and naturally bypassing any apparent artificial singularities associated
with surface-based coordinate systems. For problems involving tracer transport
in a given velocity field, the semi-Lagrangian framework allows these new
methods to avoid the use of any stabilization terms (such as hyperviscosity)
during time-integration, thus reducing the number of parameters that have to be
tuned. The three new methods are based on interpolation using 1) global RBFs,
2) local RBF stencils, and 3) RBF partition of unity. For the latter two of
these methods, we find that it is crucial to include some low degree spherical
harmonics in the interpolants. Standard test cases consisting of solid body
rotation and deformational flow are used to compare and contrast the methods in
terms of their accuracy, efficiency, conservation properties, and
dissipation/dispersion errors. For global RBFs, spectral spatial convergence is
observed for smooth solutions on quasi-uniform nodes, while high-order accuracy
is observed for the local RBF stencil and partition of unity approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
Spatial Models with the Integrated Nested Laplace Approximation within Markov Chain Monte Carlo | The Integrated Nested Laplace Approximation (INLA) is a convenient way to
obtain approximations to the posterior marginals for parameters in Bayesian
hierarchical models when the latent effects can be expressed as a Gaussian
Markov Random Field (GMRF). In addition, its implementation in the R-INLA
package for the R statistical software provides an easy way to fit models using
INLA in practice. R-INLA implements a number of widely used latent models,
including several spatial models. In addition, R-INLA can fit models in a
fraction of the time that other computationally intensive methods (e.g., Markov chain Monte Carlo) take to fit the same model.
Although INLA provides a fast approximation to the marginals of the model
parameters, it is difficult to use it with models not implemented in R-INLA. It
is also difficult to make multivariate posterior inference on the parameters of
the model as INLA focuses on the posterior marginals and not the joint
posterior distribution.
In this paper we describe how to use INLA within the Metropolis-Hastings
algorithm to fit spatial models and estimate the joint posterior distribution
of a reduced number of parameters. We will illustrate the benefits of this new
method with two examples on spatial econometrics and disease mapping where
complex spatial models with several spatial structures need to be fitted.
| 0 | 0 | 0 | 1 | 0 | 0 |
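The outer Metropolis-Hastings loop that the abstract above embeds INLA in can be sketched generically; here the conditional log-posterior that R-INLA would supply for the reduced set of parameters is replaced by a toy closed-form Gaussian, purely for illustration:

```python
# Generic random-walk Metropolis-Hastings over a single parameter.
# log_post stands in for the conditional log-posterior a fitted INLA
# model would provide at each proposed value.
import math
import random

def metropolis_hastings(log_post, theta0, n_iter, step, rng):
    samples, theta = [], theta0
    lp = log_post(theta)
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)   # random-walk proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop         # accept
        samples.append(theta)
    return samples

rng = random.Random(42)
target = lambda t: -0.5 * (t - 2.0) ** 2      # N(2, 1) up to a constant
draws = metropolis_hastings(target, 0.0, 20000, 1.0, rng)
```

In the paper's setting each evaluation of `log_post` would involve fitting a conditional model with INLA, so the reduced number of parameters sampled this way keeps the scheme affordable.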
Prediction-Constrained Topic Models for Antidepressant Recommendation | Supervisory signals can help topic models discover low-dimensional data
representations that are more interpretable for clinical tasks. We propose a
framework for training supervised latent Dirichlet allocation that balances two
goals: faithful generative explanations of high-dimensional data and accurate
prediction of associated class labels. Existing approaches fail to balance
these goals by not properly handling a fundamental asymmetry: the intended task
is always predicting labels from data, not data from labels. Our new
prediction-constrained objective trains models that predict labels from heldout
data well while also producing good generative likelihoods and interpretable
topic-word parameters. In a case study on predicting depression medications
from electronic health records, we demonstrate improved recommendations
compared to previous supervised topic models and high-dimensional logistic
regression from words alone.
| 1 | 0 | 0 | 1 | 0 | 0 |
Casimir-Polder force fluctuations as spatial probes of dissipation in metals | We study the spatial fluctuations of the Casimir-Polder force experienced by
an atom or a small sphere moved above a metallic plate at fixed separation
distance. We demonstrate that unlike the mean force, the magnitude of these
fluctuations crucially relies on the relaxation of conduction electrons in the
metallic bulk, and even achieves values that differ by orders of magnitude
depending on the amount of dissipation. We also discover that fluctuations
suffer a spectacular decrease at large distances in the case of nonzero
temperature.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Unifying View of Explicit and Implicit Feature Maps for Structured Data: Systematic Studies of Graph Kernels | Non-linear kernel methods can be approximated by fast linear ones using
suitable explicit feature maps allowing their application to large scale
problems. To this end, explicit feature maps of kernels for vectorial data have
been extensively studied. As much real-world data is structured, various
kernels for complex data like graphs have been proposed. Indeed, many of them
directly compute feature maps. However, the kernel trick is employed when the
number of features is very large or the individual vertices of graphs are
annotated by real-valued attributes.
Can we still compute explicit feature maps efficiently under these
circumstances? Triggered by this question, we investigate how general
convolution kernels are composed from base kernels and construct corresponding
feature maps. We apply our results to widely used graph kernels and analyze for
which kernels and graph properties computation by explicit feature maps is
feasible and actually more efficient. In particular, we derive feature maps for
random walk and subgraph matching kernels and apply them to real-world graphs
with discrete labels. Thereby, our theoretical results are confirmed
experimentally by observing a phase transition when comparing running time with
respect to label diversity, walk lengths and subgraph size, respectively.
Moreover, we derive approximate, explicit feature maps for state-of-the-art
kernels supporting real-valued attributes including the GraphHopper and Graph
Invariant kernels. In extensive experiments we show that our approaches often
achieve a classification accuracy close to the exact methods based on the
kernel trick, but require only a fraction of their running time.
| 1 | 0 | 0 | 1 | 0 | 0 |
Predicting Adolescent Suicide Attempts with Neural Networks | Though suicide is a major public health problem in the US, machine learning
methods are not commonly used to predict an individual's risk of
attempting/committing suicide. In the present work, starting with an anonymized
collection of electronic health records for 522,056 unique, California-resident
adolescents, we develop neural network models to predict suicide attempts. We
frame the problem as a binary classification problem in which we use a
patient's data from 2006-2009 to predict either the presence (1) or absence (0)
of a suicide attempt in 2010. After addressing issues such as severely
imbalanced classes and the variable length of a patient's history, we build
neural networks with depths varying from two to eight hidden layers. For test
set observations where we have at least five ED/hospital visits' worth of data
on a patient, our depth-4 model achieves a sensitivity of 0.703, specificity of
0.980, and AUC of 0.958.
| 1 | 0 | 0 | 1 | 0 | 0 |
Designing RNA Secondary Structures is Hard | An RNA sequence is a word over an alphabet on four elements $\{A,C,G,U\}$
called bases. RNA sequences fold into secondary structures where some bases
match one another while others remain unpaired. Pseudoknot-free secondary
structures can be represented as well-parenthesized expressions with additional
dots, where pairs of matching parentheses symbolize paired bases and dots,
unpaired bases. The two fundamental problems in RNA algorithmics are to predict
how sequences fold within some model of energy and to design sequences of bases
which will fold into targeted secondary structures. Predicting how a given RNA
sequence folds into a pseudoknot-free secondary structure is known to be
solvable in cubic time since the eighties and in truly subcubic time by a
recent result of Bringmann et al. (FOCS 2016). As a stark contrast, it is
unknown whether or not designing a given RNA secondary structure is a tractable
task; this has been raised as a challenging open question by Anne Condon (ICALP
2003). Because of its crucial importance in a number of fields such as
pharmaceutical research and biochemistry, there are dozens of heuristics and
software libraries dedicated to RNA secondary structure design. It is therefore
rather surprising that the computational complexity of this central problem in
bioinformatics has been unsettled for decades.
In this paper we show that, in the simplest model of energy, which is the Watson-Crick model, the design of secondary structures is NP-complete if one
adds natural constraints of the form: index $i$ of the sequence has to be
labeled by base $b$. This negative result suggests that the same lower bound
holds for more realistic models of energy. It is noteworthy that the additional
constraints are by no means artificial: they are supported by all RNA design software packages and they do correspond to actual practice.
| 1 | 0 | 0 | 0 | 0 | 0 |
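The cubic-time prediction side mentioned in the abstract above can be illustrated by the classical Nussinov base-pair-maximization recurrence (a simplified stand-in for the Watson-Crick energy model; the minimum loop length of one unpaired base is an assumption of this sketch):

```python
# Nussinov dynamic program: maximize the number of nested base pairs.
# dp[i][j] = max pairs achievable in seq[i..j].
PAIRS = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C"), ("G", "U"), ("U", "G")}

def nussinov_max_pairs(seq, min_loop=1):
    n = len(seq)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):               # j - i = span
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                       # base i left unpaired
            for k in range(i + min_loop + 1, j + 1):  # base i paired with k
                if (seq[i], seq[k]) in PAIRS:
                    left = dp[i + 1][k - 1] if k - 1 >= i + 1 else 0
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1]
```

For example, the sequence GGGAAACCC folds into a hairpin with three nested G-C pairs. The design problem studied in the paper runs in the opposite direction: finding a sequence whose optimal folding matches a target structure.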
Towards Understanding the Impact of Human Mobility on Police Allocation | Motivated by recent findings that human mobility is proxy for crime behavior
in big cities and that there is a superlinear relationship between the people's
movement and crime, this article aims to evaluate the impact of how these
findings influence police allocation. More precisely, we shed light on the
differences between an allocation strategy, in which the resources are
distributed by clusters of floating population, and conventional allocation
strategies, in which the police resources are distributed by an Administrative
Area (typically based on resident population). We observed a substantial
difference in the distributions of police resources allocated following these
strategies, which evidences the imprecision of conventional police allocation
methods.
| 1 | 1 | 0 | 0 | 0 | 0 |
Normality and Related Properties of Forcing Algebras | We present a sufficient condition for irreducibility of forcing algebras and
study the (non)-reducedness phenomenon. Furthermore, we prove a criterion for
normality for forcing algebras over a polynomial base ring with coefficients in
a perfect field. This gives a geometrical normality criterion for algebraic
(forcing) varieties over algebraically closed fields. Besides, we examine in
detail a specific (enlightening) example with several forcing equations.
Finally, we compute explicitly the normalization of a particular forcing
algebra by means of finding explicitly the generators of the ideal defining it
as an affine ring.
| 0 | 0 | 1 | 0 | 0 | 0 |
Training Triplet Networks with GAN | Triplet networks are widely used models that are characterized by good
performance in classification and retrieval tasks. In this work we propose to
train a triplet network by putting it as the discriminator in Generative
Adversarial Nets (GANs). We make use of the good capability of representation
learning of the discriminator to increase the predictive quality of the model.
We evaluated our approach on the CIFAR-10 and MNIST datasets and observed a significant improvement in classification performance using the simple k-NN
method.
| 1 | 0 | 0 | 1 | 0 | 0 |
Recovering Pairwise Interactions Using Neural Networks | Recovering pairwise interactions, i.e. pairs of input features whose joint
effect on an output is different from the sum of their marginal effects, is
central in many scientific applications. We conceptualize a solution to this
problem as a two-stage procedure: first, we model the relationship between the
features and the output using a flexible hybrid neural network; second, we
detect feature interactions from the trained model. For the second step we
propose a simple and intuitive interaction measure (IM), which has no specific
requirements on the machine learning model used in the first step, only that it
defines a mapping from an input to an output. In a special case, it reduces
to the averaged Hessian of the input-output mapping. Importantly, our method
upper bounds the interaction recovery error with the error of the learning
model, which ensures that we can improve the recovered interactions by training
a more accurate model. We present analyses of simulated and real-world data
which demonstrate the benefits of our method compared to available
alternatives, and theoretically analyse its properties and relation to other
methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
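The averaged-Hessian special case of the interaction measure mentioned above can be sketched with central finite differences (Python; the quadratic test function and the sampling scheme are illustrative assumptions, standing in for a trained input-output mapping):

```python
# Estimate the averaged mixed partial d^2 f / (dx_i dx_j) over sample
# points; a nonzero value signals a pairwise interaction between i and j.
import random

def mixed_partial(f, x, i, j, h=1e-4):
    def shift(v, d_i, d_j):
        y = list(v)
        y[i] += d_i
        y[j] += d_j
        return y
    # central finite-difference stencil for the mixed second derivative
    return (f(shift(x, h, h)) - f(shift(x, h, -h))
            - f(shift(x, -h, h)) + f(shift(x, -h, -h))) / (4 * h * h)

def averaged_im(f, points, i, j):
    return sum(mixed_partial(f, x, i, j) for x in points) / len(points)

f = lambda x: x[0] * x[1] + x[2]   # true interaction only between features 0, 1
random.seed(0)
pts = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(50)]
```

On this toy function the measure is close to 1 for the interacting pair (0, 1) and close to 0 for the non-interacting pair (0, 2), matching the joint-vs-marginal-effect definition of an interaction.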
Boundary Algebraic Bethe Ansatz for a nineteen vertex model with $U_{q}[\mathrm{osp}(2|2)^{(2)}]$ symmetry | The boundary algebraic Bethe Ansatz for a supersymmetric nineteen-vertex model constructed from a three-dimensional representation of the twisted
quantum affine Lie superalgebra $U_{q}[\mathrm{osp}(2|2)^{(2)}]$ is presented.
The eigenvalues and eigenvectors of Sklyanin's transfer matrix, with diagonal
reflection $K$-matrices, are calculated and the corresponding Bethe Ansatz
equations are obtained.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optimal cost for strengthening or destroying a given network | Strengthening or destroying a network is a very important issue in designing
resilient networks or in planning attacks against networks including planning
strategies to immunize a network against diseases, viruses, etc. Here we
develop a method for strengthening or destroying a random network with a
minimum cost. We assume a correlation between the cost required to strengthen
or destroy a node and the degree of the node. Accordingly, we define a cost
function $c(k)$, which is the cost of strengthening or destroying a node with degree $k$. Using the degrees $k$ in a network and the cost function $c(k)$, we
develop a method for defining a list of priorities of degrees, and for choosing
the right group of degrees to be strengthened or destroyed that minimizes the
total price of strengthening or destroying the entire network. We find that the
list of priorities of degrees is universal and independent of the network's
degree distribution, for all kinds of random networks. The list of priorities
is the same for both strengthening a network and for destroying a network with
minimum cost. However, despite this similarity, the two cases differ in their $p_c$, the critical fraction of nodes that must remain functional to guarantee the existence of a giant component in the network.
| 0 | 1 | 0 | 0 | 0 | 0 |
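The degree-priority idea in the abstract above can be sketched with a greedy rule: rank degree classes by cost per removed edge, $c(k)/k$, and remove whole classes until a target fraction of nodes is gone. This is an illustrative heuristic under that assumed score, not the paper's exact priority list:

```python
import numpy as np

def removal_plan(degrees, cost, target_fraction):
    # Rank degree classes by cost per removed edge, c(k)/k, and remove
    # whole classes (cheapest first) until the target fraction of
    # nodes is removed. Returns the degree order used, the fraction
    # actually removed, and the total cost paid.
    degrees = np.asarray(degrees)
    ks = np.unique(degrees).tolist()
    order = sorted(ks, key=lambda k: cost(k) / max(k, 1))
    n = len(degrees)
    removed, total_cost, plan = 0, 0.0, []
    for k in order:
        count = int((degrees == k).sum())
        plan.append(k)
        removed += count
        total_cost += count * cost(k)
        if removed >= target_fraction * n:
            break
    return plan, removed / n, total_cost

# Example: six nodes with flat cost c(k) = 1. High-degree nodes are
# then removed first, since each removal deletes more edges per unit cost.
degrees = [1, 1, 1, 2, 2, 3]
plan, frac, total = removal_plan(degrees, lambda k: 1.0, 0.4)
```

With a flat cost the plan starts at the highest degree; a steeply increasing $c(k)$ reverses the priority, which is the trade-off the cost function controls.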
Direct Optimization through $\arg \max$ for Discrete Variational Auto-Encoder | Reparameterization of variational auto-encoders with continuous latent spaces
is an effective method for reducing the variance of their gradient estimates.
However, using the same approach when latent variables are discrete is
problematic, due to the resulting non-differentiable objective. In this work,
we present a direct optimization method that propagates gradients through a
non-differentiable $\arg \max$ prediction operation. We apply this method to
discrete variational auto-encoders, by modeling a discrete random variable by
the $\arg \max$ function of the Gumbel-Max perturbation model.
| 0 | 0 | 0 | 1 | 0 | 0 |
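The Gumbel-Max perturbation model mentioned in the abstract above draws a categorical sample as $\arg\max_i(\text{logit}_i + g_i)$ with i.i.d. Gumbel noise $g_i$. A minimal sketch of the sampling step only (the paper's contribution, propagating gradients through the $\arg\max$, is not shown):

```python
import numpy as np

def gumbel_max_sample(logits, rng):
    # Gumbel-Max trick: argmax_i (logits_i + g_i), g_i ~ Gumbel(0, 1),
    # is distributed as Categorical(softmax(logits)).
    g = rng.gumbel(size=logits.shape)
    return int(np.argmax(logits + g))

# Empirical check: sampling frequencies should match the probabilities.
rng = np.random.default_rng(0)
logits = np.log(np.array([0.1, 0.2, 0.7]))
counts = np.zeros(3)
for _ in range(20000):
    counts[gumbel_max_sample(logits, rng)] += 1
freq = counts / counts.sum()
```

The perturb-then-argmax form is what makes it natural to study gradients of the prediction operation itself.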
Orbit classification in the Hill problem: I. The classical case | The case of the classical Hill problem is numerically investigated by
performing a thorough and systematic classification of the initial conditions
of the orbits. More precisely, the initial conditions of the orbits are
classified into four categories: (i) non-escaping regular orbits; (ii) trapped
chaotic orbits; (iii) escaping orbits; and (iv) collision orbits. In order to
obtain a more general and complete view of the orbital structure of the
dynamical system our exploration takes place in both planar (2D) and the
spatial (3D) version of the Hill problem. For the 2D system we numerically
integrate large sets of initial conditions in several types of planes, while
for the system with three degrees of freedom, three-dimensional distributions
of initial conditions of orbits are examined. For distinguishing between
ordered and chaotic bounded motion the Smaller ALignment Index (SALI) method is used. We managed to locate several basins of bounded motion, as well as the basins of escape and collision, and to relate them to the corresponding escape and
collision time of the orbits. Our numerical calculations indicate that the
overall orbital dynamics of the Hamiltonian system constitute a complicated but highly interesting problem. We hope our contribution will be useful for a deeper understanding of the orbital properties of the classical Hill problem.
| 0 | 1 | 0 | 0 | 0 | 0 |
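The SALI indicator named in the abstract above tracks two normalized deviation vectors along an orbit: $\mathrm{SALI} = \min(\lVert \hat v_1 + \hat v_2 \rVert, \lVert \hat v_1 - \hat v_2 \rVert)$, which collapses exponentially for chaotic orbits as both vectors align with the unstable direction. The sketch below uses the standard map and its tangent map as a stand-in dynamical system; the paper integrates the Hill problem's flow instead:

```python
import numpy as np

def sali(v1, v2):
    # Smaller ALignment Index of two deviation vectors (normalized first).
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    return min(np.linalg.norm(v1 + v2), np.linalg.norm(v1 - v2))

def sali_evolution(theta, p, K, steps):
    # Iterate the standard map p' = p + K sin(theta), theta' = theta + p'
    # together with its tangent (variational) map, renormalizing the
    # deviation vectors each step, and return the final SALI.
    v1, v2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    for _ in range(steps):
        J = np.array([[1.0 + K * np.cos(theta), 1.0],
                      [K * np.cos(theta), 1.0]])
        p = p + K * np.sin(theta)
        theta = (theta + p) % (2.0 * np.pi)
        v1, v2 = J @ v1, J @ v2
        v1 /= np.linalg.norm(v1)
        v2 /= np.linalg.norm(v2)
    return sali(v1, v2)

# A strongly chaotic orbit (large K): SALI should drop to ~machine zero.
s_chaotic = sali_evolution(3.0, 2.0, K=10.0, steps=200)
s_ortho = sali(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

Orthogonal deviation vectors give the maximal value $\sqrt{2}$, parallel ones give 0, which is why a small SALI flags alignment and hence chaos.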
Bio-Inspired Local Information-Based Control for Probabilistic Swarm Distribution Guidance | This paper addresses a task allocation problem for a large-scale robotic
swarm, namely swarm distribution guidance problem. Unlike most of the existing
frameworks handling this problem, the proposed framework utilises locally available information to generate its time-varying stochastic policies.
As each agent requires only local consistency on information with neighbouring
agents, rather than the global consistency, the proposed framework offers
various advantages, e.g., a shorter timescale for using new information and
potential to incorporate an asynchronous decision-making process. We perform
theoretical analysis on the properties of the proposed framework. From the
analysis, it is proved that the framework can guarantee the convergence to the
desired density distribution even using local information while maintaining
advantages of global-information-based approaches. The design requirements for
these advantages are explicitly listed in this paper. This paper also provides
specific examples of how to implement the framework developed. The results of
numerical experiments confirm the effectiveness and comparability of the
proposed framework, compared with the global-information-based framework.
| 0 | 0 | 1 | 1 | 0 | 0 |
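One standard way to realize the guarantee described in the abstract above, convergence to a desired density using only local information, is a Metropolis-Hastings transition matrix: each agent needs only its own degree, its neighbours' degrees, and the desired densities. This generic construction is an assumed stand-in, not the paper's exact policy:

```python
import numpy as np

def metropolis_transition(A, pi):
    # Row-stochastic transition matrix over adjacency A whose stationary
    # distribution is the desired density pi. Proposal: move to a
    # uniformly random neighbour; accept with the Metropolis ratio.
    n = len(pi)
    deg = A.sum(axis=1)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if A[i, j] > 0:
                q_ij, q_ji = 1.0 / deg[i], 1.0 / deg[j]
                P[i, j] = q_ij * min(1.0, (pi[j] * q_ji) / (pi[i] * q_ij))
        P[i, i] = 1.0 - P[i].sum()  # stay put with leftover probability
    return P

# Triangle of three "bins" with a non-uniform desired swarm density.
A = np.ones((3, 3)) - np.eye(3)
pi = np.array([0.5, 0.3, 0.2])
P = metropolis_transition(A, pi)
```

Detailed balance ($\pi_i P_{ij} = \pi_j P_{ji}$) holds by construction, so the swarm's occupancy distribution converges to `pi` on any connected graph.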
Simultaneous Confidence Band for Partially Linear Panel Data Models with Fixed Effects | In this paper, we construct the simultaneous confidence band (SCB) for the
nonparametric component in partially linear panel data models with fixed
effects. We remove the fixed effects, and further obtain the estimators of
parametric and nonparametric components, which do not depend on the fixed
effects. We establish, under suitable conditions, the asymptotic distribution of the maximum absolute deviation between the estimated and the true nonparametric components, and hence the result
can be used to construct the simultaneous confidence band of the nonparametric
component. Constructing the simultaneous confidence band directly from this asymptotic distribution is difficult, however, because it involves estimators of the asymptotic bias and conditional variance, as well as the choice of bandwidth for estimating the second derivative of the nonparametric function; these cause a heavy computational burden and cumulative errors. To overcome these problems, we propose a
bootstrap method to construct the simultaneous confidence band. Simulation studies indicate that the proposed bootstrap method performs well with limited sample sizes.
| 0 | 0 | 0 | 1 | 0 | 0 |
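The bootstrap route in the abstract above can be sketched generically: resample residuals, refit, record each replicate's maximum absolute deviation from the original fit, and use its quantile as a simultaneous band half-width. The `fit` smoother below is an arbitrary placeholder; the paper's fixed-effects panel estimator is considerably more involved:

```python
import numpy as np

def bootstrap_band(x, y, fit, level=0.95, B=500, rng=None):
    # Residual bootstrap for a simultaneous band around fit(x, y):
    # the band half-width is the level-quantile of the bootstrap
    # distribution of max_t |fit(x, y*) - yhat|.
    if rng is None:
        rng = np.random.default_rng(0)
    yhat = fit(x, y)
    resid = y - yhat
    devs = np.empty(B)
    for b in range(B):
        y_star = yhat + rng.choice(resid, size=len(resid), replace=True)
        devs[b] = np.abs(fit(x, y_star) - yhat).max()
    half = np.quantile(devs, level)
    return yhat - half, yhat + half

# Toy data with a trivial "smoother" (the global mean) standing in
# for the nonparametric estimator.
x = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(1)
y = 1.0 + 0.1 * rng.standard_normal(50)
mean_fit = lambda x, y: np.full_like(y, y.mean())
lo, hi = bootstrap_band(x, y, mean_fit, B=200, rng=rng)
```

Because the quantile is taken of the *maximum* deviation, the resulting band is simultaneous over all of `x`, not pointwise.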
Tent-Shaped Surface Morphologies of Silicon: Texturization by Metal Induced Etching | Nano-metal/semiconductor junction-dependent porosification of silicon (Si) has been studied here. Silicon nanostructures (NS) have been textured on n- and p-type silicon wafers using Ag and Au metal-nanoparticle-induced chemical etching. The combinations n-Si/Ag and p-Si/Au form ohmic contacts and result in the same texturization of the Si surface on porosification, where a tent-shaped morphology has been observed consistently with both n- and p-type Si. In contrast, porosification results in different surface texturization for the other two combinations (p-Si/Ag and n-Si/Au), where Schottky contacts are formed. Quantitative analysis has been done using ImageJ to process the SEM images of the Si NS, which confirms that the tent-like Si NS are formed when etching of the silicon wafer is done by AgNPs and AuNPs on n- and p-type Si wafers, respectively. These easily prepared sharp tent-shaped Si NSs can be used for enhanced field-emission applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
Attention networks for image-to-text | The paper approaches the problem of image-to-text with attention-based
encoder-decoder networks that are trained to handle sequences of characters
rather than words. We experiment on lines of text from a popular handwriting
database with different attention mechanisms for the decoder. The model trained
with softmax attention achieves the lowest test error, outperforming several
other RNN-based models. Our results show that softmax attention is able to
learn a linear alignment whereas the alignment generated by sigmoid attention
is linear but much less precise.
| 0 | 0 | 0 | 1 | 0 | 0 |
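The softmax-attention step the abstract above compares against sigmoid attention can be sketched for one decoder step. The bilinear scoring below is an assumed simple variant; the paper's scoring network may differ:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(decoder_state, encoder_states, W):
    # Score every encoder state against the current decoder state,
    # normalise the scores with softmax (so the weights compete and
    # sum to 1), and form the weighted context vector.
    scores = encoder_states @ (W @ decoder_state)
    weights = softmax(scores)
    context = weights @ encoder_states
    return weights, context

# Toy example: three encoder states; the first one matches the query,
# so nearly all attention mass lands on it.
enc = np.eye(3)
weights, context = attend(np.array([5.0, 0.0, 0.0]), enc, np.eye(3))
```

Replacing `softmax` with an elementwise sigmoid removes the competition between positions, which is the mechanism difference behind the precision gap the abstract reports.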
A Monocular Vision System for Playing Soccer in Low Color Information Environments | Humanoid soccer robots perceive their environment exclusively through
cameras. This paper presents a monocular vision system that was originally
developed for use in the RoboCup Humanoid League, but is expected to be
transferable to other soccer leagues. Recent changes in the Humanoid League
rules resulted in a soccer environment with less color coding than in previous
years, which makes perception of the game situation more challenging. The
proposed vision system addresses these challenges by using brightness and
texture for the detection of the required field features and objects. Our
system is robust to changes in lighting conditions, and is designed for
real-time use on a humanoid soccer robot. This paper describes the main
components of the detection algorithms in use, and presents experimental
results from the soccer field, using ROS and the igus Humanoid Open Platform as
a testbed. The proposed vision system was used successfully at RoboCup 2015.
| 1 | 0 | 0 | 0 | 0 | 0 |
Low-shot learning with large-scale diffusion | This paper considers the problem of inferring image labels from images when
only a few annotated examples are available at training time. This setup is
often referred to as low-shot learning, where a standard approach is to
re-train the last few layers of a convolutional neural network learned on
separate classes for which training examples are abundant. We consider a
semi-supervised setting based on a large collection of images to support label
propagation. This is possible by leveraging the recent advances on large-scale
similarity graph construction.
We show that despite its conceptual simplicity, scaling label propagation up to hundreds of millions of images leads to state-of-the-art accuracy in the low-shot learning regime.
| 1 | 0 | 0 | 1 | 0 | 0 |
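The diffusion step of label propagation referenced in the abstract above has a standard closed form (Zhou et al. style); the sketch below shows only that step on a tiny graph. The paper's contribution, building the similarity graph at scale, is not shown:

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.9):
    # Closed-form label diffusion:
    #   F = (1 - alpha) * (I - alpha * S)^(-1) * Y,
    # with S = D^(-1/2) W D^(-1/2) the symmetrically normalised
    # similarity matrix; rows of Y one-hot encode the labelled seeds.
    d = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = Dinv @ W @ Dinv
    n = W.shape[0]
    F = np.linalg.solve(np.eye(n) - alpha * S, (1 - alpha) * Y)
    return F.argmax(axis=1)

# Two 2-node components with one labelled seed each; the unlabelled
# node in each component should inherit its seed's class.
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.array([[1, 0], [0, 0], [0, 1], [0, 0]], dtype=float)
labels = propagate_labels(W, Y)
```

At scale the matrix inverse is replaced by a few sparse iterations of $F \leftarrow \alpha S F + (1-\alpha) Y$, which converges to the same solution.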
Instrumentation and its Interaction with the Secondary Beam for the Fermilab Muon Campus | The Fermilab Muon Campus will host the Muon g-2 experiment, a world-class experiment dedicated to the search for signals of new physics. Strict demands
are placed on beam diagnostics in order to ensure delivery of high quality
beams to the storage ring with minimal losses. In this study, we briefly
describe the available secondary beam diagnostics for the Fermilab Muon Campus.
Then, with the aid of numerical simulations we detail their interaction with
the secondary beam. Finally, we compare our results against theoretical
findings.
| 0 | 1 | 0 | 0 | 0 | 0 |