text (string, lengths 57 to 2.88k) | labels (sequence of length 6)
---|---|
Title: Informed Sampling for Asymptotically Optimal Path Planning (Consolidated Version),
Abstract: Anytime almost-surely asymptotically optimal planners, such as RRT*,
incrementally find paths to every state in the search domain. This is
inefficient once an initial solution is found, as then only states that can
provide a better solution need to be considered. Exact knowledge of these
states requires solving the problem but can be approximated with heuristics.
This paper formally defines these sets of states and demonstrates how they
can be used to analyze arbitrary planning problems. It uses the well-known
$L^2$ norm (i.e., Euclidean distance) to analyze minimum-path-length problems
and shows that existing approaches decrease in effectiveness factorially (i.e.,
faster than exponentially) with state dimension. It presents a method to
address this curse of dimensionality by directly sampling the prolate
hyperspheroids (i.e., symmetric $n$-dimensional ellipses) that define the $L^2$
informed set.
The importance of this direct informed sampling technique is demonstrated
with Informed RRT*. This extension of RRT* has less theoretical dependence on
state dimension and problem size than existing techniques and allows for linear
convergence on some problems. It is shown experimentally to find better
solutions faster than existing techniques on both abstract planning problems
and HERB, a two-arm manipulation robot. | [ 1, 0, 0, 0, 0, 0 ] |
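The direct informed sampling described above draws states uniformly from the prolate hyperspheroid whose focal points are the start and goal and whose transverse diameter is the current best solution cost. Below is a minimal Python sketch of that idea, following the standard construction (a uniform sample in a unit ball, scaled and rotated onto the hyperspheroid); it is an illustration rather than the authors' implementation, and the function and variable names are my own.

```python
import numpy as np

def rotation_to_world(x_start, x_goal):
    """Rotation matrix whose first column points from start to goal."""
    a1 = (x_goal - x_start) / np.linalg.norm(x_goal - x_start)
    M = np.outer(a1, np.eye(len(a1))[0])           # a1 * e1^T
    U, _, Vt = np.linalg.svd(M)
    diag = np.ones(len(a1))
    diag[-1] = np.linalg.det(U) * np.linalg.det(Vt)
    return U @ np.diag(diag) @ Vt

def sample_informed(x_start, x_goal, c_best, rng=np.random.default_rng()):
    """Uniform sample from the L2 informed set (a prolate hyperspheroid)."""
    n = len(x_start)
    c_min = np.linalg.norm(x_goal - x_start)
    centre = (x_start + x_goal) / 2.0
    # Radii of the hyperspheroid: one transverse and n-1 conjugate semi-axes.
    r = np.full(n, np.sqrt(max(c_best**2 - c_min**2, 0.0)) / 2.0)
    r[0] = c_best / 2.0
    # Uniform sample in the unit n-ball.
    x_ball = rng.normal(size=n)
    x_ball = x_ball / np.linalg.norm(x_ball) * rng.uniform() ** (1.0 / n)
    return rotation_to_world(x_start, x_goal) @ (r * x_ball) + centre

# Example: sample a state that could improve a cost-10 path in 3D.
print(sample_informed(np.zeros(3), np.array([8.0, 0.0, 0.0]), c_best=10.0))
```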
Title: Holistic Planimetric prediction to Local Volumetric prediction for 3D Human Pose Estimation,
Abstract: We propose a novel approach to 3D human pose estimation from a single depth
map. Recently, the convolutional neural network (CNN) has become a powerful
paradigm in computer vision. Many computer vision tasks have benefited from
CNNs; however, the conventional approach of directly regressing 3D body joint
locations from an image does not yield noticeably improved performance. In
contrast, we formulate the problem as estimating per-voxel likelihood of key
body joints from a 3D occupancy grid. We argue that learning a mapping from
volumetric input to volumetric output with 3D convolution consistently improves
the accuracy when compared to learning a regression from depth map to 3D joint
coordinates. We propose a two-stage approach to reduce the computational
overhead caused by volumetric representation and 3D convolution: Holistic 2D
prediction and Local 3D prediction. In the first stage, Planimetric Network
(P-Net) estimates per-pixel likelihood for each body joint in the holistic 2D
space. In the second stage, Volumetric Network (V-Net) estimates the per-voxel
likelihood of each body joint in the local 3D space around the 2D estimations
of the first stage, effectively reducing the computational cost. Our model
outperforms existing methods by a large margin in publicly available datasets. | [ 1, 0, 0, 0, 0, 0 ] |
Title: Automated design of collective variables using supervised machine learning,
Abstract: Selection of appropriate collective variables for enhancing sampling of
molecular simulations remains an unsolved problem in computational biophysics.
In particular, picking initial collective variables (CVs) is especially
challenging in higher dimensions. Which atomic coordinates, or transforms thereof,
from a list of thousands should one pick for enhanced sampling runs? How
does a modeler even begin to pick starting coordinates for investigation? This
remains true even in the case of simple two state systems and only increases in
difficulty for multi-state systems. In this work, we solve the initial CV
problem using a data-driven approach inspired by the field of supervised
machine learning. In particular, we show how the decision functions in
supervised machine learning (SML) algorithms can be used as initial CVs
(SML_cv) for accelerated sampling. Using solvated alanine dipeptide and
Chignolin mini-protein as our test cases, we illustrate how the distance to the
Support Vector Machines' decision hyperplane, the output probability estimates
from Logistic Regression, the outputs from deep neural network classifiers, and
other classifiers may be used to reversibly sample slow structural transitions.
We discuss the utility of other SML algorithms that might be useful for
identifying CVs for accelerating molecular simulations. | [ 0, 0, 0, 1, 1, 0 ] |
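As a concrete illustration of using a classifier's decision function as a collective variable, the sketch below trains a linear SVM on configurations labelled as belonging to one of two metastable states and then evaluates the signed distance to the decision hyperplane for new configurations. It is only a schematic example with synthetic data and placeholder feature choices, not the SML_cv workflow from the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for two metastable states described by a few
# structural features (e.g. selected dihedral angles or distances).
state_A = rng.normal(loc=-1.0, scale=0.3, size=(500, 4))
state_B = rng.normal(loc=+1.0, scale=0.3, size=(500, 4))
X = np.vstack([state_A, state_B])
y = np.r_[np.zeros(500), np.ones(500)]

# Train a linear SVM to separate the two states.
clf = SVC(kernel="linear").fit(X, y)

# The signed distance to the decision hyperplane acts as the collective
# variable: negative values ~ state A, positive ~ state B, and values
# near zero lie along the putative transition region.
new_configs = rng.normal(scale=1.0, size=(5, 4))
cv_values = clf.decision_function(new_configs)
print(cv_values)
```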
Title: Berry-Esseen Theorem and Quantitative homogenization for the Random Conductance Model with degenerate Conductances,
Abstract: We study the random conductance model on the lattice $\mathbb{Z}^d$, i.e. we
consider a linear, finite-difference, divergence-form operator with random
coefficients and the associated random walk under random conductances. We allow
the conductances to be unbounded and degenerate elliptic, but they need to
satisfy a strong moment condition and a quantified ergodicity assumption in
the form of a spectral gap estimate. As a main result we obtain, in dimension $d\geq
3$, quantitative central limit theorems for the random walk in the form of a
Berry-Esseen estimate with speed $t^{-\frac 1 5+\varepsilon}$ for $d\geq 4$ and
$t^{-\frac{1}{10}+\varepsilon}$ for $d=3$. Additionally, in the uniformly
elliptic case in low dimensions $d=2,3$ we improve the rate in a quantitative
Berry-Esseen theorem recently obtained by Mourrat. As a central analytic
ingredient, for $d\geq 3$ we establish near-optimal decay estimates on the
semigroup associated with the environment process. These estimates also play a
central role in quantitative stochastic homogenization and extend some recent
results by Gloria, Otto and the second author to the degenerate elliptic case. | [ 0, 0, 1, 0, 0, 0 ] |
Title: Detecting Multiple Communities Using Quantum Annealing on the D-Wave System,
Abstract: A very important problem in combinatorial optimization is partitioning a
network into communities of densely connected nodes, where the connectivity
between nodes inside a particular community is large compared to the
connectivity between nodes belonging to different ones. This problem is known
as community detection, and has become very important in various fields of
science including chemistry, biology and social sciences. The problem of
community detection is a twofold problem that consists of determining the
number of communities and, at the same time, finding those communities. This
drastically increases the solution space for heuristics to work on, compared to
traditional graph partitioning problems. In many of the scientific domains in
which graphs are used, there is the need to have the ability to partition a
graph into communities with the ``highest quality'' possible since the presence
of even small isolated communities can become crucial to explain a particular
phenomenon. We have explored community detection using the power of quantum
annealers, and in particular the D-Wave 2X and 2000Q machines. It turns out
that the problem of detecting at most two communities naturally fits into the
architecture of a quantum annealer with almost no need of reformulation. This
paper addresses a systematic study of detecting two or more communities in a
network using a quantum annealer. | [ 1, 0, 0, 0, 0, 0 ] |
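One common way to phrase two-community detection for an annealer is as an Ising/QUBO objective built from the modularity matrix; whether this matches the exact formulation used in the paper is an assumption, so the snippet below is only a generic sketch that solves a tiny instance by brute force instead of on quantum hardware.

```python
import itertools
import numpy as np

# Small toy graph: two 4-node cliques joined by a single edge.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (4, 5), (4, 6), (4, 7), (5, 6), (5, 7), (6, 7), (3, 4)]
n = 8
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
k = A.sum(axis=1)
m = A.sum() / 2.0

# Modularity matrix B_ij = A_ij - k_i k_j / (2m).  Two-community detection
# maximizes s^T B s over spin assignments s in {-1,+1}^n, which is exactly
# the kind of Ising objective an annealer accepts.
B = A - np.outer(k, k) / (2.0 * m)

best_s, best_val = None, -np.inf
for bits in itertools.product([-1.0, 1.0], repeat=n):  # brute force is fine for 8 nodes
    s = np.array(bits)
    val = s @ B @ s
    if val > best_val:
        best_s, best_val = s, val

print("community assignment:", best_s)
print("modularity:", best_val / (4.0 * m))
```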
Title: Evaluating Feature Importance Estimates,
Abstract: Estimating the influence of a given feature on a model prediction is
challenging. We introduce ROAR, RemOve And Retrain, a benchmark to evaluate the
accuracy of interpretability methods that estimate input feature importance in
deep neural networks. We remove a fraction of input features deemed to be most
important according to each estimator and measure the change to the model
accuracy upon retraining. The most accurate estimator will identify inputs as
important whose removal causes the most damage to model performance relative to
all other estimators. This evaluation produces thought-provoking results -- we
find that several estimators are less accurate than a random assignment of
feature importance. However, averaging a set of squared noisy estimators (a
variant of a technique proposed by Smilkov et al. (2017)) leads to significant
gains in accuracy for each method considered and far outperforms such a random
guess. | [ 0, 0, 0, 1, 0, 0 ] |
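The ROAR procedure described above can be summarized in a few lines: remove the top-ranked fraction of input features (here by replacing them with a per-feature training mean), retrain the model from scratch, and record the accuracy drop. The sketch below uses synthetic data, a logistic-regression model, and placeholder importance rankings, so it only illustrates the evaluation loop, not the paper's exact setup with deep networks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=2000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_after_removal(importance, frac):
    """Mask the top `frac` most-important features and retrain."""
    k = int(frac * X.shape[1])
    top = np.argsort(importance)[::-1][:k]
    X_tr_m, X_te_m = X_tr.copy(), X_te.copy()
    # Replace removed features with their training-set mean (an uninformative value).
    X_tr_m[:, top] = X_tr[:, top].mean(axis=0)
    X_te_m[:, top] = X_tr[:, top].mean(axis=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr_m, y_tr)
    return model.score(X_te_m, y_te)

# Placeholder "estimators": a sensible ranking vs. a random one.
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
rankings = {"abs(coef)": np.abs(base.coef_[0]), "random": rng.random(20)}

for name, imp in rankings.items():
    print(name, [round(accuracy_after_removal(imp, f), 3) for f in (0.1, 0.3, 0.5)])
```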
Title: Distributed dynamic modeling and monitoring for large-scale industrial processes under closed-loop control,
Abstract: For large-scale industrial processes under closed-loop control, process
dynamics directly resulting from control action are typical characteristics and
may show different behaviors between real faults and normal changes of
operating conditions. However, conventional distributed monitoring approaches
do not consider the closed-loop control mechanism and only explore static
characteristics, which thus are incapable of distinguishing between real
process faults and nominal changes of operating conditions, leading to
unnecessary alarms. In this regard, this paper proposes a distributed
monitoring method for closed-loop industrial processes by concurrently
exploring static and dynamic characteristics. First, the large-scale
closed-loop process is decomposed into several subsystems by developing a
sparse slow feature analysis (SSFA) algorithm which captures changes of both
static and dynamic information. Second, distributed models are developed to
separately capture static and dynamic characteristics from the local and global
aspects. Based on the distributed monitoring system, a two-level monitoring
strategy is proposed to check different influences on process characteristics
resulting from changes of the operating conditions and control action, and thus
the two changes can be well distinguished from each other. Case studies are
conducted based on both benchmark data and real industrial process data to
illustrate the effectiveness of the proposed method. | [ 1, 0, 0, 1, 0, 0 ] |
Title: Learning to compress and search visual data in large-scale systems,
Abstract: The problem of high-dimensional and large-scale representation of visual data
is addressed from an unsupervised learning perspective. The emphasis is put on
discrete representations, where the description length can be measured in bits
and hence the model capacity can be controlled. The algorithmic infrastructure
is developed based on the synthesis and analysis prior models whose
rate-distortion properties, as well as capacity vs. sample complexity
trade-offs are carefully optimized. These models are then extended to
multi-layers, namely the RRQ and the ML-STC frameworks, where the latter is
further evolved as a powerful deep neural network architecture with fast and
sample-efficient training and discrete representations. For the developed
algorithms, three important applications are developed. First, the problem of
large-scale similarity search in retrieval systems is addressed, where a
double-stage solution is proposed leading to faster query times and shorter
database storage. Second, the problem of learned image compression is targeted,
where the proposed models can capture more redundancies from the training
images than the conventional compression codecs. Finally, the proposed
algorithms are used to solve ill-posed inverse problems. In particular, the
problems of image denoising and compressive sensing are addressed with
promising results. | [ 1, 0, 0, 1, 0, 0 ] |
Title: Simple Round Compression for Parallel Vertex Cover,
Abstract: Recently, Czumaj et al. (arXiv 2017) presented a parallel (almost)
$2$-approximation algorithm for the maximum matching problem in only
$O({(\log\log{n})^2})$ rounds of the massive parallel computation (MPC)
framework, when the memory per machine is $O(n)$. The main approach in their
work is a way of compressing $O(\log{n})$ rounds of a distributed algorithm for
maximum matching into only $O({(\log\log{n})^2})$ MPC rounds.
In this note, we present a similar algorithm for the closely related problem
of approximating the minimum vertex cover in the MPC framework. We show that
one can achieve an $O(\log{n})$ approximation to minimum vertex cover in only
$O(\log\log{n})$ MPC rounds when the memory per machine is $O(n)$. Our
algorithm for vertex cover is similar to the maximum matching algorithm of
Czumaj et al., but avoids many of the intricacies in their approach and as a
result admits a considerably simpler analysis (at a cost of a worse
approximation guarantee). We obtain this result by modifying a previous
parallel algorithm by Khanna and the author (SPAA 2017) for vertex cover that
allowed for compressing $O(\log{n})$ rounds of a distributed algorithm into
constant MPC rounds when the memory allowed per machine is $O(n\sqrt{n})$. | [ 1, 0, 0, 0, 0, 0 ] |
Title: ELDAR, a new method to identify AGN in multi-filter surveys: the ALHAMBRA test-case,
Abstract: We present ELDAR, a new method that exploits the potential of medium- and
narrow-band filter surveys to securely identify active galactic nuclei (AGN)
and determine their redshifts. Our methodology improves on traditional
approaches by looking for AGN emission lines expected to be identified against
the continuum, thanks to the width of the filters. To assess its performance,
we apply ELDAR to the data of the ALHAMBRA survey, which covered an effective
area of $2.38\,{\rm deg}^2$ with 20 contiguous medium-band optical filters down
to F814W$\simeq 24.5$. Using two different configurations of ELDAR in which we
require the detection of at least 2 and 3 emission lines, respectively, we
extract two catalogues of type-I AGN. The first is composed of 585 sources
($79\,\%$ of them spectroscopically-unknown) down to F814W$=22.5$ at $z_{\rm
phot}>1$, which corresponds to a surface density of $209\,{\rm deg}^{-2}$. In
the second, the 494 selected sources ($83\,\%$ of them
spectroscopically-unknown) reach F814W$=23$ at $z_{\rm phot}>1.5$, for a
corresponding number density of $176\,{\rm deg}^{-2}$. Then, using samples of
spectroscopically-known AGN in the ALHAMBRA fields, for the two catalogues we
estimate a completeness of $73\,\%$ and $67\,\%$, and a redshift precision of
$1.01\,\%$ and $0.86\,\%$ (with outliers fractions of $8.1\,\%$ and $5.8\,\%$).
At $z>2$, where our selection performs best, we reach $85\,\%$ and $77\,\%$
completeness and we find no contamination from galaxies. | [ 0, 1, 0, 0, 0, 0 ] |
Title: Identification of multiple hard X-ray sources in solar flares: A Bayesian analysis of the February 20 2002 event,
Abstract: The hard X-ray emission in a solar flare is typically characterized by a
number of discrete sources, each with its own spectral, temporal, and spatial
variability. Establishing the relationship amongst these sources is critical to
determine the role of each in the energy release and transport processes that
occur within the flare. In this paper we present a novel method to identify and
characterize each source of hard X-ray emission. The method permits a
quantitative determination of the most likely number of subsources present, and
of the relative probabilities that the hard X-ray emission in a given subregion
of the flare is represented by a complicated multiple source structure or by a
simpler single source. We apply the method to a well-studied flare on
2002 February 20 in order to assess competing claims as to the number of
chromospheric footpoint sources present, and hence to the complexity of the
underlying magnetic geometry/topology. Contrary to previous claims of the need
for multiple sources to account for the chromospheric hard X-ray emission at
different locations and times, we find that a simple
two-footpoint-plus-coronal-source model is the most probable explanation for
the data. We also find that one of the footpoint sources moves quite rapidly
throughout the event, a factor that presumably complicated previous analyses.
The inferred velocity of the footpoint corresponds to a very high induced
electric field, compatible with those in thin reconnecting current sheets. | [ 0, 0, 0, 1, 0, 0 ] |
Title: A General Framework for Robust Interactive Learning,
Abstract: We propose a general framework for interactively learning models, such as
(binary or non-binary) classifiers, orderings/rankings of items, or clusterings
of data points. Our framework is based on a generalization of Angluin's
equivalence query model and Littlestone's online learning model: in each
iteration, the algorithm proposes a model, and the user either accepts it or
reveals a specific mistake in the proposal. The feedback is correct only with
probability $p > 1/2$ (and adversarially incorrect with probability $1 - p$),
i.e., the algorithm must be able to learn in the presence of arbitrary noise.
The algorithm's goal is to learn the ground truth model using few iterations.
Our general framework is based on a graph representation of the models and
user feedback. To be able to learn efficiently, it is sufficient that there be
a graph $G$ whose nodes are the models and (weighted) edges capture the user
feedback, with the property that if $s, s^*$ are the proposed and target
models, respectively, then any (correct) user feedback $s'$ must lie on a
shortest $s$-$s^*$ path in $G$. Under this one assumption, there is a natural
algorithm reminiscent of the Multiplicative Weights Update algorithm, which
will efficiently learn $s^*$ even in the presence of noise in the user's
feedback.
From this general result, we rederive with barely any extra effort classic
results on learning of classifiers and a recent result on interactive
clustering; in addition, we easily obtain new interactive learning algorithms
for ordering/ranking. | [ 1, 0, 0, 0, 0, 0 ] |
Title: Predicting Gravitational Lensing by Stellar Remnants,
Abstract: Gravitational lensing provides a means to measure mass that does not rely on
detecting and analysing light from the lens itself. Compact objects are ideal
gravitational lenses, because they have relatively large masses and are dim. In
this paper we describe the prospects for predicting lensing events generated by
the local population of compact objects, consisting of 250 neutron stars, 5
black holes, and approximately 35,000 white dwarfs. By focusing on a population
of nearby compact objects with measured proper motions and known distances from
us, we can measure their masses by studying the characteristics of any lensing
event they generate. Here we concentrate on shifts in the position of a
background source due to lensing by a foreground compact object. With HST,
JWST, and Gaia, measurable centroid shifts caused by lensing are relatively
frequent occurrences. We find that 30-50 detectable events per decade are
expected for white dwarfs. Because relatively few neutron stars and black holes
have measured distances and proper motions, it is more difficult to compute
realistic rates for them. However, we show that at least one isolated neutron
star has likely produced detectable events during the past several decades.
This work is particularly relevant to the upcoming data releases by the Gaia
mission and also to data that will be collected by JWST. Monitoring predicted
microlensing events will not only help to determine the masses of compact
objects, but will also potentially discover dim companions to these stellar
remnants, including orbiting exoplanets. | [ 0, 1, 0, 0, 0, 0 ] |
Title: Weak separation properties for closed subgroups of locally compact groups,
Abstract: Three separation properties for a closed subgroup $H$ of a locally compact
group $G$ are studied: (1) the existence of a bounded approximate indicator for
$H$, (2) the existence of a completely bounded invariant projection of
$VN\left(G\right)$ onto $VN_{H}\left(G\right)$, and (3) the approximability of
the characteristic function $\chi_{H}$ by functions in $M_{cb}A\left(G\right)$
with respect to the weak$^{*}$ topology of $M_{cb}A\left(G_{d}\right)$. We show
that the $H$-separation property of Kaniuth and Lau is characterized by the
existence of certain bounded approximate indicators for $H$ and that a
discretized analogue of the $H$-separation property is equivalent to (3).
Moreover, we give a related characterization of amenability of $H$ in terms of
any group $G$ containing $H$ as a closed subgroup. The weak amenability of $G$
or that $G_{d}$ satisfies the approximation property, in combination with the
existence of a natural projection (in the sense of Lau and Ülger), are shown
to suffice to conclude (3). Several consequences of (2) involving the
cb-multiplier completion of $A\left(G\right)$ are given. Finally, a convolution
technique for averaging over the closed subgroup $H$ is developed and used to
weaken a condition for the existence of a bounded approximate indicator for
$H$. | [ 0, 0, 1, 0, 0, 0 ] |
Title: A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics,
Abstract: Machine learning (ML) is increasingly deployed in real world contexts,
supplying actionable insights and forming the basis of automated
decision-making systems. While issues resulting from biases pre-existing in
training data have been at the center of the fairness debate, these systems are
also affected by technical and emergent biases, which often arise as
context-specific artifacts of implementation. This position paper interprets
technical bias as an epistemological problem and emergent bias as a dynamical
feedback phenomenon. In order to stimulate debate on how to change machine
learning practice to effectively address these issues, we explore this broader
view on bias, stress the need to reflect on epistemology, and point to
value-sensitive design methodologies to revisit the design and implementation
process of automated decision-making systems. | [ 1, 0, 0, 1, 0, 0 ] |
Title: Solution to the relaxation problem for a gas with a distribution function dependent on the velocity modulus,
Abstract: The paper presents a solution to the Boltzmann kinetic equation based on the
construction of its discrete conservative model. Discrete analogue of the
collision integral is presented as a contraction of a tensor, which is
independent of the initial distribution function, with a tensor composed of
medium densities in the cells. Numerical implementation of the
discrete model is demonstrated on the example of the isotropic gas relaxation
problem applied to the hard spheres model. The key feature of the method is
independence of the collision tensor components from the distribution function.
Consequently the components of the collision tensor are calculated once for
various initial distribution functions, which substantially increases
performance of the suggested method. | [ 0, 1, 0, 0, 0, 0 ] |
Title: Robust Regression for Automatic Fusion Plasma Analysis based on Generative Modeling,
Abstract: The first step to realize automatic experimental data analysis for fusion
plasma experiments is fitting noisy data of temperature and density spatial
profiles, which are obtained routinely. However, it has been difficult to
construct algorithms that fit all the data without over- and under-fitting. In
this paper, we show that this difficulty originates from the lack of knowledge
of the probability distribution that the measurement data follow. We
demonstrate the use of a machine learning technique to estimate the data
distribution and to construct an optimal generative model. We show that the
fitting algorithm based on the generative modeling outperforms classical
heuristic methods in terms of both stability and accuracy. | [ 0, 0, 0, 1, 0, 0 ] |
Title: Optimizing Long Short-Term Memory Recurrent Neural Networks Using Ant Colony Optimization to Predict Turbine Engine Vibration,
Abstract: This article expands on research that has been done to develop a recurrent
neural network (RNN) capable of predicting aircraft engine vibrations using
long short-term memory (LSTM) neurons. LSTM RNNs can provide a more
generalizable and robust method for prediction over analytical calculations of
engine vibration, as analytical calculations must be solved iteratively based
on specific empirical engine parameters, making this approach ungeneralizable
across multiple engines. In initial work, multiple LSTM RNN architectures were
proposed, evaluated and compared. This research improves the performance of the
most effective LSTM network design proposed in the previous work by using a
promising neuroevolution method based on ant colony optimization (ACO) to
develop and enhance the LSTM cell structure of the network. A parallelized
version of the ACO neuroevolution algorithm has been developed and the evolved
LSTM RNNs were compared to the previously used fixed topology. The evolved
networks were trained on a large database of flight data records obtained from
an airline containing flights that suffered from excessive vibration. Results
were obtained using MPI (Message Passing Interface) on a high performance
computing (HPC) cluster, evolving 1000 different LSTM cell structures using 168
cores over 4 days. The new evolved LSTM cells showed an improvement of 1.35%,
reducing prediction error from 5.51% to 4.17% when predicting excessive engine
vibrations 10 seconds in the future, while at the same time dramatically
reducing the number of weights from 21,170 to 11,810. | [ 1, 0, 0, 0, 0, 0 ] |
Title: PhyShare: Sharing Physical Interaction in Virtual Reality,
Abstract: We present PhyShare, a new haptic user interface based on actuated robots.
Virtual reality has recently been gaining wide adoption, and effective
haptic feedback in these scenarios can strongly support the user's senses in
bridging the virtual and physical worlds. Since participants do not directly observe
these robotic proxies, we investigate the multiple mappings between physical
robots and virtual proxies that can utilize the resources needed to provide a
well-rounded VR experience. PhyShare bots can act either as directly touchable
objects or invisible carriers of physical objects, depending on different
scenarios. They also support distributed collaboration, allowing remotely
located VR collaborators to share the same physical feedback. | [ 1, 0, 0, 0, 0, 0 ] |
Title: Kneser ranks of random graphs and minimum difference representations,
Abstract: Every graph $G=(V,E)$ is an induced subgraph of some Kneser graph of rank
$k$, i.e., there is an assignment of (distinct) $k$-sets $v \mapsto A_v$ to the
vertices $v\in V$ such that $A_u$ and $A_v$ are disjoint if and only if $uv\in
E$. The smallest such $k$ is called the Kneser rank of $G$ and denoted by
$f_{\rm Kneser}(G)$. As an application of a result of Frieze and Reed
concerning the clique cover number of random graphs we show that for constant
$0< p< 1$ there exist constants $c_i=c_i(p)>0$, $i=1,2$ such that with high
probability \[ c_1 n/(\log n)< f_{\rm Kneser}(G) < c_2 n/(\log n). \] We apply
this for other graph representations defined by Boros, Gurvich and Meshulam. A
{\em $k$-min-difference representation} of a graph $G$ is an assignment of a
set $A_i$ to each vertex $i\in V(G)$ such that \[ ij\in E(G) \,\,
\Leftrightarrow \, \, \min \{|A_i\setminus A_j|,|A_j\setminus A_i| \}\geq k. \]
The smallest $k$ such that there exists a $k$-min-difference representation of
$G$ is denoted by $f_{\min}(G)$. Balogh and Prince proved in 2009 that for
every $k$ there is a graph $G$ with $f_{\min}(G)\geq k$. We prove that there
are constants $c''_1, c''_2>0$ such that $c''_1 n/(\log n)< f_{\min}(G) <
c''_2n/(\log n)$ holds for almost all bipartite graphs $G$ on $n+n$ vertices. | [ 0, 0, 1, 0, 0, 0 ] |
Title: Visualizing spreading phenomena on complex networks,
Abstract: Graph drawings are useful tools for exploring the structure and dynamics of
data that can be represented by pair-wise relationships among a set of objects.
Typical real-world social, biological or technological networks exhibit high
complexity resulting from a large number and broad heterogeneity of objects and
relationships. Thus, mapping these networks into a low-dimensional space to
visualize the dynamics of network-driven processes is a challenging task. Often
we want to analyze how a single node is influenced by or is influencing its
local network as the source of a spreading process. Here I present a network
layout algorithm for graphs with millions of nodes that visualizes spreading
phenomena from the perspective of a single node. The algorithm consists of
three stages to allow for an interactive graph exploration: First, a global
solution for the network layout is found in spherical space that minimizes
distance errors between all nodes. Second, a focal node is interactively
selected, and distances to this node are further optimized. Third, node
coordinates are mapped to a circular representation and drawn with additional
features to represent the network-driven phenomenon. The effectiveness and
scalability of this method are shown for a large collaboration network of
scientists, where we are interested in the citation dynamics around a focal
author. | [ 1, 0, 0, 0, 0, 0 ] |
Title: In silico evolution of signaling networks using rule-based models: bistable response dynamics,
Abstract: One of the ultimate goals in biology is to understand the design principles
of biological systems. Such principles, if they exist, can help us better
understand complex, natural biological systems and guide the engineering of de
novo ones. Towards deciphering design principles, in silico evolution of
biological systems with proper abstraction is a promising approach. Here, we
demonstrate the application of in silico evolution combined with rule-based
modelling for exploring design principles of cellular signaling networks. This
application is based on a computational platform, called BioJazz, which allows
in silico evolution of signaling networks with unbounded complexity. We provide
a detailed introduction to BioJazz architecture and implementation and describe
how it can be used to evolve and/or design signaling networks with defined
dynamics. For the latter, we evolve signaling networks with switch-like
response dynamics and demonstrate how BioJazz can result in new biological
insights on network structures that can endow bistable response dynamics. This
example also demonstrates both the power of BioJazz in evolving and designing
signaling networks and its limitations at the current stage of development. | [ 0, 0, 0, 0, 1, 0 ] |
Title: Deep Recurrent Neural Networks for seizure detection and early seizure detection systems,
Abstract: Epilepsy is a common neurological disease, affecting about 0.6-0.8% of the world
population. Epileptic patients suffer from chronic unprovoked seizures, which
can result in a broad spectrum of debilitating medical and social consequences.
Since seizures, in general, occur infrequently and are unpredictable, automated
seizure detection systems are recommended to screen for seizures during
long-term electroencephalogram (EEG) recordings. In addition, systems for early
seizure detection can lead to the development of new types of intervention
systems that are designed to control or shorten the duration of seizure events.
In this article, we investigate the utility of recurrent neural networks (RNNs)
in designing seizure detection and early seizure detection systems. We propose
a deep learning framework via the use of Gated Recurrent Unit (GRU) RNNs for
seizure detection. We use publicly available data in order to evaluate our
method and demonstrate very promising evaluation results with overall accuracy
close to 100 %. We also systematically investigate the application of our
method for early seizure warning systems. Our method can detect about 98% of
seizure events within the first 5 seconds of the overall epileptic seizure
duration. | [ 1, 0, 0, 0, 0, 0 ] |
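A minimal PyTorch-style sketch of a GRU-based EEG window classifier is shown below, just to make the architecture concrete; the layer sizes, window length, and channel count are placeholder values and do not come from the paper.

```python
import torch
import torch.nn as nn

class GRUSeizureDetector(nn.Module):
    """Classify a window of multi-channel EEG as seizure / non-seizure."""
    def __init__(self, n_channels=23, hidden=64, n_layers=2):
        super().__init__()
        self.gru = nn.GRU(n_channels, hidden, n_layers, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # two classes

    def forward(self, x):                          # x: (batch, time, channels)
        _, h_n = self.gru(x)                       # h_n: (layers, batch, hidden)
        return self.head(h_n[-1])                  # logits from the last layer's state

# Dummy batch: 8 windows of 256 time steps from 23 EEG channels.
model = GRUSeizureDetector()
logits = model(torch.randn(8, 256, 23))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()
print(logits.shape, float(loss))
```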
Title: QUICKAR: Automatic Query Reformulation for Concept Location using Crowdsourced Knowledge,
Abstract: During maintenance, software developers deal with numerous change requests
made by the users of a software system. Studies show that the developers find
it challenging to select appropriate search terms from a change request during
concept location. In this paper, we propose a novel technique--QUICKAR--that
automatically suggests helpful reformulations for a given query by leveraging
the crowdsourced knowledge from Stack Overflow. It determines semantic
similarity or relevance between any two terms by analyzing their adjacent word
lists from the programming questions of Stack Overflow, and then suggests
semantically relevant queries for concept location. Experiments using 510
queries from two software systems suggest that our technique can improve or
preserve the quality of 76% of the initial queries on average, which is
promising. Comparison with one baseline technique validates our preliminary
findings, and also demonstrates the potential of our technique. | [ 1, 0, 0, 0, 0, 0 ] |
Title: Learned Belief-Propagation Decoding with Simple Scaling and SNR Adaptation,
Abstract: We consider the weighted belief-propagation (WBP) decoder recently proposed
by Nachmani et al. where different weights are introduced for each Tanner graph
edge and optimized using machine learning techniques. Our focus is on
simple-scaling models that use the same weights across certain edges to reduce
the storage and computational burden. The main contribution is to show that
simple scaling with few parameters often achieves the same gain as the full
parameterization. Moreover, several training improvements for WBP are proposed.
For example, it is shown that minimizing average binary cross-entropy is
suboptimal in general in terms of bit error rate (BER) and a new "soft-BER"
loss is proposed which can lead to better performance. We also investigate
parameter adapter networks (PANs) that learn the relation between the
signal-to-noise ratio and the WBP parameters. As an example, for the (32,16)
Reed-Muller code with a highly redundant parity-check matrix, training a PAN
with soft-BER loss gives near-maximum-likelihood performance assuming simple
scaling with only three parameters. | [ 1, 0, 0, 1, 0, 0 ] |
Title: Thick-medium model of transverse pattern formation in optically excited cold two-level atoms with a feedback mirror,
Abstract: We study a pattern forming instability in a laser driven optically thick
cloud of cold two-level atoms with a planar feedback mirror. A theoretical
model is developed, enabling a full analysis of transverse patterns in a medium
with saturable nonlinearity, taking into account diffraction within the medium,
and both the transmission and reflection gratings. Focus of the analysis is on
combined treatment of nonlinear propagation in a diffractively- and
optically-thick medium and the boundary condition given by feedback. We
demonstrate explicitly how diffraction within the medium breaks the degeneracy
of Talbot modes inherent in thin slice models. Existence of envelope curves
bounding all possible pattern formation thresholds is predicted. The importance
of envelope curves and their interaction with threshold curves is illustrated
by experimental observation of a sudden transition between length scales as
mirror displacement is varied. | [ 0, 1, 0, 0, 0, 0 ] |
Title: A reinvestigation of the giant Rashba-split states on Bi-covered Si(111),
Abstract: We study the electronic and spin structures of the giant Rashba-split surface
states of the Bi/Si(111)-($\sqrt{3} \times \sqrt{3}$)R30 trimer phase by means
of spin- and angle-resolved photoelectron spectroscopy (spin-ARPES). Supported
by tight-binding calculations of the surface state dispersion and spin
orientation, our findings show that the spin experiences a vortex-like
structure around the $\bar{\Gamma}$-point of the surface Brillouin zone - in
accordance with the standard Rashba model. Moreover, we find no evidence of a
spin vortex around the $\bar{\mathrm{K}}$-point in the hexagonal Brillouin
zone, and thus no peculiar Rashba split around this point, something that has
been suggested by previous works. On the contrary, our results show that
the spin structure around $\bar{\mathrm{K}}$ can be fully understood by taking
into account the symmetry of the Brillouin zone and the intersection of spin
vortices centered around the $\bar{\Gamma}$-points in neighboring Brillouin
zones. As a result, the spin structure is consistently explained within the
standard framework of the Rashba model although the spin-polarized surface
states experience a more complex dispersion compared to free-electron like
parabolic states. | [ 0, 1, 0, 0, 0, 0 ] |
Title: Performance Evaluation of Spectrum Mobility in Multi-homed Mobile IPv6 Cognitive Radio Cellular Networks,
Abstract: Technological developments alongside VLSI achievements enable mobile devices
to be equipped with multiple radio interfaces, which is known as multihoming. On
the other hand, the combination of various wireless access technologies, known
as Next Generation Wireless Networks (NGWNs), has been introduced to provide
continuous connection to mobile devices at any time and location. Cognitive
radio networks, as a part of NGWNs, have emerged to overcome spectrum inefficiency and
spectrum scarcity issues. In order to provide seamless and ubiquitous
connection across heterogeneous wireless access networks in the context of
cognitive radio networks, utilizing Mobile IPv6 is beneficial. In this paper, a
mobile device equipped with two radio interfaces is considered in order to
evaluate performance of spectrum handover in terms of handover latency. The
analytical results show that the proposed model can achieve better performance
compared to other related mobility management protocols mainly in terms of
handover latency. | [ 1, 0, 0, 0, 0, 0 ] |
Title: Tunable Quantum Criticality and Super-ballistic Transport in a `Charge' Kondo Circuit,
Abstract: Quantum phase transitions are ubiquitous in many exotic behaviors of
strongly-correlated materials. However the microscopic complexity impedes their
quantitative understanding. Here, we observe thoroughly and comprehend the rich
strongly-correlated physics in two profoundly dissimilar regimes of quantum
criticality. With a circuit implementing a quantum simulator for the
three-channel Kondo model, we reveal the universal scalings toward different
low-temperature fixed points and along the multiple crossovers from quantum
criticality. Notably, an unanticipated violation of the maximum conductance for
ballistic free electrons is uncovered. The present charge pseudospin
implementation of a Kondo impurity opens access to a broad variety of
strongly-correlated phenomena. | [ 0, 1, 0, 0, 0, 0 ] |
Title: Exploring Cosmic Origins with CORE: Survey requirements and mission design,
Abstract: Future observations of cosmic microwave background (CMB) polarisation have
the potential to answer some of the most fundamental questions of modern
physics and cosmology. In this paper, we list the requirements for a future CMB
polarisation survey addressing these scientific objectives, and discuss the
design drivers of the CORE space mission proposed to ESA in answer to the "M5"
call for a medium-sized mission. The rationale and options, and the
methodologies used to assess the mission's performance, are of interest to
other future CMB mission design studies. CORE is designed as a near-ultimate
CMB polarisation mission which, for optimal complementarity with ground-based
observations, will perform the observations that are known to be essential to
CMB polarisation science and cannot be obtained by any other means than a
dedicated space mission. | [ 0, 1, 0, 0, 0, 0 ] |
Title: Influence of material parameters on the performance of niobium based superconducting RF cavities,
Abstract: A detailed thermal analysis of a Niobium (Nb) based superconducting radio
frequency (SRF) cavity in a liquid helium bath is presented, taking into
account the temperature and magnetic field dependence of the surface resistance
and thermal conductivity in the superconducting state of the starting Nb
material (for SRF cavity fabrication) with different impurity levels. The drop
in SRF cavity quality factor (Q_0) in the high acceleration gradient regime
(before ultimate breakdown of the SRF cavity) is studied in detail. It is
argued that the high field Q_0-drop in SRF cavity is considerably influenced by
the intrinsic material parameters such as electrical conductivity, and thermal
diffusivity. The detailed analysis also shows that the current specification on
the purity of niobium material for SRF cavity fabrication is somewhat over
specified. Niobium material with a relatively low purity can very well serve
the purpose for the accelerators dedicated for spallation neutron source (SNS)
or accelerator driven sub-critical system (ADSS) applications, where the
required accelerating gradient is typically up to 20 MV/m. This information
will have important implications for the cost reduction of superconducting
technology-based particle accelerators for various applications. | [ 0, 1, 0, 0, 0, 0 ] |
Title: Shear Viscosity of Uniform Fermi Gases with Population Imbalance,
Abstract: The shear viscosity plays an important role in studies of transport phenomena
in ultracold Fermi gases and serves as a diagnostic of various microscopic
theories. Due to the complicated phase structures of population-imbalanced
Fermi gases, past works mainly focus on unpolarized Fermi gases. Here we
investigate the shear viscosity of homogeneous, population-imbalanced Fermi
gases with tunable attractive interactions at finite temperatures by using a
pairing fluctuation theory for thermodynamical quantities and a gauge-invariant
linear response theory for transport coefficients. In the unitary and BEC
regimes, the shear viscosity increases with the polarization because the excess
majority fermions cause gapless excitations acting like a normal fluid. In the
weak BEC regime the excess fermions also suppress the noncondensed pairs at low
polarization, and we find a minimum in the ratio of shear viscosity to
relaxation time. To help constrain the relaxation time from linear response
theory, we derive an exact relation connecting some thermodynamic quantities
and transport coefficients at the mean-field level for unitary Fermi
superfluids with population imbalance. An approximate relation beyond
mean-field theory is proposed and only exhibits mild deviations from numerical
results. | [ 0, 1, 0, 0, 0, 0 ] |
Title: Equivalence of Intuitionistic Inductive Definitions and Intuitionistic Cyclic Proofs under Arithmetic,
Abstract: A cyclic proof system gives us another way of representing inductive
definitions and efficient proof search. In 2011 Brotherston and Simpson
conjectured the equivalence between the provability of the classical cyclic
proof system and that of the classical system of Martin-Lof's inductive
definitions.
This paper studies the conjecture for intuitionistic logic.
This paper first points out that the countermodel of the FOSSACS 2017 paper by
the same authors shows the conjecture for intuitionistic logic is false in
general. Then this paper shows the conjecture for intuitionistic logic is true
under arithmetic, namely, the provability of the intuitionistic cyclic proof
system is the same as that of the intuitionistic system of Martin-Lof's
inductive definitions when both systems contain Heyting arithmetic HA.
For this purpose, this paper also shows that HA proves Podelski-Rybalchenko
theorem for induction and Kleene-Brouwer theorem for induction. These results
immediately give another proof to the conjecture under arithmetic for classical
logic shown in the LICS 2017 paper by the same authors. | [ 1, 0, 1, 0, 0, 0 ] |
Title: Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent,
Abstract: In this paper, we revisit the recently established theoretical guarantees for
the convergence of the Langevin Monte Carlo algorithm of sampling from a smooth
and (strongly) log-concave density. We improve the existing results when the
convergence is measured in the Wasserstein distance and provide further
insights on the very tight relations between, on the one hand, the Langevin
Monte Carlo for sampling and, on the other hand, the gradient descent for
optimization. Finally, we also establish guarantees for the convergence of a
version of the Langevin Monte Carlo algorithm that is based on noisy
evaluations of the gradient. | [ 0, 0, 1, 1, 0, 0 ] |
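For reference, the unadjusted Langevin Monte Carlo iteration analyzed in this line of work is $x_{k+1} = x_k - h\nabla f(x_k) + \sqrt{2h}\,\xi_k$ with $\xi_k$ standard Gaussian, where $f$ is the negative log-density. The snippet below runs it on a simple Gaussian target; the step size and iteration count are arbitrary choices made only for illustration, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: log-concave density proportional to exp(-f(x)) with
# f(x) = 0.5 * x^T Sigma_inv x  (a zero-mean Gaussian).
Sigma_inv = np.array([[2.0, 0.5], [0.5, 1.0]])

def grad_f(x):
    return Sigma_inv @ x

def langevin_monte_carlo(x0, step, n_iter):
    """Unadjusted Langevin algorithm: x <- x - h*grad f(x) + sqrt(2h)*noise."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_iter):
        x = xs[-1]
        noise = rng.normal(size=x.shape)
        xs.append(x - step * grad_f(x) + np.sqrt(2.0 * step) * noise)
    return np.array(xs)

samples = langevin_monte_carlo(np.zeros(2), step=0.05, n_iter=20000)[5000:]
print("empirical covariance:\n", np.cov(samples.T))
print("target covariance:\n", np.linalg.inv(Sigma_inv))
```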
Title: Probing Spatial Locality in Ionic Liquids with the Grand Canonical Adaptive Resolution Molecular Dynamics Technique,
Abstract: We employ the Grand Canonical Adaptive Resolution Molecular Dynamics
Technique (GC-AdResS) to test the spatial locality of the 1-ethyl 3-methyl
imidazolium chloride liquid. In GC-AdResS atomistic details are kept only in an
open sub-region of the system while the environment is treated at
coarse-grained level, thus if spatial quantities calculated in such a
sub-region agree with the equivalent quantities calculated in a full atomistic
simulation then the atomistic degrees of freedom outside the sub-region play a
negligible role. The size of the sub-region fixes the degree of spatial
locality of a certain quantity. We show that even for sub-regions whose radius
corresponds to the size of a few molecules, spatial properties are reasonably
reproduced, thus suggesting a higher degree of spatial locality, a hypothesis
put forward also by other researchers and that seems to play an important
role for the characterization of fundamental properties of a large class of
ionic liquids. | [ 0, 1, 0, 0, 0, 0 ] |
Title: Towards automation of data quality system for CERN CMS experiment,
Abstract: Daily operation of a large-scale experiment is a challenging task,
particularly from perspectives of routine monitoring of quality for data being
taken. We describe an approach that uses Machine Learning for the automated
system to monitor data quality, which is based on partial use of data qualified
manually by detector experts. The system automatically classifies marginal
cases: both of good an bad data, and use human expert decision to classify
remaining "grey area" cases.
This study uses collision data collected by the CMS experiment at the LHC in
2010. We demonstrate that the proposed workflow is able to automatically process at
least 20% of samples without noticeable degradation of the result. | [ 1, 0, 0, 0, 0, 0 ] |
Title: Solving a non-linear model of HIV infection for CD4+T cells by combining Laplace transformation and Homotopy analysis,
Abstract: The aim of this paper is to find the approximate solution of HIV infection
model of CD4+T cells. For this reason, the homotopy analysis transform method
(HATM) is applied. The presented method is a combination of the traditional homotopy
analysis method (HAM) and the Laplace transformation. The convergence of the
presented method is discussed by means of a theorem which shows the
capabilities of the method. The numerical results are shown for different numbers of
iterations. Also, the regions of convergence are demonstrated by plotting
several h-curves. Furthermore, in order to show the efficiency and accuracy of the
method, the residual errors for different iterations are presented. | [ 0, 0, 0, 0, 1, 0 ] |
Title: On Store Languages of Language Acceptors,
Abstract: It is well known that the "store language" of every pushdown automaton -- the
set of store configurations (state and stack contents) that can appear as an
intermediate step in accepting computations -- is a regular language. Here many
models of language acceptors with various data structures are examined, along
with a study of their store languages. For each model, an attempt is made to
find the simplest model that accepts their store languages. Some connections
between store languages of one-way and two-way machines are demonstrated
in general, as are connections between nondeterministic and deterministic
machines. A nice application of these store language results is also presented,
showing a general technique for proving families accepted by many deterministic
models are closed under right quotient with regular languages, resolving some
open questions (and significantly simplifying proofs for others that are known)
in the literature. Lower bounds on the space complexity for recognizing store
languages for the languages to be non-regular are obtained. | [ 1, 0, 0, 0, 0, 0 ] |
Title: Exchange constants in molecule-based magnets derived from density functional methods,
Abstract: Cu(pyz)(NO3)2 is a quasi one-dimensional molecular antiferromagnet that
exhibits three dimensional long-range magnetic order below TN=110 mK due to the
presence of weak inter-chain exchange couplings. Here we compare calculations
of the three largest exchange coupling constants in this system using two
techniques based on plane-wave basis-set density functional theory: (i) a dimer
fragment approach and (ii) an approach using periodic boundary conditions. The
calculated values of the large intrachain coupling constant are found to be
consistent with experiment, showing the expected level of variation between
different techniques and implementations. However, the interchain coupling
constants are found to be smaller than the current limits on the resolution of
the calculations. This is due to the computational limitations on convergence
of absolute energy differences with respect to basis set, which are larger than
the inter-chain couplings themselves. Our results imply that errors resulting
from such limitations are inherent in the evaluation of small exchange
constants in systems of this sort, and that many previously reported results
should therefore be treated with caution. | [ 0, 1, 0, 0, 0, 0 ] |
Title: An Extended Relevance Model for Session Search,
Abstract: The session search task aims at best serving the user's information need
given her previous search behavior during the session. We propose an extended
relevance model that captures the user's dynamic information need in the
session. Our relevance modelling approach is directly driven by the user's
query reformulation (change) decisions and the estimate of how much the user's
search behavior affects such decisions. Overall, we demonstrate that the
proposed approach significantly boosts session search performance. | [ 1, 0, 0, 0, 0, 0 ] |
Title: Hierarchical Graph Representation Learning with Differentiable Pooling,
Abstract: Recently, graph neural networks (GNNs) have revolutionized the field of graph
representation learning through effectively learned node embeddings, and
achieved state-of-the-art results in tasks such as node classification and link
prediction. However, current GNN methods are inherently flat and do not learn
hierarchical representations of graphs---a limitation that is especially
problematic for the task of graph classification, where the goal is to predict
the label associated with an entire graph. Here we propose DiffPool, a
differentiable graph pooling module that can generate hierarchical
representations of graphs and can be combined with various graph neural network
architectures in an end-to-end fashion. DiffPool learns a differentiable soft
cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a
set of clusters, which then form the coarsened input for the next GNN layer.
Our experimental results show that combining existing GNN methods with DiffPool
yields an average improvement of 5-10% accuracy on graph classification
benchmarks, compared to all existing pooling approaches, achieving a new
state-of-the-art on four out of five benchmark data sets. | [ 1, 0, 0, 1, 0, 0 ] |
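The core DiffPool operation described in the abstract is easy to state: a GNN produces a soft cluster-assignment matrix $S$, and the pooled graph is given by $X' = S^T Z$ and $A' = S^T A S$. The NumPy sketch below implements just that single pooling step with random weights standing in for trained GNN layers, so it illustrates the shapes and the math rather than the full published model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def gcn_layer(A, X, W):
    """One graph-convolution step: row-normalized propagation A_hat X W with ReLU."""
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)
    return np.maximum(A_hat @ X @ W, 0.0)

def diffpool(A, X, n_clusters, d_out):
    n, d = X.shape
    Z = gcn_layer(A, X, rng.normal(size=(d, d_out)))                 # embedding GNN
    S = softmax(gcn_layer(A, X, rng.normal(size=(d, n_clusters))))   # assignment GNN
    X_pooled = S.T @ Z                                               # (clusters, d_out)
    A_pooled = S.T @ A @ S                                           # coarsened adjacency
    return A_pooled, X_pooled

# Toy graph with 6 nodes and 4-dimensional features, pooled to 2 clusters.
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T
X = rng.normal(size=(6, 4))
A2, X2 = diffpool(A, X, n_clusters=2, d_out=8)
print(A2.shape, X2.shape)   # (2, 2) (2, 8)
```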
Title: Ranking and Cooperation in Real-World Complex Networks,
Abstract: People participate and are active in online social networks, and thus a tremendous
amount of network data is generated: data regarding their interactions,
interests and activities. Some people search for specific questions through
online social platforms such as forums, and they may receive a suitable response
from experts. To categorize people as experts and to evaluate their willingness
to cooperate, one can use ranking and cooperation problems from complex
networks. In this paper, we investigate classical ranking algorithms alongside
the prisoner's dilemma game to simulate cooperation and defection of agents. We
compute the correlation between node rank and node cooperativity via three
strategies. The first strategy operates at the node level, whereas the other
strategies are calculated over node neighborhoods. We find
correlations between specific ranking algorithms and the cooperativity of nodes. Our
observations may be applied to estimate the propensity of people (experts) to
cooperate in the future based on their ranking values. | [ 1, 0, 0, 0, 0, 0 ] |
Title: Remarks on defective Fano manifolds,
Abstract: This note continues our previous work on special secant defective
(specifically, conic connected and local quadratic entry locus) and dual
defective manifolds. These are now well understood, except for the prime Fano
ones. Here we add a few remarks on this case, completing the results in our
papers \cite{LQEL I}, \cite{LQEL II}, \cite{CC}, \cite{HC} and \cite{DD}; see
also the recent book \cite{Russo}. | [ 0, 0, 1, 0, 0, 0 ] |
Title: Precision of the ENDGame: Mixed-precision arithmetic in the iterative solver of the Unified Model,
Abstract: The Met Office's weather and climate simulation code, the Unified Model, is
used for both operational Numerical Weather Prediction and Climate modelling.
The computational performance of the model running on parallel supercomputers
is a key consideration. A Krylov sub-space solver is employed to solve the
equations of the dynamical core of the model, known as ENDGame. These describe
the evolution of the Earth's atmosphere. Typically, 64-bit precision is used
throughout weather and climate applications. This work presents a
mixed-precision implementation of the solver, the beneficial effect on run-time
and the impact on solver convergence. The complex interplay of errors arising
from accumulated round-off in floating-point arithmetic and other numerical
effects is discussed. A careful analysis is required; however, the
mixed-precision solver is now employed in the operational forecast to satisfy
run-time constraints without compromising the accuracy of the solution. | [ 1, 0, 0, 0, 0, 0 ] |
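The idea of doing the bulk of the arithmetic in reduced precision while retaining a higher-precision safeguard can be illustrated with a classic mixed-precision iterative-refinement loop for a linear system, sketched below in NumPy; this is a generic illustration of the trade-off, not the ENDGame Krylov solver itself, and the test matrix is an arbitrary well-conditioned example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = np.eye(n) + 0.01 * rng.normal(size=(n, n))   # well-conditioned test matrix
b = rng.normal(size=n)

# Do the expensive solve in single precision (the "cheap" work) ...
A32, b32 = A.astype(np.float32), b.astype(np.float32)
x = np.linalg.solve(A32, b32).astype(np.float64)

# ... then refine the solution using double-precision residuals.
for it in range(5):
    r = b - A @ x                                 # residual computed in float64
    dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += dx
    print(f"iteration {it}: residual norm = {np.linalg.norm(b - A @ x):.2e}")
```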
Title: Intra-Cluster Autonomous Coverage Optimization For Dense LTE-A Networks,
Abstract: Self Organizing Networks (SONs) are considered as vital deployments towards
upcoming dense cellular networks. From a mobile carrier point of view,
continuous coverage optimization is critical for better user perceptions. The
majority of SON contributions introduce novel algorithms that optimize specific
performance metrics. However, they require extensive processing delays and
advanced knowledge of network statistics that may not be available. In this
work, a progressive Autonomous Coverage Optimization (ACO) method combined with
adaptive cell dimensioning is proposed. The proposed method emphasizes the fact
that the effective cell coverage varies with the actual user distribution. The ACO
algorithm builds a generic Space-Time virtual coverage map per cell to detect
coverage holes in addition to limited or extended coverage conditions.
Progressive levels of optimization are followed to timely resolve coverage
issues with maintaining optimization stability. Proposed ACO is verified under
both simulations and practical deployment in a pilot cluster for a worldwide
mobile carrier. Key Performance Indicators show that proposed ACO method
significantly enhances system coverage and performance. | [ 1, 0, 0, 0, 0, 0 ] |
Title: Two-dimensional topological nodal line semimetal in layered $X_2Y$ ($X$ = Ca, Sr, and Ba; $Y$ = As, Sb, and Bi),
Abstract: In topological semimetals the Dirac points can form zero-dimensional and
one-dimensional manifolds, as predicted for Dirac/Weyl semimetals and
topological nodal line semimetals, respectively. Here, based on
first-principles calculations, we predict a topological nodal line semimetal
phase in the two-dimensional compounds $X_2Y$ ($X$=Ca, Sr, and Ba; $Y$=As, Sb,
and Bi) in the absence of spin-orbit coupling (SOC) with a band inversion at
the M point. The mirror symmetry as well as the electrostatic interaction, that
can be engineered via strain, are responsible for the nontrivial phase. In
addition, we demonstrate that the exotic edge states can be also obtained
without and with SOC although a tiny gap appears at the nodal line for the bulk
states when SOC is included. | [ 0, 1, 0, 0, 0, 0 ] |
Title: A Real-Valued Modal Logic,
Abstract: A many-valued modal logic is introduced that combines the usual Kripke frame
semantics of the modal logic K with connectives interpreted locally at worlds
by lattice and group operations over the real numbers. A labelled tableau
system is provided and a coNEXPTIME upper bound obtained for checking validity
in the logic. Focussing on the modal-multiplicative fragment, the labelled
tableau system is then used to establish completeness for a sequent calculus
that admits cut-elimination and an axiom system that extends the multiplicative
fragment of Abelian logic. | [
1,
0,
0,
0,
0,
0
] |
Title: Embedded tori with prescribed mean curvature,
Abstract: We construct a sequence of compact, oriented, embedded, two-dimensional
surfaces of genus one in Euclidean 3-space with prescribed, almost constant,
mean curvature of the form $H(X)=1+A|X|^{-\gamma}$ for $|X|$ large, when
$A<0$ and $\gamma\in(0,2)$. Such surfaces are close to sections of unduloids
with small necksize, folded along circumferences centered at the origin and
with larger and larger radii. The construction involves a deep study of the
corresponding Jacobi operators, an application of the Lyapunov-Schmidt
reduction method and a variational argument. | [
0,
0,
1,
0,
0,
0
] |
Title: On the reducibility of induced representations for classical p-adic groups and related affine Hecke algebras,
Abstract: Let $\pi $ be an irreducible smooth complex representation of a general
linear $p$-adic group and let $\sigma $ be an irreducible complex supercuspidal
representation of a classical $p$-adic group of a given type, so that
$\pi\otimes\sigma $ is a representation of a standard Levi subgroup of a
$p$-adic classical group of higher rank. We show that the reducibility of the
representation of the appropriate $p$-adic classical group obtained by
(normalized) parabolic induction from $\pi\otimes\sigma $ does not depend on
$\sigma $, if $\sigma $ is "separated" from the supercuspidal support of $\pi
$. (Here, "separated" means that, for each factor $\rho $ of a representation
in the supercuspidal support of $\pi $, the representation parabolically
induced from $\rho\otimes\sigma $ is irreducible.) This was conjectured by E.
Lapid and M. Tadić. (In addition, they proved, using results of C. Jantzen,
that this induced representation is always reducible if the supercuspidal
support is not separated.)
More generally, we study, for a given set $I$ of inertial orbits of
supercuspidal representations of $p$-adic general linear groups, the category
$\mathcal{C}_{I,\sigma}$ of smooth complex finitely generated representations of
classical $p$-adic groups of fixed type, but arbitrary rank, and supercuspidal
support given by $\sigma $ and $I$, show that this category is equivalent to a
category of finitely generated right modules over a direct sum of tensor
products of extended affine Hecke algebras of type $A$, $B$ and $D$ and
establish functoriality properties, relating categories with disjoint $I$'s. In
this way, we extend results of C. Jantzen who proved a bijection between
irreducible representations corresponding to these categories. The proof of the
above reducibility result is then based on Hecke algebra arguments, using
Kato's exotic geometry. | [
0,
0,
1,
0,
0,
0
] |
Title: On derivations with respect to finite sets of smooth functions,
Abstract: The purpose of this paper is to show that functions that derivate the
two-variable product function and one of the exponential, trigonometric or
hyperbolic functions are also standard derivations. The more general problem
considered is to describe finite sets of differentiable functions such that
derivations with respect to this set are automatically standard derivations. | [
0,
0,
1,
0,
0,
0
] |
Title: A new magnetic phase in the nickelate perovskite TlNiO$_3$,
Abstract: The RNiO$_3$ perovskites are known to order antiferromagnetically below a
material-dependent Néel temperature $T_\text{N}$. We report experimental
evidence indicating the existence of a second magnetically-ordered phase in
TlNiO$_3$ above $T_\text{N} = 104$ K, obtained using nuclear magnetic resonance
and muon spin rotation spectroscopy. The new phase, which persists up to a
temperature $T_\text{N}^* = 202$ K, is suppressed by the application of an
external magnetic field of approximately 1 T. It is not yet known if such a
phase also exists in other perovskite nickelates. | [
0,
1,
0,
0,
0,
0
] |
Title: On the Inverse of Forward Adjacency Matrix,
Abstract: During routine state space circuit analysis of an arbitrarily connected set
of nodes representing a lossless LC network, a matrix was formed that was
observed to implicitly capture connectivity of the nodes in a graph similar to
the conventional incidence matrix, but in a slightly different manner. This
matrix has only 0, 1 or -1 as its elements. A sense of direction (of the graph
formed by the nodes) is inherently encoded in the matrix because of the
presence of -1. It differs from the incidence matrix in that it leaves out the
datum node. Calling this matrix the forward adjacency matrix, it
was found that its inverse also displays useful and interesting physical
properties when a specific style of node-indexing is adopted for the nodes in
the graph. The graph considered is connected but does not have any closed
loop/cycle (corresponding to closed loop of inductors in a circuit) as with its
presence the matrix is not invertible. Incidentally, by definition the graph
being considered is a tree. The properties of the forward adjacency matrix and
its inverse, along with rigorous proof, are presented. | [
1,
0,
0,
0,
0,
0
] |
Title: Constructing and Understanding New and Old Scales on Slide Rules,
Abstract: We discuss the practical problems arising when constructing any (new or old)
scales on slide rules, i.e., realizing the theory in practice. This might
help anyone in planning and realizing (mainly the magnitude and labeling of)
new scales on slide rules in the future. In Sections 1-7 we deal with technical
problems, Section 8 is devoted to the relationship among different scales. In
the last Section we provide an interesting fact as a surprise to those readers
who wish to skip this long article. | [
0,
0,
1,
0,
0,
0
] |
Title: Non-compact subsets of the Zariski space of an integral domain,
Abstract: Let $V$ be a minimal valuation overring of an integral domain $D$ and let
$\mathrm{Zar}(D)$ be the Zariski space of the valuation overrings of $D$.
Starting from a result in the theory of semistar operations, we prove a
criterion under which the set $\mathrm{Zar}(D)\setminus\{V\}$ is not compact.
We then use it to prove that, in many cases, $\mathrm{Zar}(D)$ is not a
Noetherian space, and apply it to the study of the spaces of Kronecker function
rings and of Noetherian overrings. | [
0,
0,
1,
0,
0,
0
] |
Title: Finite temperature Green's function approach for excited state and thermodynamic properties of cool to warm dense matter,
Abstract: We present a finite-temperature extension of the retarded cumulant Green's
function for calculations of excited-state and thermodynamic properties of
electronic systems. The method incorporates a cumulant to leading order in the
screened Coulomb interaction $W$ and improves excited state properties compared
to the $GW$ approximation of many-body perturbation theory. Results for the
homogeneous electron gas are presented for a wide range of densities and
temperatures, from the cool to the warm dense matter regime, which reveal several
hitherto unexpected properties. For example, correlation effects remain strong
at high $T$ while the exchange-correlation energy becomes small. In addition,
the spectral function broadens and damping increases with temperature, blurring
the usual quasi-particle picture. Similarly Compton scattering exhibits
substantial many-body corrections that persist at normal densities and
intermediate $T$. Results for exchange-correlation energies and potentials are
in good agreement with existing theories and finite-temperature DFT
functionals. | [
0,
1,
0,
0,
0,
0
] |
Title: Geometric Embedding of Path and Cycle Graphs in Pseudo-convex Polygons,
Abstract: Given a graph $ G $ with $ n $ vertices and a set $ S $ of $ n $ points in
the plane, a point-set embedding of $ G $ on $ S $ is a planar drawing such
that each vertex of $ G $ is mapped to a distinct point of $ S $. A
straight-line point-set embedding is a point-set embedding with no edge bends
or curves. The point-set embeddability problem is NP-complete, even when $ G $
is $ 2 $-connected and $ 2 $-outerplanar. It has been solved polynomially only
for a few classes of planar graphs. Suppose that $ S $ is the set of vertices
of a simple polygon. A straight-line polygon embedding of a graph is a
straight-line point-set embedding of the graph onto the vertices of the polygon
with no crossing between edges of graph and the edges of polygon. In this
paper, we present $ O(n) $-time algorithms for polygon embedding of path and
cycle graphs in simple convex polygons, and algorithms with the same running time
for polygon embedding of path and cycle graphs in a large class of simple polygons, where $n$
is the number of vertices of the polygon. | [
1,
0,
0,
0,
0,
0
] |
Title: Simulating Brain Signals: Creating Synthetic EEG Data via Neural-Based Generative Models for Improved SSVEP Classification,
Abstract: Despite significant recent progress in the area of Brain-Computer Interface,
there are numerous shortcomings associated with collecting
Electroencephalography (EEG) signals in real-world environments. These include,
but are not limited to, subject and session data variance, long and arduous
calibration processes and performance generalisation issues across
different subjects or sessions. This implies that many downstream applications,
including Steady State Visual Evoked Potential (SSVEP) based classification
systems, can suffer from a shortage of reliable data. Generating meaningful and
realistic synthetic data can therefore be of significant value in circumventing
this problem. We explore the use of modern neural-based generative models
trained on a limited quantity of EEG data collected from different subjects to
generate supplementary synthetic EEG signal vectors subsequently utilised to
train an SSVEP classifier. Extensive experimental analyses demonstrate the
efficacy of our generated data, leading to significant improvements across a
variety of evaluations, with the crucial task of cross-subject generalisation
improving by over 35% with the use of synthetic data. | [
0,
0,
0,
0,
1,
0
] |
Title: An FPTAS for the parametric knapsack problem,
Abstract: In this paper, we investigate the parametric knapsack problem, in which the
item profits are affine functions depending on a real-valued parameter. The aim
is to provide a solution for all values of the parameter. It is well-known that
any exact algorithm for the problem may need to output an exponential number of
knapsack solutions. We present a fully polynomial-time approximation scheme
(FPTAS) for the problem that, for any desired precision $\varepsilon \in
(0,1)$, computes $(1-\varepsilon)$-approximate solutions for all values of the
parameter. This is the first FPTAS for the parametric knapsack problem that
does not require the slopes and intercepts of the affine functions to be
non-negative but works for arbitrary integral values. Our FPTAS outputs
$\mathcal{O}(\frac{n^2}{\varepsilon})$ knapsack solutions and runs in strongly
polynomial-time $\mathcal{O}(\frac{n^4}{\varepsilon^2})$. Even for the special
case of positive input data, this is the first FPTAS with a strongly polynomial
running time. We also show that this time bound can be further improved to
$\mathcal{O}(\frac{n^2}{\varepsilon} \cdot A(n,\varepsilon))$, where
$A(n,\varepsilon)$ denotes the running time of any FPTAS for the traditional
(non-parametric) knapsack problem. | [
1,
0,
1,
0,
0,
0
] |
Title: Saliency Benchmarking Made Easy: Separating Models, Maps and Metrics,
Abstract: Dozens of new models on fixation prediction are published every year and
compared on open benchmarks such as MIT300 and LSUN. However, progress in the
field can be difficult to judge because models are compared using a variety of
inconsistent metrics. Here we show that no single saliency map can perform well
under all metrics. Instead, we propose a principled approach to solve the
benchmarking problem by separating the notions of saliency models, maps and
metrics. Inspired by Bayesian decision theory, we define a saliency model to be
a probabilistic model of fixation density prediction and a saliency map to be a
metric-specific prediction derived from the model density which maximizes the
expected performance on that metric given the model density. We derive these
optimal saliency maps for the most commonly used saliency metrics (AUC, sAUC,
NSS, CC, SIM, KL-Div) and show that they can be computed analytically or
approximated with high precision. We show that this leads to consistent
rankings in all metrics and avoids the penalties of using one saliency map for
all metrics. Our method allows researchers to have their model compete on many
different metrics with state-of-the-art in those metrics: "good" models will
perform well in all metrics. | [
1,
0,
0,
1,
0,
0
] |
Title: Distributed Framework for Optimal Demand Distribution in Self-Balancing Microgrid,
Abstract: This study focusses on self-balancing microgrids that smartly utilize the
available power capacity of the grid and prevent overdrawing it. A distributed
framework for automated distribution of optimal power demand is proposed, where
all buildings in a microgrid dynamically and simultaneously adjust their own
power consumption to reach their individual optimal power demands while
cooperatively striving to keep the overall grid stable. Emphasis has been
given to aspects of the algorithm that yield a lower convergence time, which is
demonstrated through quantitative and qualitative analysis of simulation
results. | [
1,
0,
0,
0,
0,
0
] |
Title: Fast, precise, and widely tunable frequency control of an optical parametric oscillator referenced to a frequency comb,
Abstract: Optical frequency combs (OFC) provide a convenient reference for the
frequency stabilization of continuous-wave lasers. We demonstrate a frequency
control method relying on tracking over a wide range and stabilizing the beat
note between the laser and the OFC. The approach combines fast frequency ramps
on a millisecond timescale in the entire mode-hop free tuning range of the
laser and precise stabilization to single frequencies. We apply it to a
commercially available optical parametric oscillator (OPO) and demonstrate
tuning over more than 60 GHz with a ramping speed up to 3 GHz/ms. Frequency
ramps spanning 15 GHz are performed in less than 10 ms, with the OPO instantly
relocked to the OFC after the ramp at any desired frequency. The developed
control hardware and software is able to stabilize the OPO to sub-MHz precision
and to perform sequences of fast frequency ramps automatically. | [
0,
1,
0,
0,
0,
0
] |
Title: Compact-Like Operators in Lattice-Normed Spaces,
Abstract: A linear operator $T$ between two lattice-normed spaces is said to be
$p$-compact if, for any $p$-bounded net $x_\alpha$, the net $Tx_\alpha$ has a
$p$-convergent subnet. $p$-Compact operators generalize several known classes
of operators such as compact, weakly compact, order weakly compact,
$AM$-compact operators, etc. Similar to $M$-weakly and $L$-weakly compact
operators, we define $p$-$M$-weakly and $p$-$L$-weakly compact operators and
study some of their properties. We also study $up$-continuous and $up$-compact
operators between lattice-normed vector lattices. | [
0,
0,
1,
0,
0,
0
] |
Title: Near-infrared spectroscopy of 5 ultra-massive galaxies at 1.7 < z < 2.7,
Abstract: We present the results of a pilot near-infrared (NIR) spectroscopic campaign
of five very massive galaxies ($\log(\text{M}_\star/\text{M}_\odot)>11.45$) in
the range of $1.7<z<2.7$. We measure an absorption feature redshift for one
galaxy at $z_\text{spec}=2.000\pm0.006$. For the remaining galaxies, we combine
the photometry with the continuum from the spectra to estimate continuum
redshifts and stellar population properties. We define a continuum redshift
($z_{\rm cont}$) as one in which the redshift is estimated probabilistically
using EAZY from the combination of catalog photometry and the observed
spectrum. We derive the uncertainties on the stellar population synthesis
properties using a Monte Carlo simulation and examine the correlations between
the parameters with and without the use of the spectrum in the modeling of the
spectral energy distributions (SEDs). The spectroscopic constraints confirm the
extreme stellar masses of the galaxies in our sample. We find that three out of
five galaxies are quiescent (star formation rate of $\lesssim 1
M_\odot~{\rm yr}^{-1}$) with low levels of dust obscuration ($A_{\rm V} < 1$), that
one galaxy displays both high levels of star formation and dust obscuration
(${\rm SFR} \approx 300 M_\odot~{\rm yr}^{-1}$, $A_{\rm V} \approx 1.7$~mag),
and that the remaining galaxy has properties that are intermediate between the
quiescent and star-forming populations. | [
0,
1,
0,
0,
0,
0
] |
Title: A hybrid deep learning approach for medical relation extraction,
Abstract: Mining relationships between treatment(s) and medical problem(s) is vital in
the biomedical domain. This helps in various applications, such as decision
support systems, safety surveillance, and new treatment discovery. We propose a
deep learning approach that utilizes both word-level and sentence-level
representations to extract the relationships between treatment and problem.
While deep learning techniques demand a large amount of data for training, we
make use of a rule-based system particularly for relationship classes with
fewer samples. Our final relations are derived by jointly combining the results
from deep learning and rule-based models. Our system achieved a promising
performance on the relationship classes of I2b2 2010 relation extraction task. | [
0,
0,
0,
1,
0,
0
] |
Title: Performance Optimization of Network Coding Based Communication and Reliable Storage in Internet of Things,
Abstract: Internet of Things (IoT) is changing our daily life rapidly. Although new
technologies are emerging everyday and expanding their influence in this
rapidly growing area, many classic theories can still find their places. In
this paper, we study the important applications of the classic network coding
theory in two important components of Internet of things, including the IoT
core network, where data is sensed and transmitted, and the distributed cloud
storage, where the data generated by the IoT core network is stored. First we
propose an adaptive network coding (ANC) scheme in the IoT core network to
improve the transmission efficiency. We demonstrate the efficacy of the scheme
and the performance advantage over existing schemes through simulations.
Next we introduce the optimal storage allocation problem in the network coding
based distributed cloud storage, which aims at searching for the most reliable
allocation that distributes the $n$ data components into $N$ data centers,
given the failure probability $p$ of each data center. Then we propose a
polynomial-time optimal storage allocation (OSA) scheme to solve the problem.
Both the theoretical analysis and the simulation results show that the storage
reliability could be greatly improved by the OSA scheme. | [
1,
0,
0,
0,
0,
0
] |
Title: Time-dependent probability density functions and information geometry in stochastic logistic and Gompertz models,
Abstract: A probabilistic description is essential for understanding growth processes
far from equilibrium. In this paper, we compute time-dependent Probability
Density Functions (PDFs) in order to investigate stochastic logistic and
Gompertz models, which are two of the most popular growth models. We consider
different types of short-correlated internal (multiplicative) and external
(additive) stochastic noises and compare the time-dependent PDFs in the two
models, elucidating the effects of the additive and multiplicative noises on
the form of PDFs. We demonstrate an interesting transition from a unimodal to a
bimodal PDF as the multiplicative noise increases for a fixed value of the
additive noise. A much weaker (leaky) attractor in the Gompertz model leads to
a significant (singular) growth of the population of a very small size. We
point out the limitation of using stationary PDFs, mean value and variance in
understanding statistical properties of the growth far from equilibrium,
highlighting the importance of time-dependent PDFs. We further compare these
two models from the perspective of information change that occurs during the
growth process. Specifically, we define an infinitesimal distance at any time
by comparing two PDFs at times infinitesimally apart and sum these distances in
time. The total distance along the trajectory quantifies the total number of
different states that the system undergoes in time, and is called the
information length. We show that the time-evolution of the two models become
more similar when measured in units of the information length and point out the
merit of using the information length in unifying and understanding the dynamic
evolution of different growth processes. | [
0,
1,
0,
0,
0,
0
] |
Title: A Local-Search Algorithm for Steiner Forest,
Abstract: In the Steiner Forest problem, we are given a graph and a collection of
source-sink pairs, and the goal is to find a subgraph of minimum total length
such that all pairs are connected. The problem is APX-Hard and can be
2-approximated by, e.g., the elegant primal-dual algorithm of Agrawal, Klein,
and Ravi from 1995.
We give a local-search-based constant-factor approximation for the problem.
Local search brings in new techniques to an area that has for long not seen any
improvements and might be a step towards a combinatorial algorithm for the more
general survivable network design problem. Moreover, local search was an
essential tool to tackle the dynamic MST/Steiner Tree problem, whereas dynamic
Steiner Forest is still wide open.
It is easy to see that any constant factor local search algorithm requires
steps that add/drop many edges together. We propose natural local moves which,
at each step, either (a) add a shortest path in the current graph and then drop
a bunch of inessential edges, or (b) add a set of edges to the current
solution. This second type of moves is motivated by the potential function we
use to measure progress, combining the cost of the solution with a penalty for
each connected component. Our carefully-chosen local moves and potential
function work in tandem to eliminate bad local minima that arise when using
more traditional local moves. | [
1,
0,
0,
0,
0,
0
] |
Title: Fundamental bounds on MIMO antennas,
Abstract: Antenna current optimization is often used to analyze the optimal performance
of antennas. Antenna performance can be quantified in terms of, e.g., minimum Q-factor
and efficiency. The performance of MIMO antennas is more involved and, in
general, a single parameter is not sufficient to quantify it. Here, the
capacity of an idealized channel is used as the main performance quantity. An
optimization problem in the current distribution for optimal capacity, measured
in spectral efficiency, given a fixed Q-factor and efficiency, is formulated as
a semi-definite optimization problem. A model order reduction based on
characteristic and energy modes is employed to improve the computational
efficiency. The performance bound is illustrated by solving the optimization
problem numerically for rectangular plates and spherical shells. | [
0,
1,
1,
0,
0,
0
] |
Title: DeepProteomics: Protein family classification using Shallow and Deep Networks,
Abstract: The knowledge regarding the function of proteins is necessary as it gives a
clear picture of biological processes. Nevertheless, many protein
sequences have been found and added to the databases but lack functional annotation.
Laboratory experiments take a considerable amount of time to annotate the
sequences. This gives rise to the need to use computational techniques to classify
proteins based on their functions. In our work, we have collected data from
Swiss-Prot containing 40433 proteins grouped into 30 families. We pass
it to recurrent neural network (RNN), long short-term memory (LSTM), and gated
recurrent unit (GRU) models and compare them by applying a trigram representation with deep and
shallow neural networks on the same dataset. Through this approach,
we could achieve a maximum of around 78% accuracy for the classification of
protein families. | [
0,
0,
0,
1,
0,
0
] |
Title: Designing Deterministic Polynomial-Space Algorithms by Color-Coding Multivariate Polynomials,
Abstract: In recent years, several powerful techniques have been developed to design
{\em randomized} polynomial-space parameterized algorithms. In this paper, we
introduce an enhancement of color coding to design deterministic
polynomial-space parameterized algorithms. Our approach aims at reducing the
number of random choices by exploiting the special structure of a solution.
Using our approach, we derive the following deterministic algorithms (see
Introduction for problem definitions).
1. Polynomial-space $O^*(3.86^k)$-time (exponential-space $O^*(3.41^k)$-time)
algorithm for {\sc $k$-Internal Out-Branching}, improving upon the previously
fastest {\em exponential-space} $O^*(5.14^k)$-time algorithm for this problem.
2. Polynomial-space $O^*((2e)^{k+o(k)})$-time (exponential-space
$O^*(4.32^k)$-time) algorithm for {\sc $k$-Colorful Out-Branching} on
arc-colored digraphs and {\sc $k$-Colorful Perfect Matching} on planar
edge-colored graphs.
To obtain our polynomial space algorithms, we show that $(n,k,\alpha
k)$-splitters ($\alpha\ge 1$) and in particular $(n,k)$-perfect hash families
can be enumerated one by one with polynomial delay. | [
1,
0,
0,
0,
0,
0
] |
Title: String Attractors,
Abstract: Let $S$ be a string of length $n$. In this paper we introduce the notion of
\emph{string attractor}: a subset of the string's positions $[1,n]$ such that
every distinct substring of $S$ has an occurrence crossing one of the
attractor's elements. We first show that the minimum attractor's size yields
upper-bounds to the string's repetitiveness as measured by its linguistic
complexity and by the length of its longest repeated substring. We then prove
that all known compressors for repetitive strings induce a string attractor
whose size is bounded by their associated repetitiveness measure, and can
therefore be considered as approximations of the smallest one. Using further
reductions, we derive the approximation ratios of these compressors with
respect to the smallest attractor and solve several open problems related to
the asymptotic relations between repetitiveness measures (in particular,
between the sizes of the Lempel-Ziv factorization, the run-length
Burrows-Wheeler transform, the smallest grammar, and the smallest macro
scheme). These reductions directly provide approximation algorithms for the
smallest string attractor. We then apply string attractors to solve efficiently
a fundamental problem in the field of compressed computation: we present a
universal compressed data structure for text extraction that improves existing
strategies simultaneously for \emph{all} known dictionary compressors and that,
by recent lower bounds, almost matches the optimal running time within the
resulting space. To conclude, we consider generalizations of string attractors
to labeled graphs, show that the attractor problem is NP-complete on trees, and
provide a logarithmic approximation computable in polynomial time. | [
1,
0,
0,
0,
0,
0
] |
Title: Methods for Mapping Forest Disturbance and Degradation from Optical Earth Observation Data: a Review,
Abstract: Purpose of review: This paper presents a review of the current state of the
art in remote sensing based monitoring of forest disturbances and forest
degradation from optical Earth Observation data. Part one comprises an overview
of currently available optical remote sensing sensors, which can be used for
forest disturbance and degradation mapping. Part two reviews the two main
categories of existing approaches: classical image-to-image change detection
and time series analysis. Recent findings: With the launch of the Sentinel-2a
satellite and available Landsat imagery, time series analysis has become the
most promising but also most demanding category of degradation mapping
approaches. Four time series classification methods are distinguished. The
methods are explained and their benefits and drawbacks are discussed. A
separate chapter presents a number of recent forest degradation mapping studies
for two different ecosystems: temperate forests with a geographical focus on
Europe and tropical forests with a geographical focus on Africa. Summary: The
review revealed that a wide variety of methods for the detection of forest
degradation is already available. Today, the main challenge is to transfer
these approaches to high resolution time series data from multiple sensors.
Future research should also focus on the classification of disturbance types
and the development of robust up-scalable methods to enable near real time
disturbance mapping in support of operational reactive measures. | [
1,
0,
0,
0,
0,
0
] |
Title: Testing for Global Network Structure Using Small Subgraph Statistics,
Abstract: We study the problem of testing for community structure in networks using
relations between the observed frequencies of small subgraphs. We propose a
simple test for the existence of communities based only on the frequencies of
three-node subgraphs. The test statistic is shown to be asymptotically normal
under a null assumption of no community structure, and to have power
approaching one under a composite alternative hypothesis of a degree-corrected
stochastic block model. We also derive a version of the test that applies to
multivariate Gaussian data. Our approach achieves near-optimal detection rates
for the presence of community structure, in regimes where the signal-to-noise
is too weak to explicitly estimate the communities themselves, using existing
computationally efficient algorithms. We demonstrate how the method can be
effective for detecting structure in social networks, citation networks for
scientific articles, and correlations of stock returns between companies on the
S\&P 500. | [
0,
0,
1,
1,
0,
0
] |
Title: A new method of correcting radial velocity time series for inhomogeneous convection,
Abstract: Magnetic activity strongly impacts stellar RVs and the search for small
planets. We showed previously that in the solar case it induces RV variations
with an amplitude over the cycle on the order of 8 m/s, with signals on short
and long timescales. The major component is the inhibition of the convective
blueshift due to plages. We explore a new approach to correct for this major
component of stellar radial velocities in the case of solar-type stars. The
convective blueshift depends on line depths; we use this property to develop a
method that will characterize the amplitude of this effect and to correct for
this RV component. We build realistic RV time series corresponding to RVs
computed using different sets of lines, including lines in different depth
ranges. We characterize the performance of the method used to reconstruct the
signal without the convective component and the detection limits derived from
the residuals. We identified a set of lines which, combined with a global set
of lines, allows us to reconstruct the convective component with a good
precision and to correct for it. For the full temporal sampling, the power in
the range 100-500~d significantly decreased, by a factor of 100 for a RV noise
below 30 cm/s. We also studied the impact of noise contributions other than the
photon noise, which lead to uncertainties on the RV computation, as well as the
impact of the temporal sampling. We found that these other sources of noise do
not greatly alter the quality of the correction, although they need a better
noise level to reach a similar performance level. A very good correction of the
convective component can be achieved provided very good RV noise levels are
combined with very good instrumental stability and realistic granulation
noise. Under the conditions considered in this paper, detection limits at 480~d
lower than 1 MEarth could be achieved for RV noise below 15 cm/s. | [
0,
1,
0,
0,
0,
0
] |
Title: Differential Testing for Variational Analyses: Experience from Developing KConfigReader,
Abstract: Differential testing to solve the oracle problem has been applied in many
scenarios where multiple supposedly equivalent implementations exist, such as
multiple implementations of a C compiler. If the multiple systems disagree on
the output for a given test input, we have likely discovered a bug without
ever having to specify what the expected output is. Research on variational
analyses (or variability-aware or family-based analyses) can benefit from
similar ideas. The goal of most variational analyses is to perform an analysis,
such as type checking or model checking, over a large number of configurations
much faster than an existing traditional analysis could by analyzing each
configuration separately. Variational analyses are very suitable for
differential testing, since the existing nonvariational analysis can provide
the oracle for test cases that would otherwise be tedious or difficult to
write. In this experience paper, I report how differential testing has helped
in developing KConfigReader, a tool for translating the Linux kernel's kconfig
model into a propositional formula. Differential testing allows us to quickly
build a large test base and incorporate external tests that avoided many
regressions during development and made KConfigReader likely the most precise
kconfig extraction tool available. | [
1,
0,
0,
0,
0,
0
] |
Title: Simultaneously Learning Neighborship and Projection Matrix for Supervised Dimensionality Reduction,
Abstract: Explicitly or implicitly, most dimensionality reduction methods need to
determine which samples are neighbors and the similarity between the neighbors
in the original high-dimensional space. The projection matrix is then learned on
the assumption that the neighborhood information (e.g., the similarity) is
known and fixed prior to learning. However, it is difficult to precisely
measure the intrinsic similarity of samples in high-dimensional space because
of the curse of dimensionality. Consequently, the neighbors selected according
to such similarity, and the projection matrix obtained according to such
similarity and neighbors, might not be optimal in the sense of classification and
generalization. To overcome the drawbacks, in this paper we propose to let the
similarity and neighbors be variables and model them in low-dimensional space.
Both the optimal similarity and projection matrix are obtained by minimizing a
unified objective function. Nonnegative and sum-to-one constraints on the
similarity are adopted. Instead of empirically setting the regularization
parameter, we treat it as a variable to be optimized. It is interesting that
the optimal regularization parameter is adaptive to the neighbors in
low-dimensional space and has intuitive meaning. Experimental results on the
YALE B, COIL-100, and MNIST datasets demonstrate the effectiveness of the
proposed method. | [
0,
0,
0,
1,
0,
0
] |
Title: Joint Multichannel Deconvolution and Blind Source Separation,
Abstract: Blind Source Separation (BSS) is a challenging matrix factorization problem
that plays a central role in multichannel imaging science. In a large number of
applications, such as astrophysics, current unmixing methods are limited since
real-world mixtures are generally affected by extra instrumental effects like
blurring. Therefore, BSS has to be solved jointly with a deconvolution problem,
which requires tackling a new inverse problem: deconvolution BSS (DBSS). In
this article, we introduce an innovative DBSS approach, called DecGMCA, based
on sparse signal modeling and an efficient alternating projected least-squares
algorithm. Numerical results demonstrate that the DecGMCA algorithm performs
very well on simulations. It further highlights the importance of jointly
solving BSS and deconvolution instead of considering these two problems
independently. Furthermore, the performance of the proposed DecGMCA algorithm
is demonstrated on simulated radio-interferometric data. | [
1,
0,
0,
1,
0,
0
] |
Title: Bimodule monomorphism categories and RSS equivalences via cotilting modules,
Abstract: The monomorphism category $\mathscr{S}(A, M, B)$ induced by a bimodule
$_AM_B$ is the subcategory of $\Lambda$-mod consisting of
$\left[\begin{smallmatrix} X\\ Y\end{smallmatrix}\right]_{\phi}$ such that
$\phi: M\otimes_B Y\rightarrow X$ is a monic $A$-map, where
$\Lambda=\left[\begin{smallmatrix} A&M\\0&B \end{smallmatrix}\right]$. In
general, it is not one of the monomorphism categories induced by quivers. It can
describe the Gorenstein-projective $\Lambda$-modules. This monomorphism category is
a resolving subcategory of $\Lambda$-mod if and only if $M_B$ is
projective. In this case, it has enough injective objects and Auslander-Reiten
sequences, and can be also described as the left perpendicular category of a
unique basic cotilting $\Lambda$-module. If $M$ satisfies the condition ${\rm
(IP)}$, then the stable category of $\mathscr{S}(A, M, B)$ admits a recollement
of additive categories, which is in fact a recollement of singularity
categories if $\mathscr{S}(A, M, B)$ is a {\rm Frobenius} category.
Ringel-Schmidmeier-Simson equivalence between $\mathscr{S}(A, M, B)$ and its
dual is introduced. If $M$ is an exchangeable bimodule, then an {\rm RSS}
equivalence is given by a $\Lambda$-$\Lambda$ bimodule which is a two-sided
cotilting $\Lambda$-module with a special property; and the Nakayama functor
$\mathcal N_\Lambda$ gives an {\rm RSS} equivalence if and only if both $A$ and $B$
are Frobenius algebras. | [
0,
0,
1,
0,
0,
0
] |
Title: The Vanishing viscosity limit for some symmetric flows,
Abstract: The focus of this paper is on the analysis of the boundary layer and the
associated vanishing viscosity limit for two classes of flows with symmetry,
namely, Plane-Parallel Channel Flows and Parallel Pipe Flows. We construct
explicit boundary layer correctors, which approximate the difference between
the Navier-Stokes and the Euler solutions. Using properties of these
correctors, we establish convergence of the Navier-Stokes solution to the Euler
solution as viscosity vanishes with optimal rates of convergence. In addition,
we investigate vorticity production on the boundary in the limit of vanishing
viscosity. Our work significantly extends prior work in the literature. | [
0,
0,
1,
0,
0,
0
] |
Title: Anisotropy of transport in bulk Rashba metals,
Abstract: The recent experimental discovery of three-dimensional (3D) materials hosting
a strong Rashba spin-orbit coupling calls for the theoretical investigation of
their transport properties. Here we study the zero temperature dc conductivity
of a 3D Rashba metal in the presence of static diluted impurities. We show
that, at variance with the two-dimensional case, in 3D systems spin-orbit
coupling affects dc charge transport in all density regimes. We find in
particular that the effect of spin-orbit interaction strongly depends on the
direction of the current, and we show that this yields strongly anisotropic
transport characteristics. In the dominant spin-orbit coupling regime where
only the lowest band is occupied, the SO-induced conductivity anisotropy is
governed entirely by the anomalous component of the renormalized current. We
propose that measurements of the conductivity anisotropy in bulk Rashba metals
may give a direct experimental assessment of the spin-orbit strength. | [
0,
1,
0,
0,
0,
0
] |
Title: A quick guide for student-driven community genome annotation,
Abstract: High quality gene models are necessary to expand the molecular and genetic
tools available for a target organism, but these are available for only a
handful of model organisms that have undergone extensive curation and
experimental validation over the course of many years. The majority of gene
models present in biological databases today have been identified in draft
genome assemblies using automated annotation pipelines that are frequently
based on orthologs from distantly related model organisms. Manual curation is
time consuming and often requires substantial expertise, but is instrumental in
improving gene model structure and identification. Manual annotation may seem
to be a daunting and cost-prohibitive task for small research communities but
involving undergraduates in community genome annotation consortiums can be
mutually beneficial for both education and improved genomic resources. We
outline a workflow for efficient manual annotation driven by a team of
primarily undergraduate annotators. This model can be scaled to large teams and
includes quality control processes through incremental evaluation. Moreover, it
gives students an opportunity to increase their understanding of genome biology
and to participate in scientific research in collaboration with peers and
senior researchers at multiple institutions. | [
0,
0,
0,
0,
1,
0
] |
Title: Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data,
Abstract: In the past few years, Convolutional Neural Networks (CNNs) have been
achieving state-of-the-art performance on a variety of problems. Many companies
employ resources and money to generate these models and provide them as an API,
therefore it is in their best interest to protect them, i.e., to avoid that
someone else copies them. Recent studies revealed that state-of-the-art CNNs
are vulnerable to adversarial examples attacks, and this weakness indicates
that CNNs do not need to operate in the problem domain (PD). Therefore, we
hypothesize that they also do not need to be trained with examples of the PD in
order to operate in it.
Given these facts, in this paper, we investigate if a target black-box CNN
can be copied by persuading it to confess its knowledge through random
non-labeled data. The copy is two-fold: i) the target network is queried with
random data and its predictions are used to create a fake dataset with the
knowledge of the network; and ii) a copycat network is trained with the fake
dataset and should be able to achieve similar performance as the target
network.
This hypothesis was evaluated locally in three problems (facial expression,
object, and crosswalk classification) and against a cloud-based API. In the
copy attacks, images from both non-problem domain and PD were used. All copycat
networks achieved at least 93.7% of the performance of the original models with
non-problem domain data, and at least 98.6% using additional data from the PD.
Additionally, the copycat CNN successfully copied at least 97.3% of the
performance of the Microsoft Azure Emotion API. Our results show that it is
possible to create a copycat CNN by simply querying a target network as
black-box with random non-labeled data. | [
0,
0,
0,
1,
0,
0
] |
Title: Medoids in almost linear time via multi-armed bandits,
Abstract: Computing the medoid of a large number of points in high-dimensional space is
an increasingly common operation in many data science problems. We present an
algorithm Med-dit which uses O(n log n) distance evaluations to compute the
medoid with high probability. Med-dit is based on a connection with the
multi-armed bandit problem. We evaluate the performance of Med-dit empirically
on the Netflix-prize and the single-cell RNA-Seq datasets, containing hundreds
of thousands of points living in tens of thousands of dimensions, and observe a
5-10x improvement in performance over the current state of the art. Med-dit is
available at this https URL | [
1,
0,
0,
1,
0,
0
] |
Title: Early stopping for statistical inverse problems via truncated SVD estimation,
Abstract: We consider truncated SVD (or spectral cut-off, projection) estimators for a
prototypical statistical inverse problem in dimension $D$. Since calculating
the singular value decomposition (SVD) only for the largest singular values is
much less costly than the full SVD, our aim is to select a data-driven
truncation level $\widehat m\in\{1,\ldots,D\}$ only based on the knowledge of
the first $\widehat m$ singular values and vectors. We analyse in detail
whether sequential {\it early stopping} rules of this type can preserve
statistical optimality. Information-constrained lower bounds and matching upper
bounds for a residual based stopping rule are provided, which give a clear
picture in which situation optimal sequential adaptation is feasible. Finally,
a hybrid two-step approach is proposed which allows for classical oracle
inequalities while considerably reducing numerical complexity. | [
0,
0,
1,
1,
0,
0
] |
Title: Algorithmic Theory of ODEs and Sampling from Well-conditioned Logconcave Densities,
Abstract: Sampling logconcave functions arising in statistics and machine learning has
been a subject of intensive study. Recent developments include analyses for
Langevin dynamics and Hamiltonian Monte Carlo (HMC). While both approaches have
dimension-independent bounds for the underlying $\mathit{continuous}$ processes
under sufficiently strong smoothness conditions, the resulting discrete
algorithms have complexity and number of function evaluations growing with the
dimension. Motivated by this problem, in this paper, we give a general
algorithm for solving multivariate ordinary differential equations whose
solution is close to the span of a known basis of functions (e.g., polynomials
or piecewise polynomials). The resulting algorithm has polylogarithmic depth
and essentially tight runtime - it is nearly linear in the size of the
representation of the solution.
We apply this to the sampling problem to obtain a nearly linear
implementation of HMC for a broad class of smooth, strongly logconcave
densities, with the number of iterations (parallel depth) and gradient
evaluations being $\mathit{polylogarithmic}$ in the dimension (rather than
polynomial as in previous work). This class includes the widely-used loss
function for logistic regression with incoherent weight matrices and has been
the subject of much study recently. We also give a faster algorithm with $
\mathit{polylogarithmic~depth}$ for the more general and standard class of
strongly convex functions with Lipschitz gradient. These results are based on
(1) an improved contraction bound for the exact HMC process and (2) logarithmic
bounds on the degree of polynomials that approximate solutions of the
differential equations arising in implementing HMC. | [
1,
0,
0,
0,
0,
0
] |
Title: The PSLQ Algorithm for Empirical Data,
Abstract: The celebrated integer relation finding algorithm PSLQ has been successfully
used in many applications. PSLQ was only analyzed theoretically for exact input
data; however, when the input data are irrational numbers, they must be
approximate ones due to the finite precision of the computer. When the
algorithm takes empirical data (inexact data with error bounded) instead of
exact real numbers as its input, how do we theoretically ensure the output of
the algorithm to be an exact integer relation?
In this paper, we investigate the PSLQ algorithm for empirical data as its
input. Firstly, we give a termination condition for this case. Secondly, we
analyze a perturbation on the hyperplane matrix constructed from the input data
and hence disclose a relationship between the accuracy of the input data and
the output quality (an upper bound on the absolute value of the inner product
of the exact data and the computed integer relation), which naturally leads to
an error control strategy for PSLQ. Further, we analyze the complexity bound of
the PSLQ algorithm for empirical data. Examples on transcendental numbers and
algebraic numbers show the meaningfulness of our error control strategy. | [
1,
0,
1,
0,
0,
0
] |
Title: Tensor Completion Algorithms in Big Data Analytics,
Abstract: Tensor completion is a problem of filling the missing or unobserved entries
of partially observed tensors. Due to the multidimensional character of tensors
in describing complex datasets, tensor completion algorithms and their
applications have received wide attention and achievement in areas like data
mining, computer vision, signal processing, and neuroscience. In this survey,
we provide a modern overview of recent advances in tensor completion algorithms
from the perspective of big data analytics characterized by diverse variety,
large volume, and high velocity. We characterize these advances from four
perspectives: general tensor completion algorithms, tensor completion with
auxiliary information (variety), scalable tensor completion algorithms
(volume), and dynamic tensor completion algorithms (velocity). Further, we
identify several tensor completion applications on real-world data-driven
problems and present some common experimental frameworks popularized in the
literature. Our goal is to summarize these popular methods and introduce them
to researchers and practitioners for promoting future research and
applications. We conclude with a discussion of key challenges and promising
research directions in this community for future exploration. | [
1,
0,
0,
1,
0,
0
] |
Title: On Gallai's and Hajós' Conjectures for graphs with treewidth at most 3,
Abstract: A path (resp. cycle) decomposition of a graph $G$ is a set of edge-disjoint
paths (resp. cycles) of $G$ that covers the edge set of $G$. Gallai (1966)
conjectured that every graph on $n$ vertices admits a path decomposition of
size at most $\lfloor (n+1)/2\rfloor$, and Hajós (1968) conjectured that
every Eulerian graph on $n$ vertices admits a cycle decomposition of size at
most $\lfloor (n-1)/2\rfloor$. Gallai's Conjecture was verified for many
classes of graphs. In particular, Lovász (1968) verified this conjecture for
graphs with at most one vertex of even degree, and Pyber (1996) verified it for
graphs in which every cycle contains a vertex of odd degree. Hajós'
Conjecture, on the other hand, was verified only for graphs with maximum degree
$4$ and for planar graphs. In this paper, we verify Gallai's and Hajós'
Conjectures for graphs with treewidth at most $3$. Moreover, we show that the
only graphs with treewidth at most $3$ that do not admit a path decomposition
of size at most $\lfloor n/2\rfloor$ are isomorphic to $K_3$ or $K_5-e$.
Finally, we use the technique developed in this paper to present new proofs for
Gallai's and Hajós' Conjectures for graphs with maximum degree at most $4$,
and for planar graphs with girth at least $6$. | [
1,
0,
0,
0,
0,
0
] |
Title: Band and correlated insulators of cold fermions in a mesoscopic lattice,
Abstract: We investigate the transport properties of neutral, fermionic atoms passing
through a one-dimensional quantum wire containing a mesoscopic lattice. The
lattice is realized by projecting individually controlled, thin optical
barriers on top of a ballistic conductor. Building an increasingly longer
lattice, one site after another, we observe and characterize the emergence of a
band insulating phase, demonstrating control over quantum-coherent transport.
We explore the influence of atom-atom interactions and show that the insulating
state persists as contact interactions are tuned from moderately to strongly
attractive. Using bosonization and classical Monte-Carlo simulations we analyze
such a model of interacting fermions and find good qualitative agreement with
the data. The robustness of the insulating state supports the existence of a
Luther-Emery liquid in the one-dimensional wire. Our work realizes a tunable,
site-controlled lattice Fermi gas strongly coupled to reservoirs, which is an
ideal test bed for non-equilibrium many-body physics. | [
0,
1,
0,
0,
0,
0
] |
Title: Stochasticity from function - why the Bayesian brain may need no noise,
Abstract: An increasing body of evidence suggests that the trial-to-trial variability
of spiking activity in the brain is not mere noise, but rather the reflection
of a sampling-based encoding scheme for probabilistic computing. Since the
precise statistical properties of neural activity are important in this
context, many models assume an ad-hoc source of well-behaved, explicit noise,
either on the input or on the output side of single neuron dynamics, most often
assuming an independent Poisson process in either case. However, these
assumptions are somewhat problematic: neighboring neurons tend to share
receptive fields, rendering both their input and their output correlated; at
the same time, neurons are known to behave largely deterministically, as a
function of their membrane potential and conductance. We suggest that spiking
neural networks may, in fact, have no need for noise to perform sampling-based
Bayesian inference. We study analytically the effect of auto- and
cross-correlations in functionally Bayesian spiking networks and demonstrate
how their effect translates to synaptic interaction strengths, rendering them
controllable through synaptic plasticity. This allows even small ensembles of
interconnected deterministic spiking networks to simultaneously and
co-dependently shape their output activity through learning, enabling them to
perform complex Bayesian computation without any need for noise, which we
demonstrate in silico, both in classical simulation and in neuromorphic
emulation. These results close a gap between the abstract models and the
biology of functionally Bayesian spiking networks, effectively reducing the
architectural constraints imposed on physical neural substrates required to
perform probabilistic computing, be they biological or artificial. | [
0,
0,
0,
0,
1,
0
] |
Title: Very Asymmetric Collider for Dark Matter Search below 1 GeV,
Abstract: Current searches for a dark photon in the mass range below 1 GeV require an
electron-positron collider with a luminosity at the level of at least $10^{34}$
cm$^{-2}$s$^{-1}$. The challenge is that, at such low energies, the collider
luminosity rapidly drops off due to increase in the beam sizes, strong mutual
focusing of the colliding beams, and enhancement of collective effects. Using
recent advances in accelerator technology such as the nano-beam scheme of
SuperKEK-B, high-current Energy Recovery Linacs (ERL), and magnetized beams, we
propose a new configuration of an electron-positron collider based on a
positron storage ring and an electron ERL. It allows one to achieve a
luminosity of $>10^{34}$ cm$^{-2}$s$^{-1}$ at a center-of-momentum energy of
$<1$ GeV. We present general considerations and a specific example of such a
facility using the parameters of the SuperKEK-B positron storage ring and
Cornell ERL project. | [
0,
1,
0,
0,
0,
0
] |
Title: Some divisibility properties of binomial coefficients,
Abstract: In this paper, we give some properties of binomial coefficients. | [
0,
0,
1,
0,
0,
0
] |
Title: An Unsupervised Method for Estimating the Global Horizontal Irradiance from Photovoltaic Power Measurements,
Abstract: In this paper, we present a method to determine the global horizontal
irradiance (GHI) from the power measurements of one or more PV systems, located
in the same neighborhood. The method is completely unsupervised and is based on
a physical model of a PV plant. The precise assessment of solar irradiance is
pivotal for the forecast of the electric power generated by photovoltaic (PV)
plants. However, on-ground measurements are expensive and are generally not
performed for small and medium-sized PV plants. Satellite-based services
represent a valid alternative to on-site measurements, but their space-time
resolution is limited. Results from two case studies located in Switzerland are
presented. The performance of the proposed method at assessing GHI is compared
with that of free and commercial satellite services. Our results show that the
presented method is generally better than satellite-based services, especially
at high temporal resolutions. | [
0,
0,
0,
1,
0,
0
] |
Title: Strong anisotropy effect in iron-based superconductor CaFe$_{0.882}$Co$_{0.118}$AsF,
Abstract: The anisotropy of the Fe-based superconductors is much smaller than that of
the cuprates and than predicted by theoretical calculations. A credible understanding of
this experimental fact is still lacking. Here we experimentally study
the magnetic-field-angle dependence of electronic resistivity in the
superconducting phase of iron-based superconductor
CaFe$_{0.882}$Co$_{0.118}$AsF, and find the strongest anisotropy effect of the
upper critical field among the iron-based superconductors within the framework of Ginzburg-Landau theory. Evidence from the energy band structure and charge density distribution obtained from electronic structure calculations demonstrates that the observed strong anisotropic effect mainly comes from the strong ionic bonding between the Ca$^{2+}$ and F$^-$ ions, which weakens the
interlayer coupling between the layers of FeAs and CaF. This finding provides a
significant insight into the nature of the experimentally observed strong anisotropic effect of the electronic resistivity, and also paves the way to the design of exotic two-dimensional artificial unconventional superconductors in the future. | [
0,
1,
0,
0,
0,
0
] |
Title: Example of C-rigid polytopes which are not B-rigid,
Abstract: A simple polytope $P$ is said to be \emph{B-rigid} if its combinatorial
structure is characterized by its Tor-algebra, and is said to be \emph{C-rigid}
if its combinatorial structure is characterized by the cohomology ring of a
quasitoric manifold over $P$. It is known that a B-rigid simple polytope is
C-rigid. In this paper, we further show that B-rigidity is not equivalent to C-rigidity. | [
0,
0,
1,
0,
0,
0
] |
Title: Dynamic Switching Networks: A Dynamic, Non-local, and Time-independent Approach to Emergence,
Abstract: Emergence is a powerful concept for explaining very complex behaviour through simple underlying rules. Existing approaches to producing emergent collective behaviour have many limitations, making them unable to account for
the complexity we see in the real world. In this paper we propose a new
dynamic, non-local, and time-independent approach that uses a network-like structure to implement the laws or rules, where the mathematical equations
representing the rules are converted to a series of switching decisions carried
out by the network on the particles moving in the network. The proposed
approach is used to generate patterns with different types of symmetry. | [
1,
0,
0,
0,
0,
0
] |
Title: The Trouvé group for spaces of test functions,
Abstract: The Trouvé group $\mathcal G_{\mathcal A}$ from image analysis consists of
the flows at a fixed time of all time-dependent vector fields of a given
regularity $\mathcal A(\mathbb R^d,\mathbb R^d)$. For a multitude of regularity
classes $\mathcal A$, we prove that the Trouvé group $\mathcal G_{\mathcal
A}$ coincides with the connected component of the identity of the group of
orientation-preserving diffeomorphisms of $\mathbb R^d$ which differ from the
identity by a mapping of class $\mathcal A$. We thus conclude that $\mathcal
G_{\mathcal A}$ has a natural regular Lie group structure. In many cases we
show that the mapping which takes a time-dependent vector field to its flow is
continuous. As a consequence we obtain that the scale of Bergman spaces on the
polystrip with variable width is stable under solving ordinary differential
equations. | [
0,
0,
1,
0,
0,
0
] |
Title: Robust Motion Planning employing Signal Temporal Logic,
Abstract: Motion planning classically concerns the problem of reaching a goal
configuration while avoiding obstacles. However, the need for more
sophisticated motion planning methodologies, taking temporal aspects into
account, has emerged. To address this issue, temporal logics have recently been
used to formulate such advanced specifications. This paper will consider Signal
Temporal Logic in combination with Model Predictive Control. A robustness
metric, called Discrete Average Space Robustness, is introduced and used to
maximize the satisfaction of specifications, which results in a natural robustness against noise. The associated optimization problem is convex and is formulated as a Linear Program. | [
1,
0,
0,
0,
0,
0
] |
Title: COREclust: a new package for a robust and scalable analysis of complex data,
Abstract: In this paper, we present a new R package COREclust dedicated to the
detection of representative variables in high-dimensional spaces with a potentially limited number of observations. Variable set detection is based on an original graph clustering strategy, denoted the CORE-clustering algorithm, that detects CORE-clusters, i.e. variable sets having a user-defined size range and in which each variable is very similar to at least one other variable. Representative variables are then robustly estimated as the CORE-cluster
centers. This strategy is entirely coded in C++ and wrapped by R using the Rcpp
package. A particular effort has been dedicated to keeping its algorithmic cost
reasonable so that it can be used on large datasets. After motivating our work,
we will explain the CORE-clustering algorithm as well as a greedy extension of
this algorithm. We will then present how to use it and results obtained on
synthetic and real data. | [
0,
0,
0,
1,
0,
0
] |
Title: Deep Neural Networks,
Abstract: Deep Neural Networks (DNNs) are universal function approximators providing
state-of-the-art solutions on a wide range of applications. Common perceptual
tasks such as speech recognition, image classification, and object tracking are
now commonly tackled via DNNs. Some fundamental problems remain: (1) the lack
of a mathematical framework providing an explicit and interpretable
input-output formula for any topology, (2) quantification of DNN stability
regarding adversarial examples (i.e. modified inputs fooling DNN predictions
whilst undetectable to humans), (3) absence of generalization guarantees and
controllable behaviors for ambiguous patterns, and (4) how to leverage unlabeled data to apply DNNs to domains where expert labeling is scarce, as in the medical field.
Answering those points would provide theoretical perspectives for further
developments based on a common ground. Furthermore, DNNs are now deployed in
a tremendous number of societal applications, heightening the need to fill this theoretical gap
to ensure control, reliability, and interpretability. | [
1,
0,
0,
1,
0,
0
] |