title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0-1) | phy (int64, 0-1) | math (int64, 0-1) | stat (int64, 0-1) | quantitative biology (int64, 0-1) | quantitative finance (int64, 0-1) |
---|---|---|---|---|---|---|---|
A Review on Quantile Regression for Stochastic Computer Experiments | We report on an empirical study of the main strategies for conditional
quantile estimation in the context of stochastic computer experiments. To
ensure adequate diversity, six metamodels are presented, divided into three
categories based on order statistics, functional approaches, and those of
Bayesian inspiration. The metamodels are tested on several problems
characterized by the size of the training set, the input dimension, the
quantile order and the value of the probability density function in the
neighborhood of the quantile. The metamodels studied reveal good contrasts in
our set of 480 experiments, enabling several patterns to be extracted. Based on
our results, guidelines are proposed to allow users to select the best method
for a given problem.
| 1 | 0 | 0 | 1 | 0 | 0 |
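Illustrative sketch, not one of the six metamodels benchmarked in the abstract above: conditional quantiles can be estimated directly by minimizing the pinball (quantile) loss, for example with scikit-learn's gradient boosting and `loss="quantile"`. The toy stochastic simulator, the quantile order 0.9, and the sample size below are assumptions made only for the illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy stochastic simulator: smooth signal plus input-dependent noise (assumed).
X = rng.uniform(0.0, 1.0, size=(500, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=0.2 + 0.3 * X[:, 0])

# Fit the conditional 0.9-quantile by minimizing the pinball (quantile) loss.
q90 = GradientBoostingRegressor(loss="quantile", alpha=0.9, n_estimators=200)
q90.fit(X, y)

# Predicted 90%-quantiles of the simulator output at two new inputs.
print(q90.predict(np.array([[0.25], [0.75]])))
```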
Scalability of Voltage-Controlled Filamentary and Nanometallic Resistance Memories | Much effort has been devoted to device and materials engineering to realize
nanoscale resistance random access memory (RRAM) for practical applications,
but a rational physical basis for designing scalable devices spanning many
length scales is still lacking. In particular, the critical
switching criterion is not clear for RRAM devices in which resistance changes
are limited to localized nanoscale filaments that experience concentrated heat,
electric current and field. Here, we demonstrate voltage-controlled resistance
switching for macro and nano devices in both filamentary RRAM and nanometallic
RRAM; the latter switches uniformly and does not require forming. As a result,
using a constant current density as the compliance, we have achieved
area-scalability for the low resistance state of the filamentary RRAM, and for
both the low and high resistance states of the nanometallic RRAM. This finding
will help design area-scalable RRAM at the nanoscale.
| 0 | 1 | 0 | 0 | 0 | 0 |
Functional geometry of protein-protein interaction networks | Motivation: Protein-protein interactions (PPIs) are usually modelled as
networks. These networks have extensively been studied using graphlets, small
induced subgraphs capturing the local wiring patterns around nodes in networks.
They revealed that proteins involved in similar functions tend to be similarly
wired. However, such simple models can only represent pairwise relationships
and cannot fully capture the higher-order organization of protein interactions,
including protein complexes. Results: To model the multi-scale organization of
these complex biological systems, we utilize simplicial complexes from
computational geometry. The question is how to mine these new representations
of PPI networks to reveal additional biological information. To address this,
we define simplets, a generalization of graphlets to simplicial complexes. By
using simplets, we define a sensitive measure of similarity between simplicial
complex network representations that allows for clustering them according to
their data types better than clustering them by using other state-of-the-art
measures, e.g., spectral distance, or facet distribution distance. We model
human and baker's yeast PPI networks as simplicial complexes that capture PPIs
and protein complexes as simplices. On these models, we show that our newly
introduced simplet-based methods cluster proteins by function better than the
clustering methods that use the standard PPI networks, uncovering the new
underlying functional organization of the cell. We demonstrate the existence of
the functional geometry in the PPI data and the superiority of our
simplet-based methods to effectively mine for new biological information hidden
in the complexity of the higher order organization of PPI networks.
| 0 | 0 | 0 | 0 | 1 | 0 |
A Two-Layer Component-Based Allocation for Embedded Systems with GPUs | Component-based development is a software engineering paradigm that can
facilitate the construction of embedded systems and tackle their complexities.
Modern embedded systems have increasingly demanding requirements. One way
to cope with such versatile and growing set of requirements is to employ
heterogeneous processing power, i.e., CPU-GPU architectures. The new CPU-GPU
embedded boards deliver an increased performance but also introduce additional
complexity and challenges. In this work, we address the component-to-hardware
allocation for CPU-GPU embedded systems. The allocation for such systems is
much more complex due to the increased amount of GPU-related information. For
example, while in traditional embedded systems the allocation mechanism may
consider only the CPU memory usage of components to find an appropriate
allocation scheme, in heterogeneous systems, the GPU memory usage also needs to
be taken into account in the allocation process. This paper aims at decreasing
the component-to-hardware allocation complexity by introducing a 2-layer
component-based architecture for heterogeneous embedded systems. The detailed
CPU-GPU information of the system is abstracted at a high-layer by compacting
connected components into single units that behave as regular components. The
allocator, based on the compacted information received from the high-level
layer, computes, with a decreased complexity, feasible allocation schemes. In
the last part of the paper, the 2-layer allocation method is evaluated using an
existing embedded system demonstrator; namely, an underwater robot.
| 1 | 0 | 0 | 0 | 0 | 0 |
Sales Forecast in E-commerce using Convolutional Neural Network | Sales forecast is an essential task in E-commerce and has a crucial impact on
making informed business decisions. It can help us manage the workforce, cash
flow, and resources, for example by optimizing the supply chain of
manufacturers. Sales forecasting is challenging because sales are affected by
many factors, including promotion activities, price changes, and user
preferences. Traditional sales forecast techniques mainly rely on historical sales data
to predict future sales and their accuracies are limited. Some more recent
learning-based methods capture more information in the model to improve the
forecast accuracy. However, these methods require case-by-case manual feature
engineering for specific commercial scenarios, which is usually a difficult,
time-consuming task and requires expert knowledge. To overcome the limitations
of existing methods, we propose a novel approach in this paper to learn
effective features automatically from the structured data using the
Convolutional Neural Network (CNN). When fed with raw log data, our approach
can automatically extract effective features from it and then forecast sales
using those extracted features. We test our method on a large real-world
dataset from CaiNiao.com and the experimental results validate the
effectiveness of our method.
| 1 | 0 | 0 | 0 | 0 | 0 |
On normalization of inconsistency indicators in pairwise comparisons | In this study, we provide mathematical and practice-driven justification for
using $[0,1]$ normalization of inconsistency indicators in pairwise
comparisons. The need for normalization, as well as problems with the lack of
normalization, are presented. A new type of paradox of infinity is described.
| 1 | 0 | 0 | 0 | 0 | 0 |
Gamma factors of intertwining periods and distinction for inner forms of $\mathrm{GL}(n)$ | Let $F$ be a $p$-adic field, $E$ be a quadratic extension of $F$, and $D$ be
an $F$-division algebra of odd index. Setting $H=\mathrm{GL}(m,D)$ and
$G=\mathrm{GL}(m,D\otimes_F E)$, we carry out a fine study of local
intertwining open periods attached to $H$-distinguished induced representations
of inner forms of $G$. These objects have been studied globally in \cite{JLR}
and \cite{LR}, and locally in \cite{BD08}. Here we give sufficient conditions
for the local intertwining periods to have singularities. By a local/global
method, we also compute in terms of Asai gamma factors the proportionality
constants involved in their functional equations with respect to certain
intertwining operators. As a consequence, we classify distinguished unitary and
ladder representations of $G$, extending respectively the results of \cite{M14}
and \cite{G15} for $D=F$, which both relied at some crucial step on the theory
of Bernstein-Zelevinsky derivatives. We make use of one of the main results of
\cite{BP17} in our setting, which in the case of the group $G$, asserts that
the Jacquet-Langlands correspondence preserves distinction. Such a result is
for discrete series representations, but our method in fact allows us to use it
only for cuspidal representations of $G$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Spin-flip scattering selection in a controlled molecular junction | A simple double-decker molecule with magnetic anisotropy, nickelocene, is
attached to the metallic tip of a low-temperature scanning tunneling
microscope. In the presence of a Cu(100) surface, the conductance around the
Fermi energy is governed by spin-flip scattering, the nature of which is
determined by the tunneling barrier thickness. The molecular tip exhibits
inelastic spin-flip scattering in the tunneling regime, while in the contact
regime a Kondo ground state is stabilized causing an order of magnitude change
in the zero-bias conductance. First-principles calculations show that
nickelocene reversibly switches from spin 1 to spin 1/2 between the two transport
regimes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Mean-Field Sparse Jurdjevic--Quinn Control | We consider nonlinear transport equations with non-local velocity, describing
the time-evolution of a measure, which in practice may represent the density of
a crowd. Such equations often appear by taking the mean-field limit of
finite-dimensional systems modelling collective dynamics. We first give a sense
to dissipativity of these mean-field equations in terms of Lie derivatives of a
Lyapunov function depending on the measure. Then, we address the problem of
controlling such equations by means of a time-varying bounded control action
localized on a time-varying control subset with bounded Lebesgue measure
(sparsity space constraint). Finite-dimensional versions are given by
control-affine systems, which can be stabilized by the well-known
Jurdjevic--Quinn procedure. In this paper, assuming that the uncontrolled
dynamics are dissipative, we develop an approach in the spirit of the classical
Jurdjevic--Quinn theorem, showing how to steer the system to an invariant
sublevel of the Lyapunov function. The control function and the control domain
are designed in terms of the Lie derivatives of the Lyapunov function, and
enjoy sparsity properties in the sense that the control support is small.
Finally, we show that our result applies to a large class of kinetic equations
modelling multi-agent dynamics.
| 0 | 0 | 1 | 0 | 0 | 0 |
Sampling from Social Networks with Attributes | Sampling from large networks represents a fundamental challenge for social
network research. In this paper, we explore the sensitivity of different
sampling techniques (node sampling, edge sampling, random walk sampling, and
snowball sampling) on social networks with attributes. We consider the special
case of networks (i) where we have one attribute with two values (e.g., male
and female in the case of gender), (ii) where the size of the two groups is
unequal (e.g., a male majority and a female minority), and (iii) where nodes
with the same or different attribute value attract or repel each other (i.e.,
homophilic or heterophilic behavior). We evaluate the different sampling
techniques with respect to conserving the position of nodes and the visibility
of groups in such networks. Experiments are conducted both on synthetic and
empirical social networks. Our results provide evidence that different network
sampling techniques are highly sensitive with regard to capturing the expected
centrality of nodes, and that their accuracy depends on relative group size
differences and on the level of homophily that can be observed in the network.
We conclude that uninformed sampling from social networks with attributes thus
can significantly impair the ability of researchers to draw valid conclusions
about the centrality of nodes and the visibility or invisibility of groups in
social networks.
| 1 | 1 | 0 | 0 | 0 | 0 |
Exploiting Investors Social Network for Stock Prediction in China's Market | Recent works have shown that social media platforms are able to influence the
trends of stock price movements. However, existing works have mainly focused
on the U.S. stock market and lacked attention to certain emerging countries
such as China, where retail investors dominate the market. In this regard, as
retail investors are prone to be influenced by news or other social media,
psychological and behavioral features extracted from social media platforms are
thought to predict stock price movements well in China's market. Recent
advances in the investor social network in China enable the extraction of such
features from web-scale data. In this paper, on the basis of tweets from
Xueqiu, a popular Chinese Twitter-like social platform specialized for
investors, we analyze features with regard to collective sentiment and
perception on stock relatedness and predict stock price movements by employing
nonlinear models. The features of interest prove to be effective in our
experiments.
| 0 | 0 | 0 | 0 | 0 | 1 |
Singing Style Transfer Using Cycle-Consistent Boundary Equilibrium Generative Adversarial Networks | Can we make a famous rap singer like Eminem sing any song we like?
Singing style transfer attempts to make this possible by replacing the vocal
of a song from the source singer with that of the target singer. This paper presents a
method that learns from unpaired data for singing style transfer using
generative adversarial networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Deep Learning Based 6 Degree-of-Freedom Localization Method for Endoscopic Capsule Robots | We present a robust deep learning based 6 degrees-of-freedom (DoF)
localization system for endoscopic capsule robots. Our system mainly focuses on
localization of endoscopic capsule robots inside the GI tract using only visual
information captured by a mono camera integrated into the robot. The proposed
system is a 23-layer deep convolutional neural network (CNN) that is capable of
estimating the pose of the robot in real time using a standard CPU. The dataset
for the evaluation of the system was recorded inside a surgical human stomach
model with realistic surface texture, softness, and surface liquid properties
so that the pre-trained CNN architecture can be transferred confidently into a
real endoscopic scenario. An average error of 7.1% and 3.4% for translation and
rotation has been obtained, respectively. The results obtained from the
experiments demonstrate that a CNN pre-trained with raw 2D endoscopic images
performs accurately inside the GI tract and is robust to various challenges
posed by reflection distortions, lens imperfections, vignetting, noise, motion
blur, low resolution, and lack of unique landmarks to track.
| 1 | 0 | 0 | 0 | 0 | 0 |
On a cross-diffusion system arising in image denoising | We study a generalization of a cross-diffusion problem deduced from a
nonlinear complex-variable diffusion model for signal and image denoising.
We prove the existence of weak solutions of the time-independent problem with
fidelity terms under mild conditions on the problem data. Then, we show that
this translates into the well-posedness of a quasi-steady state approximation of
the evolution problem, and also prove the existence of weak solutions of the
latter under more restrictive hypotheses.
We finally perform some numerical simulations for image denoising, comparing
the performance of the cross-diffusion model and its corresponding scalar
Perona-Malik equation.
| 0 | 0 | 1 | 0 | 0 | 0 |
Locally Repairable Codes with Multiple $(r_{i}, δ_{i})$-Localities | In distributed storage systems, locally repairable codes (LRCs) are
introduced to realize low disk I/O and repair cost. In order to tolerate
multiple node failures, the LRCs with \emph{$(r, \delta)$-locality} are further
proposed. Since hot data is not uncommon in a distributed storage system, both
Zeh \emph{et al.} and Kadhe \emph{et al.} have recently focused on LRCs with \emph{multiple
localities or unequal localities} (ML-LRCs), meaning that the
localities among the code symbols can be different. ML-LRCs are attractive and
useful in reducing repair cost for hot data. In this paper, we generalize the
ML-LRCs to the $(r,\delta)$-locality case of multiple node failures, and define
an LRC with multiple $(r_{i}, \delta_{i})_{i\in [s]}$ localities ($s\ge 2$),
where $r_{1}\leq r_{2}\leq\dots\leq r_{s}$ and
$\delta_{1}\geq\delta_{2}\geq\dots\geq\delta_{s}\geq2$. Such codes ensure that
some hot data could be repaired more quickly and have better failure-tolerance
in certain cases because of relatively smaller $r_{i}$ and larger $\delta_{i}$.
Then, we derive a Singleton-like upper bound on the minimum distance for the
proposed LRCs by employing the regenerating-set technique. Finally, we obtain a
class of explicit and structured constructions of optimal ML-LRCs, and further
extend them to the cases of multiple $(r_{i}, \delta)_{i\in [s]}$ localities.
| 1 | 0 | 0 | 0 | 0 | 0 |
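For context (background only, and an assumption about which classical result is being generalized, not the bound derived in the abstract above): the well-known Singleton-like bound for an $[n,k,d]$ linear code in which every symbol has a single $(r,\delta)$-locality reads

$$ d \le n - k + 1 - \left( \left\lceil \frac{k}{r} \right\rceil - 1 \right)(\delta - 1). $$

The bound stated in the abstract generalizes this setting to symbols with different $(r_i,\delta_i)$ pairs; its exact form is given in the paper.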
Reconciling Enumerative and Symbolic Search in Syntax-Guided Synthesis | Syntax-guided synthesis aims to find a program satisfying a semantic
specification as well as a user-provided structural hypothesis. For syntax-guided
synthesis there are two main search strategies: concrete search, which
systematically or stochastically enumerates all possible solutions, and
symbolic search, which interacts with a constraint solver to solve the
synthesis problem. In this paper, we propose a concolic synthesis framework
which combines the best of the two worlds. Based on a decision tree
representation, our framework works by enumerating tree heights from the
smallest possible one to larger ones. For each fixed height, the framework
symbolically searches a solution through the counterexample-guided inductive
synthesis approach. To compensate for the exponential blow-up problem of the
concolic synthesis framework, we identify two fragments of synthesis problems
and develop purely symbolic and more efficient procedures. The two fragments
are decidable as these procedures are terminating and complete. We implemented
our synthesis procedures and compared them with state-of-the-art synthesizers on a
range of benchmarks. Experiments show that our algorithms are promising.
| 1 | 0 | 0 | 0 | 0 | 0 |
Periodic solutions of semilinear Duffing equations with impulsive effects | In this paper we are concerned with the existence of periodic solutions for
semilinear Duffing equations with impulsive effects. Firstly, for the autonomous
one, based on the Poincaré-Birkhoff twist theorem, we prove the existence of
infinitely many periodic solutions. Secondly, as for the nonautonomous case,
the impulses pose great challenges to the study, and there are only
finitely many periodic solutions, which is quite different from the
corresponding equation without impulses. Here, taking the autonomous one as an
auxiliary equation, we find the relation between these two equations and then
obtain the result, also by the Poincaré-Birkhoff twist theorem.
| 0 | 1 | 1 | 0 | 0 | 0 |
Spectral and scattering theory for perturbed block Toeplitz operators | We analyse spectral properties of a class of compact perturbations of block
Toeplitz operators associated with analytic symbols. In particular, a limiting
absorption principle and the absence of singular continuous spectrum are shown.
The existence and the completeness of wave operators are also obtained. Our
study is based on the construction of a conjugate operator in Mourre sense for
the corresponding Laurent operators.
| 0 | 0 | 1 | 0 | 0 | 0 |
Robust adaptive droop control for DC microgrids | There are tradeoffs between current sharing among distributed resources and
DC bus voltage stability when conventional droop control is used in DC
microgrids. As current sharing approaches the setpoint, bus voltage deviation
increases. Previous studies have suggested using secondary control utilizing
linear controllers to overcome drawbacks of droop control. However, linear
control design depends on an accurate model of the system. The derivation of
such a model is challenging because the noise and disturbances caused by the
coupling between sources, loads, and switches in microgrids are
under-represented. This under-representation makes linear modeling and control
insufficient. Hence, in this paper, we propose a robust adaptive control to
adjust droop characteristics to satisfy both current sharing and bus voltage
stability. First, the time-varying models of DC microgrids are derived. Second,
the improvements for the adaptive control method are presented. Third, the
application of the enhanced adaptive method to DC microgrids is presented to
satisfy the system objective. Fourth, simulation and experimental results on a
microgrid show that the adaptive method precisely shares current between two
distributed resources and maintains the nominal bus voltage. Last, the
comparative study validates the effectiveness of the proposed method over the
conventional method.
| 0 | 0 | 1 | 0 | 0 | 0 |
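Illustrative sketch of the trade-off described above, using the conventional textbook droop law V = V_nom - R_d * I rather than the robust adaptive controller proposed in the paper; the bus voltage, line resistances, and load current below are assumed values.

```python
import numpy as np

V_NOM = 48.0                        # nominal bus voltage [V] (assumed)
R_LINE = np.array([0.10, 0.30])     # unequal line resistances to the bus [ohm] (assumed)
I_LOAD = 20.0                       # total load current [A] (assumed)

def droop_steady_state(r_d):
    """Steady state of two droop-controlled sources feeding one common load bus."""
    # Each source k: V_bus = V_NOM - (r_d + R_LINE[k]) * I_k, and the currents sum to I_LOAD.
    g = 1.0 / (r_d + R_LINE)              # conductance seen from the bus for each source
    v_bus = V_NOM - I_LOAD / g.sum()      # from sum_k g_k * (V_NOM - v_bus) = I_LOAD
    currents = g * (V_NOM - v_bus)
    return v_bus, currents

for r_d in (0.05, 0.5, 2.0):              # increasing droop (virtual) resistance
    v_bus, currents = droop_steady_state(r_d)
    print(f"R_d={r_d:4.2f}  V_bus={v_bus:6.2f} V  I={currents.round(2)} A")
```

Running the loop shows the two currents equalizing as R_d grows while the bus voltage sags further, which is exactly the tension that secondary or adaptive droop control aims to resolve.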
Off-diagonal asymptotic properties of Bergman kernels associated to analytic Kähler potentials | We prove a new off-diagonal asymptotic of the Bergman kernels associated to
tensor powers of a positive line bundle on a compact Kähler manifold. We show
that if the Kähler potential is real analytic, then the Bergman kernel
admits a complete asymptotic expansion in a neighborhood of the diagonal of
shrinking size $k^{-\frac14}$. These improve the earlier results in the subject
for smooth potentials, where an expansion exists in a $k^{-\frac12}$
neighborhood of the diagonal. We obtain our results by finding upper bounds of
the form $C^m m!^{2}$ for the Bergman coefficients $b_m(x, \bar y)$, which is
an interesting problem on its own. We find such upper bounds using the method
of Berman-Berndtsson-Sjöstrand. We also show that sharpening these upper
bounds would improve the rate of shrinking neighborhoods of the diagonal $x=y$
in our results. In the special case of metrics with local constant holomorphic
sectional curvatures, we obtain off-diagonal asymptotics in a fixed (as $k \to
\infty$) neighborhood of the diagonal, which recovers a result of Berman [Ber]
(see Remark 3.5 of [Ber] for higher dimensions). In this case, we also find an
explicit formula for the Bergman kernel mod $O(e^{-k \delta} )$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Network Backboning with Noisy Data | Networks are powerful instruments to study complex phenomena, but they become
hard to analyze in data that contain noise. Network backbones provide a tool to
extract the latent structure from noisy networks by pruning non-salient edges.
We describe a new approach to extract such backbones. We assume that edge
weights are drawn from a binomial distribution, and estimate the error-variance
in edge weights using a Bayesian framework. Our approach uses a more realistic
null model for the edge weight creation process than prior work. In particular,
it simultaneously considers the propensity of nodes to send and receive
connections, whereas previous approaches only considered nodes as emitters of
edges. We test our model with real world networks of different types (flows,
stocks, co-occurrences, directed, undirected) and show that our Noise-Corrected
approach returns backbones that outperform other approaches on a number of
criteria. Our approach is scalable, able to deal with networks with millions of
edges.
| 1 | 1 | 0 | 0 | 0 | 0 |
The 2017 DAVIS Challenge on Video Object Segmentation | We present the 2017 DAVIS Challenge on Video Object Segmentation, a public
dataset, benchmark, and competition specifically designed for the task of video
object segmentation. Following the footsteps of other successful initiatives,
such as ILSVRC and PASCAL VOC, which established the avenue of research in the
fields of scene classification and semantic segmentation, the DAVIS Challenge
comprises a dataset, an evaluation methodology, and a public competition with a
dedicated workshop co-located with CVPR 2017. The DAVIS Challenge follows up on
the recent publication of DAVIS (Densely-Annotated VIdeo Segmentation), which
has fostered the development of several novel state-of-the-art video object
segmentation techniques. In this paper we describe the scope of the benchmark,
highlight the main characteristics of the dataset, define the evaluation
metrics of the competition, and present a detailed analysis of the results of
the participants to the challenge.
| 1 | 0 | 0 | 0 | 0 | 0 |
How to Ask for Technical Help? Evidence-based Guidelines for Writing Questions on Stack Overflow | Context: The success of Stack Overflow and other community-based
question-and-answer (Q&A) sites depends mainly on the willingness of their members to
answer others' questions. In fact, when formulating requests on Q&A sites, we
are not simply seeking information. Instead, we are also asking for other
people's help and feedback. Understanding the dynamics of the participation in
Q&A communities is essential to improve the value of crowdsourced knowledge.
Objective: In this paper, we investigate how information seekers can increase
the chance of eliciting a successful answer to their questions on Stack
Overflow by focusing on the following actionable factors: affect, presentation
quality, and time.
Method: We develop a conceptual framework of factors potentially influencing
the success of questions in Stack Overflow. We quantitatively analyze a set of
over 87K questions from the official Stack Overflow dump to assess the impact
of actionable factors on the success of technical requests. The information
seeker reputation is included as a control factor. Furthermore, to understand
the role played by affective states in the success of questions, we
qualitatively analyze questions containing positive and negative emotions.
Finally, a survey is conducted to understand how Stack Overflow users perceive
the guideline suggestions for writing questions.
Results: We found that regardless of user reputation, successful questions
are short, contain code snippets, and do not abuse uppercase characters.
As regards affect, successful questions adopt a neutral emotional style.
Conclusion: We provide evidence-based guidelines for writing effective
questions on Stack Overflow that software engineers can follow to increase the
chance of getting technical help. As for the role of affect, we empirically
confirmed community guidelines that suggest avoiding rudeness in question
writing.
| 1 | 0 | 0 | 0 | 0 | 0 |
Advanced Quantizer Designs for FDD-Based FD-MIMO Systems Using Uniform Planar Arrays | Massive multiple-input multiple-output (MIMO) systems, which utilize a large
number of antennas at the base station, are expected to enhance network
throughput by enabling improved multiuser MIMO techniques. To deploy many
antennas in reasonable form factors, base stations are expected to employ
antenna arrays in both horizontal and vertical dimensions, which is known as
full-dimension (FD) MIMO. The most popular two-dimensional array is the uniform
planar array (UPA), where antennas are placed in a grid pattern. To exploit the
full benefit of massive MIMO in frequency division duplexing (FDD), the
downlink channel state information (CSI) should be estimated, quantized, and
fed back from the receiver to the transmitter. However, it is difficult to
accurately quantize the channel in a computationally efficient manner due to
the high dimensionality of the massive MIMO channel. In this paper, we develop
both narrowband and wideband CSI quantizers for FD-MIMO taking the properties
of realistic channels and the UPA into consideration. To improve quantization
quality, we focus on not only quantizing dominant radio paths in the channel,
but also combining the quantized beams. We also develop a hierarchical beam
search approach, which scans both vertical and horizontal domains jointly with
moderate computational complexity. Numerical simulations verify that the
performance of the proposed quantizers is better than that of previous CSI
quantization techniques.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Novel Stretch Energy Minimization Algorithm for Equiareal Parameterizations | Surface parameterizations have been widely applied to computer graphics and
digital geometry processing. In this paper, we propose a novel stretch energy
minimization (SEM) algorithm for the computation of equiareal parameterizations
of simply connected open surfaces with a very small area distortion and a
highly improved computational efficiency. In addition, the existence of
nontrivial limit points of the SEM algorithm is guaranteed under some mild
assumptions of the mesh quality. Numerical experiments indicate that the
efficiency, accuracy, and robustness of the proposed SEM algorithm outperform
other state-of-the-art algorithms. Applications of the SEM on surface remeshing
and surface registration for simply connected open surfaces are demonstrated
thereafter. Thanks to the SEM algorithm, the computations for these
applications can be carried out efficiently and robustly.
| 1 | 0 | 0 | 0 | 0 | 0 |
Time Complexity Analysis of a Distributed Stochastic Optimization in a Non-Stationary Environment | In this paper, we consider a distributed stochastic optimization problem
where the goal is to minimize the time average of a cost function subject to a
set of constraints on the time averages of related stochastic processes called
penalties. We assume that the state of the system is evolving in an independent
and non-stationary fashion and the "common information" available at each node
is distributed and delayed. Such stochastic optimization is an integral part of
many important problems in wireless networks such as scheduling, routing,
resource allocation and crowd sensing. We propose an approximate distributed
Drift-Plus-Penalty (DPP) algorithm, and show that it achieves a time average
cost (and penalties) that is within epsilon > 0 of the optimal cost (and
constraints) with high probability. Also, we provide a condition on the
convergence time t for this result to hold. In particular, for any delay D >= 0
in the common information, we use a coupling argument to prove that the
proposed algorithm converges almost surely to the optimal solution. We use an
application from wireless sensor networks to corroborate our theoretical
findings through simulation results.
| 1 | 0 | 1 | 0 | 0 | 0 |
Robust Counterfactual Inferences using Feature Learning and their Applications | In a wide variety of applications, including personalization, we want to
measure the difference in outcome due to an intervention and thus have to deal
with counterfactual inference. The feedback from a customer in any of these
situations is only 'bandit feedback' - that is, a partial feedback based on
whether we chose to intervene or not. Typically randomized experiments are
carried out to understand whether an intervention is overall better than no
intervention. Here we present a feature learning algorithm to learn from a
randomized experiment where the intervention in consideration is most effective
and where it is least effective rather than only focusing on the overall
impact, thus adding a context to our learning mechanism and extracting more
information. From the randomized experiment, we learn the feature
representations which divide the population into subpopulations where we
observe statistically significant difference in average customer feedback
between those who were subjected to the intervention and those who were not,
with a level of significance l, where l is a configurable parameter in our
model. We use this information to derive the value of the intervention in
consideration for each instance in the population. With experiments, we show
that using this additional learning, in future interventions, the context for
each instance could be leveraged to decide whether to intervene or not.
| 0 | 0 | 0 | 1 | 0 | 0 |
Regularity of symbolic powers and Arboricity of matroids | Let $\Delta$ be a simplicial complex of a matroid $M$. In this paper, we
explicitly compute the regularity of all the symbolic powers of a
Stanley-Reisner ideal $I_\Delta$ in terms of combinatorial data of the matroid
$M$. In order to do that, we provide a sharp bound between the arboricity of
$M$ and the circumference of its dual $M^*$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Max-Pooling Loss Training of Long Short-Term Memory Networks for Small-Footprint Keyword Spotting | We propose a max-pooling based loss function for training Long Short-Term
Memory (LSTM) networks for small-footprint keyword spotting (KWS), with low
CPU, memory, and latency requirements. The max-pooling loss training can be
further guided by initializing with a cross-entropy loss trained network. A
posterior smoothing based evaluation approach is employed to measure keyword
spotting performance. Our experimental results show that LSTM models trained
using cross-entropy loss or max-pooling loss outperform a cross-entropy loss
trained baseline feed-forward Deep Neural Network (DNN). In addition,
max-pooling loss trained LSTM with randomly initialized network performs better
compared to cross-entropy loss trained LSTM. Finally, the max-pooling loss
trained LSTM initialized with a cross-entropy pre-trained network shows the
best performance, which yields $67.6\%$ relative reduction compared to baseline
feed-forward DNN in Area Under the Curve (AUC) measure.
| 1 | 0 | 0 | 1 | 0 | 0 |
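Minimal sketch of the max-pooling idea named above (assumed to capture only the core selection rule, not the paper's exact training recipe, posterior smoothing, or model sizes): within a window known to contain the keyword, cross-entropy is applied only at the frame where the network's keyword posterior peaks. In the usual setup, non-keyword frames would additionally be trained with ordinary frame-wise cross-entropy; that part is omitted here.

```python
import torch
import torch.nn.functional as F

def max_pooling_loss(frame_logits, keyword_id):
    """frame_logits: (T, C) per-frame logits for one window that contains the keyword."""
    posteriors = frame_logits.softmax(dim=-1)        # (T, C)
    t_star = posteriors[:, keyword_id].argmax()      # frame with the strongest keyword evidence
    target = torch.tensor([keyword_id])
    # Cross-entropy on that single frame only; all other frames are ignored.
    return F.cross_entropy(frame_logits[t_star].unsqueeze(0), target)

# Tiny usage example with random stand-in "LSTM outputs" (assumed shapes).
logits = torch.randn(50, 4, requires_grad=True)      # 50 frames, 4 output classes
loss = max_pooling_loss(logits, keyword_id=2)
loss.backward()                                       # gradients reach only the selected frame
print(float(loss))
```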
On Packet Scheduling with Adversarial Jamming and Speedup | In Packet Scheduling with Adversarial Jamming, packets of arbitrary sizes
arrive over time to be transmitted over a channel in which instantaneous
jamming errors occur at times chosen by the adversary and not known to the
algorithm. The transmission taking place at the time of jamming is corrupt, and
the algorithm learns this fact immediately. An online algorithm maximizes the
total size of packets it successfully transmits and the goal is to develop an
algorithm with the lowest possible asymptotic competitive ratio, where the
additive constant may depend on packet sizes.
Our main contribution is a universal algorithm that works for any speedup and
packet sizes and, unlike previous algorithms for the problem, it does not need
to know these properties in advance. We show that this algorithm guarantees
1-competitiveness with speedup 4, making it the first known algorithm to
maintain 1-competitiveness with a moderate speedup in the general setting of
arbitrary packet sizes. We also prove a lower bound of $\phi+1\approx 2.618$ on
the speedup of any 1-competitive deterministic algorithm, showing that our
algorithm is close to the optimum.
Additionally, we formulate a general framework for analyzing our algorithm
locally and use it to show upper bounds on its competitive ratio for speedups
in $[1,4)$ and for several special cases, recovering some previously known
results, each of which had a dedicated proof. In particular, our algorithm is
3-competitive without speedup, matching both the (worst-case) performance of
the algorithm by Jurdzinski et al. and the lower bound by Anta et al.
| 1 | 0 | 0 | 0 | 0 | 0 |
Coarse-Grid Computational Fluid Dynamic (CG-CFD) Error Prediction using Machine Learning | Despite the progress in high performance computing, Computational Fluid
Dynamics (CFD) simulations are still computationally expensive for many
practical engineering applications such as simulating large computational
domains and highly turbulent flows. One of the major reasons for the high
expense of CFD is the need for a fine grid to resolve phenomena at the relevant
scale, and obtain a grid-independent solution. The fine grid requirements often
drive the computational time step size down, which makes long transient
problems prohibitively expensive. In the research presented, the feasibility of
a Coarse Grid CFD (CG-CFD) approach is investigated by utilizing Machine
Learning (ML) algorithms. Relying on coarse grids increases the discretization
error. Hence, a method is suggested to produce a surrogate model that predicts
the CG-CFD local errors to correct the variables of interest. Given
high-fidelity data, a surrogate model is trained to predict the CG-CFD local
errors as a function of the coarse grid local features. ML regression
algorithms are utilized to construct a surrogate model that relates the local
error and the coarse grid features. This method is applied to a
three-dimensional flow in a lid driven cubic cavity domain. The performance of
the method was assessed by training the surrogate model on the flow full field
spatial data and tested on new data (from flows of different Reynolds number
and/or computed by different grid sizes). The proposed method maximizes the
benefit of the available data and shows potential for a good predictive
capability.
| 0 | 1 | 0 | 0 | 0 | 0 |
EAC-Net: A Region-based Deep Enhancing and Cropping Approach for Facial Action Unit Detection | In this paper, we propose a deep learning based approach for facial action
unit detection by enhancing and cropping the regions of interest. The approach
is implemented by adding two novel nets (layers): the enhancing layers and the
cropping layers, to a pretrained CNN model. For the enhancing layers, we
designed an attention map based on facial landmark features and applied it to a
pretrained neural network to conduct enhanced learning (The E-Net). For the
cropping layers, we crop facial regions around the detected landmarks and
design convolutional layers to learn deeper features for each facial region
(C-Net). We then fuse the E-Net and the C-Net to obtain our Enhancing and
Cropping (EAC) Net, which can learn both feature enhancing and region cropping
functions. Our approach shows significant improvement in performance compared
to the state-of-the-art methods applied to BP4D and DISFA AU datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
Influence of surface and bulk water ice on the reactivity of a water-forming reaction | On the surface of icy dust grains in the dense regions of the interstellar
medium a rich chemistry can take place. Due to the low temperature, reactions
that proceed via a barrier can only take place through tunneling. The reaction
H + H$_2$O$_2$ $\rightarrow$ H$_2$O + OH is such a case with a gas-phase
barrier of $\sim$26.5 kJ/mol. Still the reaction is known to be involved in
water formation on interstellar grains. Here, we investigate the influence of a
water ice surface and of bulk ice on the reaction rate constant. Rate constants
are calculated using instanton theory down to 74 K. The ice is taken into
account via multiscale modeling, describing the reactants and the direct
surrounding at the quantum mechanical level with density functional theory
(DFT), while the rest of the ice is modeled on the molecular mechanical level
with a force field. We find that H$_2$O$_2$ binding energies cannot be captured
by a single value, but rather depend on the number of hydrogen bonds with
surface molecules. In highly amorphous surroundings the binding site can block
the routes of attack and impede the reaction. Furthermore, the activation
energies do not correlate with the binding energies of the same sites. The
unimolecular rate constants related to the Langmuir-Hinshelwood mechanism
increase as the activation energy decreases. Thus, we provide a lower limit for
the rate constant and argue that rate constants can have values up to two orders
of magnitude larger than this limit.
| 0 | 1 | 0 | 0 | 0 | 0 |
Impact of energy dissipation on interface shapes and on rates for dewetting from liquid substrates | We revisit the fundamental problem of liquid-liquid dewetting and perform a
detailed comparison of theoretical predictions based on thin-film models with
experimental measurements obtained by atomic force microscopy (AFM).
Specifically, we consider the dewetting of a liquid polystyrene (PS) layer from
a liquid polymethyl methacrylate (PMMA) layer, where the thicknesses and the
viscosities of PS and PMMA layers are similar. The excellent agreement of
experiment and theory reveals that dewetting rates for such systems follow no
universal power law, in contrast to dewetting scenarios on solid substrates.
Our new energetic approach allows us to assess the physical importance of
different contributions to the energy-dissipation mechanism, for which we
analyze the local flow fields and the local dissipation rates.
| 0 | 1 | 0 | 0 | 0 | 0 |
Resonance control of graphene drum resonator in nonlinear regime by standing wave of light | We demonstrate the control of resonance characteristics of a drum type
graphene mechanical resonator in the nonlinear oscillation regime by the
photothermal effect, which is induced by a standing wave of light between a
graphene and a substrate. Unlike the conventional Duffing type nonlinearity,
the resonance characteristics in the nonlinear oscillation regime are modulated by
the standing wave of light despite a small variation amplitude. From numerical
calculations with a combination of equations of heat and motion with Duffing
type nonlinearity, this can be explained by the photothermal effect causing
delayed modulation of the stress or tension of the graphene.
| 0 | 1 | 0 | 0 | 0 | 0 |
Uncertainty quantification of coal seam gas production prediction using Polynomial Chaos | A surrogate model approximates a computationally expensive solver. Polynomial
Chaos is a method to construct surrogate models by summing combinations of
carefully chosen polynomials. The polynomials are chosen to respect the
probability distributions of the uncertain input variables (parameters); this
allows for both uncertainty quantification and global sensitivity analysis.
In this paper we apply these techniques to a commercial solver for the
estimation of peak gas rate and cumulative gas extraction from a coal seam gas
well. The polynomial expansion is shown to honour the underlying geophysics
with low error when compared to a much more complex and computationally slower
commercial solver. We make use of advanced numerical integration techniques to
achieve this accuracy using relatively small amounts of training data.
| 0 | 0 | 1 | 0 | 0 | 0 |
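Generic illustration of the Polynomial Chaos construction described above (a one-dimensional probabilists'-Hermite expansion fitted by ordinary least squares to an assumed toy model; the paper itself targets a commercial coal-seam-gas solver and uses more advanced integration rules for the coefficients).

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(1)

def expensive_model(x):
    """Stand-in for the slow solver; x is a standard-normal uncertain parameter."""
    return np.exp(0.3 * x) + 0.1 * x ** 2

deg = 6
x_train = rng.standard_normal(200)            # training runs (assumed sampling plan)
A = He.hermevander(x_train, deg)              # probabilists' Hermite basis He_0 .. He_deg
coef, *_ = np.linalg.lstsq(A, expensive_model(x_train), rcond=None)

# Orthogonality of He_n under N(0,1) (E[He_n]=0 for n>=1, E[He_n^2]=n!) turns the
# fitted coefficients directly into approximate statistics of the model output.
mean_pce = coef[0]
var_pce = sum(c ** 2 * factorial(k) for k, c in enumerate(coef) if k >= 1)

x_mc = rng.standard_normal(200_000)           # brute-force Monte Carlo check on the true model
print(mean_pce, expensive_model(x_mc).mean())
print(var_pce, expensive_model(x_mc).var())
```

Because the basis is orthogonal under the input distribution, the squared coefficients also yield global (Sobol-type) sensitivity indices in the multivariate case, which is what makes the surrogate useful for both uncertainty quantification and sensitivity analysis.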
Multi-objective Model-based Policy Search for Data-efficient Learning with Sparse Rewards | The most data-efficient algorithms for reinforcement learning in robotics are
model-based policy search algorithms, which alternate between learning a
dynamical model of the robot and optimizing a policy to maximize the expected
return given the model and its uncertainties. However, the current algorithms
lack an effective exploration strategy to deal with sparse or misleading reward
scenarios: if they do not experience any state with a positive reward during
the initial random exploration, they are very unlikely to solve the problem. Here,
we propose a novel model-based policy search algorithm, Multi-DEX, that
leverages a learned dynamical model to efficiently explore the task space and
solve tasks with sparse rewards in a few episodes. To achieve this, we frame
the policy search problem as a multi-objective, model-based policy optimization
problem with three objectives: (1) generate maximally novel state trajectories,
(2) maximize the expected return and (3) keep the system in state-space regions
for which the model is as accurate as possible. We then optimize these
objectives using a Pareto-based multi-objective optimization algorithm. The
experiments show that Multi-DEX is able to solve sparse reward scenarios (with
a simulated robotic arm) in much lower interaction time than VIME, TRPO,
GEP-PG, CMA-ES and Black-DROPS.
| 1 | 0 | 0 | 1 | 0 | 0 |
Non-Asymptotic Analysis of Robust Control from Coarse-Grained Identification | This work explores the trade-off between the number of samples required to
accurately build models of dynamical systems and the degradation of performance
in various control objectives due to a coarse approximation. In particular, we
show that simple models can be easily fit from input/output data and are
sufficient for achieving various control objectives. We derive bounds on the
number of noisy input/output samples from a stable linear time-invariant system
that are sufficient to guarantee that the corresponding finite impulse response
approximation is close to the true system in the $\mathcal{H}_\infty$-norm. We
demonstrate that these demands are lower than those derived in prior art which
aimed to accurately identify dynamical models. We also explore how different
physical input constraints, such as power constraints, affect the sample
complexity. Finally, we show how our analysis fits within the established
framework of robust control, by demonstrating how a controller designed for an
approximate system provably meets performance objectives on the true system.
| 1 | 0 | 1 | 0 | 0 | 0 |
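A minimal sketch of the coarse-grained identification step described above (least-squares fitting of a finite impulse response from noisy input/output data); the first-order test system, noise level, and FIR length are assumptions, and the $\mathcal{H}_\infty$ error bounds and controller synthesis from the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# True stable LTI system (unknown to the identifier): y[t] = a*y[t-1] + b*u[t].
a, b = 0.8, 1.0
T, L = 2000, 30                          # number of samples and FIR length (assumed)

u = rng.standard_normal(T)               # persistently exciting input
y = np.zeros(T)
for t in range(1, T):
    y[t] = a * y[t - 1] + b * u[t]
y += 0.05 * rng.standard_normal(T)       # measurement noise

# Least-squares fit of y[t] ~ sum_k h[k] * u[t-k]; column k of Phi holds u shifted by k.
Phi = np.column_stack([np.roll(u, k) for k in range(L)])
h_hat, *_ = np.linalg.lstsq(Phi[L:], y[L:], rcond=None)   # drop the first L wrap-around rows

h_true = b * a ** np.arange(L)           # impulse response of the true system
print(np.max(np.abs(h_hat - h_true)))    # small worst-case FIR coefficient error
```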
Construction, sensitivity index, and synchronization speed of optimal networks | The stability (or instability) of synchronization is important in a number of
real world systems, including the power grid, the human brain and biological
cells. For identical synchronization, the synchronizability of a network, which
can be measured by the range of coupling strength that admits stable
synchronization, can be optimized for a given number of nodes and links.
Depending on the geometric degeneracy of the Laplacian eigenvectors, optimal
networks can be classified into different sensitivity levels, which we define
as a network's sensitivity index. We introduce an efficient and explicit way to
construct optimal networks of arbitrary size over a wide range of sensitivity
and link densities. Using coupled chaotic oscillators, we study synchronization
dynamics on optimal networks, showing that cospectral optimal networks can have
drastically different speed of synchronization. Such difference in dynamical
stability is found to be closely related to the different structural
sensitivity of these networks: generally, networks with high sensitivity index
are slower to synchronize, and, surprisingly, may not synchronize at all,
despite being theoretically stable under linear stability analysis.
| 0 | 1 | 1 | 0 | 0 | 0 |
A Vorticity-Preserving Hydrodynamical Scheme for Modeling Accretion Disk Flows | Vortices, turbulence, and unsteady non-laminar flows are likely both
prominent and dynamically important features of astrophysical disks. Such
strongly nonlinear phenomena are often difficult, however, to simulate
accurately, and are generally amenable to analytic treatment only in idealized
form. In this paper, we explore the evolution of compressible two-dimensional
flows using an implicit dual-time hydrodynamical scheme that strictly conserves
vorticity (if applied to simulate inviscid flows for which Kelvin's Circulation
Theorem is applicable). The algorithm is based on the work of Lerat, Falissard
& Side (2007), who proposed it in the context of terrestrial applications such
as the blade-vortex interactions generated by helicopter rotors. We present
several tests of Lerat et al.'s vorticity-preserving approach, which we have
implemented to second-order accuracy, providing side-by-side comparisons with
other algorithms that are frequently used in protostellar disk simulations. The
comparison codes include one based on explicit, second-order van-Leer
advection, one based on spectral methods, and another that implements a
higher-order Godunov solver. Our results suggest that Lerat et al.'s algorithm
will be useful for simulations of astrophysical environments in which vortices
play a dynamical role, and where strong shocks are not expected.
| 0 | 1 | 0 | 0 | 0 | 0 |
Direct simulation of liquid-gas-solid flow with a free surface lattice Boltzmann method | Direct numerical simulation of liquid-gas-solid flows is uncommon due to the
considerable computational cost. As the grid spacing is determined by the
smallest involved length scale, large grid sizes become necessary -- in
particular if the bubble-particle aspect ratio is on the order of 10 or larger.
Hence, the question of both feasibility and reasonability arises. In this
paper, we present a fully parallel, scalable method for direct numerical
simulation of bubble-particle interaction at a size ratio of 1-2 orders of
magnitude that makes simulations feasible on currently available
super-computing resources. With the presented approach, simulations of bubbles
in suspension columns consisting of more than $100\,000$ fully resolved
particles become possible. Furthermore, we demonstrate the significance of
particle-resolved simulations by comparison to previous unresolved solutions.
The results indicate that fully-resolved direct numerical simulation is indeed
necessary to predict the flow structure of bubble-particle interaction problems
correctly.
| 0 | 1 | 0 | 0 | 0 | 0 |
TALL: Temporal Activity Localization via Language Query | This paper focuses on temporal localization of actions in untrimmed videos.
Existing methods typically train classifiers for a pre-defined list of actions
and apply them in a sliding window fashion. However, activities in the wild
consist of a wide combination of actors, actions and objects; it is difficult
to design a proper activity list that meets users' needs. We propose to
localize activities by natural language queries. Temporal Activity Localization
via Language (TALL) is challenging as it requires: (1) suitable design of text
and video representations to allow cross-modal matching of actions and language
queries; (2) ability to locate actions accurately given features from sliding
windows of limited granularity. We propose a novel Cross-modal Temporal
Regression Localizer (CTRL) to jointly model text query and video clips, output
alignment scores and action boundary regression results for candidate clips.
For evaluation, we adopt TaCoS dataset, and build a new dataset for this task
on top of Charades by adding sentence temporal annotations, called
Charades-STA. We also build complex sentence queries in Charades-STA for test.
Experimental results show that CTRL outperforms previous methods significantly
on both datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
2D granular flows with the $μ(I)$ rheology and side walls friction: a well balanced multilayer discretization | We present here numerical modelling of granular flows with the $\mu(I)$
rheology in confined channels. The contribution is twofold: (i) a model to
approximate the Navier-Stokes equations with the $\mu(I)$ rheology through an
asymptotic analysis. Under the hypothesis of a one-dimensional flow, this model
takes into account side walls friction; (ii) a multilayer discretization
following Fernández-Nieto et al. (J. Fluid Mech., vol. 798, 2016, pp.
643-681). In this new numerical scheme, we propose an appropriate treatment of
the rheological terms through a hydrostatic reconstruction which allows this
scheme to be well-balanced and therefore to deal with dry areas. Based on
academic tests, we first evaluate the influence of the width of the channel on
the normal profiles of the downslope velocity thanks to the multilayer approach
that is intrinsically able to describe changes from Bagnold to S-shaped (and
vice versa) velocity profiles. We also check the well-balance property of the
proposed numerical scheme. We show that approximating side walls friction using
single-layer models may lead to strong errors. Secondly, we compare the
numerical results with experimental data on granular collapses. We show that
the proposed scheme allows us to qualitatively reproduce the deposit in the
case of a rigid bed (i.e., dry area) and that the error made by replacing the
dry area by a small layer of material may be large if this layer is not thin
enough. The proposed model is also able to reproduce the time evolution of the
free surface and of the flow/no-flow interface. In addition, it reproduces the
effect of erosion for granular flows over initially static material lying on
the bed. This is possible when using a variable friction coefficient $\mu(I)$
but not with a constant friction coefficient.
| 0 | 1 | 0 | 0 | 0 | 0 |
A scientists' view of scientometrics: Not everything that counts can be counted | Like it or not, attempts to evaluate and monitor the quality of academic
research have become increasingly prevalent worldwide. Performance reviews
range from at the level of individuals, through research groups and
departments, to entire universities. Many of these are informed by, or
functions of, simple scientometric indicators and the results of such exercises
impact onto careers, funding and prestige. However, there is sometimes a
failure to appreciate that scientometrics are, at best, very blunt instruments
and their incorrect usage can be misleading. Rather than accepting the rise and
fall of individuals and institutions on the basis of such imprecise measures,
calls have been made for indicators to be regularly scrutinised and for
improvements to the evidence base in this area. It is thus incumbent upon the
scientific community, especially the physics, complexity-science and
scientometrics communities, to scrutinise metric indicators. Here, we review
recent attempts to do this and show that some metrics in widespread use cannot
be used as reliable indicators of research quality.
| 1 | 1 | 0 | 0 | 0 | 0 |
On analyzing and evaluating privacy measures for social networks under active attack | Widespread usage of complex interconnected social networks such as Facebook,
Twitter and LinkedIn in modern internet era has also unfortunately opened the
door for privacy violation of users of such networks by malicious entities. In
this article we investigate, both theoretically and empirically, privacy
violation measures of large networks under active attacks that were recently
introduced in (Information Sciences, 328, 403-417, 2016). Our theoretical
result indicates that the network manager responsible for prevention of privacy
violation must be very careful in designing the network if its topology does
not contain a cycle. Our empirical results shed light on privacy violation
properties of eight real social networks as well as a large number of synthetic
networks generated by both the classical Erdos-Renyi model and the scale-free
random networks generated by the Barabasi-Albert preferential-attachment model.
| 1 | 0 | 0 | 0 | 0 | 0 |
Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise | Recent developments have established the vulnerability of deep reinforcement
learning to policy manipulation attacks via intentionally perturbed inputs,
known as adversarial examples. In this work, we propose a technique for
mitigation of such attacks based on addition of noise to the parameter space of
deep reinforcement learners during training. We experimentally verify the
effect of parameter-space noise in reducing the transferability of adversarial
examples, and demonstrate the promising performance of this technique in
mitigating the impact of whitebox and blackbox attacks at both test and
training times.
| 0 | 0 | 0 | 1 | 0 | 0 |
Multi-Observation Elicitation | We study loss functions that measure the accuracy of a prediction based on
multiple data points simultaneously. To our knowledge, such loss functions have
not been studied before in the area of property elicitation or in machine
learning more broadly. As compared to traditional loss functions that take only
a single data point, these multi-observation loss functions can in some cases
drastically reduce the dimensionality of the hypothesis required. In
elicitation, this corresponds to requiring many fewer reports; in empirical
risk minimization, it corresponds to algorithms on a hypothesis space of much
smaller dimension. We explore some examples of the tradeoff between
dimensionality and number of observations, give some geometric
characterizations and intuition for relating loss functions and the properties
that they elicit, and discuss some implications for both elicitation and
machine-learning contexts.
| 1 | 0 | 0 | 0 | 0 | 0 |
Next Stop "NoOps": Enabling Cross-System Diagnostics Through Graph-based Composition of Logs and Metrics | Performing diagnostics in IT systems is an increasingly complicated task, and
it is not doable in satisfactory time by even the most skillful operators.
Systems and their architecture change very rapidly in response to business and
user demand. Many organizations see value in the maintenance and management
model of NoOps that stands for No Operations. One of the implementations of
this model is a system that is maintained automatically without any human
intervention. The path to NoOps involves not only precise and fast diagnostics
but also reusing as much knowledge as possible after the system is reconfigured
or changed. The biggest challenge is to leverage knowledge on one IT system and
reuse this knowledge for diagnostics of another, different system. We propose a
framework of weighted graphs which can transfer knowledge, and perform
high-quality diagnostics of IT systems. We encode all possible data in a graph
representation of a system state and automatically calculate weights of these
graphs. Then, thanks to the evaluation of similarity between graphs, we
transfer knowledge about failures from one system to another and use it for
diagnostics. We successfully evaluate the proposed approach on Spark, Hadoop,
Kafka and Cassandra systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Davenport-Heilbronn Theorems for Quotients of Class Groups | We prove a generalization of the Davenport-Heilbronn theorem to quotients of
ideal class groups of quadratic fields by the primes lying above a fixed set of
rational primes $S$. Additionally, we obtain average sizes for the relaxed
Selmer group $\mathrm{Sel}_3^S(K)$ and for
$\mathcal{O}_{K,S}^\times/(\mathcal{O}_{K,S}^\times)^3$ as $K$ varies among
quadratic fields with a fixed signature ordered by discriminant.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning Universal Adversarial Perturbations with Generative Models | Neural networks are known to be vulnerable to adversarial examples, inputs
that have been intentionally perturbed to remain visually similar to the source
input, but cause a misclassification. It was recently shown that given a
dataset and classifier, there exist so-called universal adversarial
perturbations: a single perturbation that causes a misclassification when
applied to any input. In this work, we introduce universal adversarial
networks, a generative network that is capable of fooling a target classifier
when its generated output is added to a clean sample from a dataset. We show
that this technique improves on known universal adversarial attacks.
| 1 | 0 | 0 | 1 | 0 | 0 |
Large-Scale Classification using Multinomial Regression and ADMM | We present a novel method for learning the weights in multinomial logistic
regression based on the alternating direction method of multipliers (ADMM). In
each iteration, our algorithm decomposes the training into three steps: a
linear least-squares problem for the weights, a global variable update
involving a separable cross-entropy loss function, and a trivial dual variable
update. The least-squares problem can be factorized in the off-line phase, and
the separability in the global variable update allows for efficient
parallelization, leading to faster convergence. We compare our method with
stochastic gradient descent for linear classification as well as for transfer
learning and show that the proposed ADMM-Softmax leads to improved
generalization and convergence.
| 1 | 0 | 0 | 1 | 0 | 0 |
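The splitting described in this abstract can be illustrated with a short sketch. The following is a minimal numpy implementation of a generic ADMM consensus splitting for multinomial logistic regression (weights step, separable cross-entropy step, dual step); the step size, the small ridge term, and the inner gradient loop are our own assumptions, and this is not the authors' ADMM-Softmax code.

```python
import numpy as np

def admm_softmax(X, Y, rho=1.0, iters=50, inner_steps=5):
    """Generic ADMM splitting for multinomial logistic regression (sketch).

    X: (n, d) features; Y: (n, c) one-hot labels.
    Variables: W (weights), Z (global variable approximating X @ W),
    U (scaled dual). Not the authors' reference implementation.
    """
    n, d = X.shape
    c = Y.shape[1]
    W = np.zeros((d, c))
    Z = np.zeros((n, c))
    U = np.zeros((n, c))

    # Off-line factorization of the least-squares system.
    G = X.T @ X + 1e-8 * np.eye(d)
    L = np.linalg.cholesky(G)
    solve = lambda B: np.linalg.solve(L.T, np.linalg.solve(L, B))

    for _ in range(iters):
        # 1) weights step: linear least squares, reuses the factorization.
        W = solve(X.T @ (Z - U))
        V = X @ W + U
        # 2) global variable step: separable cross-entropy prox,
        #    approximated by a few gradient steps (0.5 bounds softmax curvature).
        for _ in range(inner_steps):
            P = np.exp(Z - Z.max(axis=1, keepdims=True))
            P /= P.sum(axis=1, keepdims=True)
            Z -= ((P - Y) + rho * (Z - V)) / (0.5 + rho)
        # 3) trivial dual variable update.
        U += X @ W - Z
    return W

# Toy usage on a linearly separable problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
labels = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[labels]
W = admm_softmax(X, Y)
print("train accuracy:", ((X @ W).argmax(axis=1) == labels).mean())
```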
First order dipolar phase transition in the Dicke model with infinitely coordinated frustrating interaction | We found analytically a first order quantum phase transition in the Cooper
pair box array of $N$ low-capacitance Josephson junctions capacitively coupled
to a resonant photon in a microwave cavity. The Hamiltonian of the system maps
on the extended Dicke Hamiltonian of $N$ spins one-half with infinitely
coordinated antiferromagnetic (frustrating) interaction. This interaction
arises from the gauge-invariant coupling of the Josephson junctions phases to
the vector potential of the resonant photon field. In the $N \gg 1$ semiclassical
limit, we found a critical coupling at which the ground state of the system
switches to the one with a net collective electric dipole moment of the Cooper
pair boxes coupled to superradiant equilibrium photonic condensate. This phase
transition changes from the first to second order if the frustrating
interaction is switched off. A self-consistently `rotating' Holstein-Primakoff
representation for the Cartesian components of the total superspin is proposed,
which makes it possible to trace both the first- and the second-order quantum phase
transitions in the extended and standard Dicke models, respectively.
| 0 | 1 | 0 | 0 | 0 | 0 |
Parsimonious Bayesian deep networks | Combining Bayesian nonparametrics and a forward model selection strategy, we
construct parsimonious Bayesian deep networks (PBDNs) that infer
capacity-regularized network architectures from the data and require neither
cross-validation nor fine-tuning when training the model. One of the two
essential components of a PBDN is the development of a special infinitely wide
single-hidden-layer neural network, whose number of active hidden units can be
inferred from the data. The other one is the construction of a greedy
layer-wise learning algorithm that uses a forward model selection criterion to
determine when to stop adding another hidden layer. We develop both Gibbs
sampling and stochastic gradient descent based maximum a posteriori inference
for PBDNs, providing state-of-the-art classification accuracy and interpretable
data subtypes near the decision boundaries, while maintaining low computational
complexity for out-of-sample prediction.
| 0 | 0 | 0 | 1 | 0 | 0 |
Learning to Play Othello with Deep Neural Networks | Achieving superhuman playing level by AlphaGo corroborated the capabilities
of convolutional neural architectures (CNNs) for capturing complex spatial
patterns. This result was to a great extent due to several analogies between Go
board states and 2D images CNNs have been designed for, in particular
translational invariance and a relatively large board. In this paper, we verify
whether CNN-based move predictors prove effective for Othello, a game with
significantly different characteristics, including a much smaller board size
and complete lack of translational invariance. We compare several CNN
architectures and board encodings, augment them with state-of-the-art
extensions, train on an extensive database of experts' moves, and examine them
with respect to move prediction accuracy and playing strength. The empirical
evaluation confirms high capabilities of neural move predictors and suggests a
strong correlation between prediction accuracy and playing strength. The best
CNNs not only surpass all other 1-ply Othello players proposed to date but
defeat (2-ply) Edax, the best open-source Othello player.
| 1 | 0 | 0 | 1 | 0 | 0 |
A causal modelling framework for reference-based imputation and tipping point analysis | We consider estimating the "de facto" or effectiveness estimand in a
randomised placebo-controlled or standard-of-care-controlled drug trial with
quantitative outcome, where participants who discontinue an investigational
treatment are not followed up thereafter. Carpenter et al (2013) proposed
reference-based imputation methods which use a reference arm to inform the
distribution of post-discontinuation outcomes and hence to inform an imputation
model. However, the reference-based imputation methods were not formally
justified. We present a causal model which makes an explicit assumption in a
potential outcomes framework about the maintained causal effect of treatment
after discontinuation. We show that the "jump to reference", "copy reference"
and "copy increments in reference" reference-based imputation methods, with the
control arm as the reference arm, are special cases of the causal model with
specific assumptions about the causal treatment effect. Results from simulation
studies are presented. We also show that the causal model provides a flexible
and transparent framework for a tipping point sensitivity analysis in which we
vary the assumptions made about the causal effect of discontinued treatment. We
illustrate the approach with data from two longitudinal clinical trials.
| 0 | 0 | 0 | 1 | 0 | 0 |
The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race | Recent studies in social media spam and automation provide anecdotal
argumentation of the rise of a new generation of spambots, so-called social
spambots. Here, for the first time, we extensively study this novel phenomenon
on Twitter and we provide quantitative evidence that a paradigm-shift exists in
spambot design. First, we measure current Twitter's capabilities of detecting
the new social spambots. Later, we assess the human performance in
discriminating between genuine accounts, social spambots, and traditional
spambots. Then, we benchmark several state-of-the-art techniques proposed by
the academic literature. Results show that neither Twitter, nor humans, nor
cutting-edge applications are currently capable of accurately detecting the new
social spambots. Our results call for new approaches capable of turning the
tide in the fight against this rising phenomenon. We conclude by reviewing the
latest literature on spambots detection and we highlight an emerging common
research trend based on the analysis of collective behaviors. Insights derived
from both our extensive experimental campaign and survey shed light on the most
promising directions of research and lay the foundations for the arms race
against the novel social spambots. Finally, to foster research on this novel
phenomenon, we make publicly available to the scientific community all the
datasets used in this study.
| 1 | 0 | 0 | 0 | 0 | 0 |
Weakly-Supervised Spatial Context Networks | We explore the power of spatial context as a self-supervisory signal for
learning visual representations. In particular, we propose spatial context
networks that learn to predict a representation of one image patch from another
image patch, within the same image, conditioned on their real-valued relative
spatial offset. Unlike auto-encoders, that aim to encode and reconstruct
original image patches, our network aims to encode and reconstruct intermediate
representations of the spatially offset patches. As such, the network learns a
spatially conditioned contextual representation. By testing performance with
various patch selection mechanisms we show that focusing on object-centric
patches is important, and that using object proposals as a patch selection
mechanism leads to the highest improvement in performance. Further, unlike
auto-encoders, context encoders [21], or other forms of unsupervised feature
learning, we illustrate that contextual supervision (with pre-trained model
initialization) can improve on existing pre-trained model performance. We build
our spatial context networks on top of standard VGG_19 and CNN_M architectures
and, among other things, show that we can achieve improvements (with no
additional explicit supervision) over the original ImageNet pre-trained VGG_19
and CNN_M models in object categorization and detection on VOC2007.
| 1 | 0 | 0 | 0 | 0 | 0 |
Variation of ionizing continuum: the main driver of Broad Absorption Line Variability | We present a statistical analysis of the variability of broad absorption
lines (BALs) in quasars using the large multi-epoch spectroscopic dataset of
the Sloan Digital Sky Survey Data Release 12 (SDSS DR12). We divide the sample
into two groups according to the pattern of the variation of the C IV BAL with
respect to that of the continuum: in group T1 the equivalent width (EW) of the
BAL decreases (increases) when the continuum brightens (dims), while in group
T2 the EW and the continuum vary in the opposite sense. We find that T2 has
significantly (P_T < 10^{-6}, Student's t-test) higher EW ratios (R) of the
Si IV to the C IV BAL than T1. Our result agrees with the prediction of
photoionization models that the C^{3+} column density increases (decreases) if
there is a (or no) C^{3+} ionization front, while R decreases with the incident
continuum. We show that BAL variability in at least 80% of quasars is driven by
the variation of the ionizing continuum, while other models that predict
uncorrelated BAL and continuum variability contribute less than 20%.
Considering the large uncertainty in the continuum flux calibration, the latter
fraction may be much smaller. When the sample is binned into different time
intervals between the two observations, we find a significant difference in the
distribution of R between T1 and T2 in all time bins, down to ΔT < 6 days,
suggesting that the BAL outflow in a fraction of quasars has a recombination
time scale of only a few days.
| 0 | 1 | 0 | 0 | 0 | 0 |
Low Dimensional Atomic Norm Representations in Line Spectral Estimation | The line spectral estimation problem consists in recovering the frequencies
of a complex valued time signal that is assumed to be sparse in the spectral
domain from its discrete observations. Unlike the gridding required by the
classical compressed sensing framework, line spectral estimation reconstructs
signals whose spectral supports lie continuously in the Fourier domain. While
recent advances have shown that atomic norm relaxation produces highly robust
estimates in this context, the computational cost of this approach remains the
major obstacle to its application in practical systems.
In this work, we aim to address the complexity issue by studying the atomic
norm minimization problem from low dimensional projection of the signal
samples. We derive conditions on the sub-sampling matrix under which the
partial atomic norm can be expressed by a low-dimensional semidefinite program.
Moreover, we illustrate the tightness of this relaxation by showing that it is
possible to recover the original signal in poly-logarithmic time for two
specific sub-sampling patterns.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Trio Identity for Quasi-Monte Carlo Error | Monte Carlo methods approximate integrals by sample averages of integrand
values. The error of Monte Carlo methods may be expressed as a trio identity:
the product of the variation of the integrand, the discrepancy of the sampling
measure, and the confounding. The trio identity has different versions,
depending on whether the integrand is deterministic or Bayesian and whether the
sampling measure is deterministic or random. Although the variation and the
discrepancy are common in the literature, the confounding is relatively unknown
and under-appreciated. Theory and examples are used to show how the cubature
error may be reduced by employing the low discrepancy sampling that defines
quasi-Monte Carlo methods. The error may also be reduced by rewriting the
integral in terms of a different integrand. Finally, the confounding explains
why the cubature error might decay at a rate different from that of the
discrepancy.
| 0 | 0 | 1 | 0 | 0 | 0 |
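A schematic rendering of the decomposition named in this abstract, in our own notation (the precise definitions of the three factors are given in the paper, not here):

```latex
% Schematic form of the trio identity described above (our notation):
% the cubature error of \mu(f) \approx \nu(f) factors as
%   error = confounding x discrepancy x variation.
\operatorname{err}(f,\nu)
  := \int f \,\mathrm{d}\mu - \int f \,\mathrm{d}\nu
  = \operatorname{CNF}(f,\nu)\,\operatorname{DSC}(\nu)\,\operatorname{VAR}(f).
```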
Weak type operator Lipschitz and commutator estimates for commuting tuples | Let $f: \mathbb{R}^d \to\mathbb{R}$ be a Lipschitz function. If $B$ is a
bounded self-adjoint operator and if $\{A_k\}_{k=1}^d$ are commuting bounded
self-adjoint operators such that $[A_k,B]\in L_1(H),$ then
$$\|[f(A_1,\cdots,A_d),B]\|_{1,\infty}\leq
c(d)\|\nabla(f)\|_{\infty}\max_{1\leq k\leq d}\|[A_k,B]\|_1,$$ where $c(d)$ is
a constant independent of $f$, $\{A_k\}_{k=1}^d$ and $B$, and
$\|\cdot\|_{1,\infty}$ denotes the weak $L_1$-norm. If $\{X_k\}_{k=1}^d$
(respectively, $\{Y_k\}_{k=1}^d$) are commuting bounded self-adjoint operators
such that $X_k-Y_k\in L_1(H),$ then
$$\|f(X_1,\cdots,X_d)-f(Y_1,\cdots,Y_d)\|_{1,\infty}\leq
c(d)\|\nabla(f)\|_{\infty}\max_{1\leq k\leq d}\|X_k-Y_k\|_1.$$
| 0 | 0 | 1 | 0 | 0 | 0 |
Fast Spectral Ranking for Similarity Search | Despite the success of deep learning on representing images for particular
object retrieval, recent studies show that the learned representations still
lie on manifolds in a high dimensional space. This makes the Euclidean nearest
neighbor search biased for this task. Exploring the manifolds online remains
expensive even if a nearest neighbor graph has been computed offline. This work
introduces an explicit embedding reducing manifold search to Euclidean search
followed by dot product similarity search. This is equivalent to linear graph
filtering of a sparse signal in the frequency domain. To speed up online
search, we compute an approximate Fourier basis of the graph offline. We
improve the state of the art on particular object retrieval datasets including the
challenging Instre dataset containing small objects. At a scale of 10^5 images,
the offline cost is only a few hours, while query time is comparable to
standard similarity search.
| 1 | 0 | 0 | 0 | 0 | 0 |
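As a hedged illustration of ranking by linear graph filtering with an offline spectral basis, the following numpy sketch eigendecomposes a small normalized affinity matrix offline and filters a sparse query vector online. The low-pass transfer function h(λ) = (1 − α)/(1 − αλ) is a common diffusion-style choice and is only assumed to be representative of the paper's filter; a real system would use sparse matrices and an approximate partial eigensolver.

```python
import numpy as np

def spectral_offline(W, rank=64):
    """Offline stage: spectral basis of a symmetric affinity matrix W."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]   # normalized affinity
    lam, U = np.linalg.eigh(S)
    idx = np.argsort(lam)[::-1][:rank]                   # keep largest eigenvalues
    return lam[idx], U[:, idx]

def spectral_query(lam, U, y, alpha=0.9):
    """Online stage: filter the sparse query vector y in the spectral domain."""
    h = (1.0 - alpha) / (1.0 - alpha * lam)              # low-pass transfer function
    return U @ (h * (U.T @ y))                           # ranking scores

# Toy usage: 5 items; the query is most similar to item 0.
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
lam, U = spectral_offline(W, rank=5)
y = np.zeros(5); y[0] = 1.0
print(np.argsort(-spectral_query(lam, U, y)))            # item 0 should rank first
```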
Transition of multi-diffusive states in a biased periodic potential | We study a frequency-dependent damping model of hyper-diffusion within the
generalized Langevin equation. The model allows for the colored noise defined
by its spectral density, assumed to be proportional to $\omega^{\delta-1}$ at
low frequencies with $0<\delta<1$ (sub-Ohmic damping) or $1<\delta<2$
(super-Ohmic damping), where the frequency-dependent damping is deduced from
the noise by means of the fluctuation-dissipation theorem. It is shown that for
super-Ohmic damping and certain parameters, the diffusive process of the
particle in a tilted periodic potential undergoes four successive
time regimes: thermalization, hyper-diffusion, collapse and asymptotic
restoration. To analyse the transition between multi-diffusive states, we
demonstrate that the first exit time of the particle escaping from the locked
state into the running state follows an exponential distribution. The concept
of an equivalent velocity trap is introduced in the present model; moreover,
the ballistic diffusive system is also considered as a marginal case, in which,
however, the collapsed state of diffusion does not appear.
| 0 | 1 | 0 | 0 | 0 | 0 |
Deformable Classifiers | Geometric variations of objects, which do not modify the object class, pose a
major challenge for object recognition. These variations could be rigid as well
as non-rigid transformations. In this paper, we design a framework for training
deformable classifiers, where latent transformation variables are introduced,
and a transformation of the object image to a reference instantiation is
computed in terms of the classifier output, separately for each class. The
classifier outputs for each class, after transformation, are compared to yield
the final decision. As a by-product of the classification this yields a
transformation of the input object to a reference pose, which can be used for
downstream tasks such as the computation of object support. We apply a two-step
training mechanism for our framework, which alternates between optimizing over
the latent transformation variables and the classifier parameters to minimize
the loss function. We show that multilayer perceptrons, also known as deep
networks, are well suited for this approach and achieve state of the art
results on the rotated MNIST and the Google Earth dataset, and produce
competitive results on MNIST and CIFAR-10 when training on smaller subsets of
training data.
| 0 | 0 | 0 | 1 | 0 | 0 |
Energy Dissipation in Hamiltonian Chains of Rotators | We discuss, in the context of energy flow in high-dimensional systems and
Kolmogorov-Arnol'd-Moser (KAM) theory, the behavior of a chain of rotators
(rotors) which is purely Hamiltonian, apart from dissipation at just one end.
We derive bounds on the dissipation rate which become arbitrarily small in
certain physical regimes, and we present numerical evidence that these bounds
are sharp. We relate this to the decoupling of non-resonant terms as is known
in KAM problems.
| 0 | 1 | 1 | 0 | 0 | 0 |
Computational complexity, torsion-freeness of homoclinic Floer homology, and homoclinic Morse inequalities | Floer theory was originally devised to estimate the number of 1-periodic
orbits of Hamiltonian systems. In earlier works, we constructed Floer homology
for homoclinic orbits on two dimensional manifolds using combinatorial
techniques. In the present paper, we study theoretical aspects of the computational
complexity of homoclinic Floer homology. More precisely, for finding the
homoclinic points and immersions that generate the homology and its boundary
operator, we establish sharp upper bounds in terms of iterations of the
underlying symplectomorphism. This prepares the ground for future numerical
works.
Although originally aimed at numerics, the above bounds also provide purely
algebraic applications, namely
1) Torsion-freeness of primary homoclinic Floer homology.
2) Morse type inequalities for primary homoclinic orbits.
| 0 | 0 | 1 | 0 | 0 | 0 |
Ancillarity-Sufficiency Interweaving Strategy (ASIS) for Boosting MCMC Estimation of Stochastic Volatility Models | Bayesian inference for stochastic volatility models using MCMC methods depends
heavily on the actual parameter values in terms of sampling efficiency. While draws
from the posterior utilizing the standard centered parameterization break down
when the volatility of volatility parameter in the latent state equation is
small, non-centered versions of the model show deficiencies for highly
persistent latent variable series. The novel approach of
ancillarity-sufficiency interweaving has recently been shown to aid in
overcoming these issues for a broad class of multilevel models. In this paper,
we demonstrate how such an interweaving strategy can be applied to stochastic
volatility models in order to greatly improve sampling efficiency for all
parameters and throughout the entire parameter range. Moreover, this method of
"combining best of different worlds" allows for inference for parameter
constellations that have previously been infeasible to estimate without the
need to select a particular parameterization beforehand.
| 0 | 0 | 0 | 1 | 0 | 0 |
Simple root flows for Hitchin representations | We study simple root flows and Liouville currents for Hitchin
representations. We show that the Liouville current is associated to the
measure of maximal entropy for a simple root flow, derive a Liouville volume
rigidity result, and construct a Liouville pressure metric on the Hitchin
component.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Origin of Deep Learning | This paper is a review of the evolutionary history of deep learning models.
It covers the span from the genesis of neural networks, when associationism
modeling of the brain was studied, to the models that dominated the last decade of research
in deep learning like convolutional neural networks, deep belief networks, and
recurrent neural networks. In addition to a review of these models, this paper
primarily focuses on the precedents of the models above, examining how the
initial ideas are assembled to construct the early models and how these
preliminary models are developed into their current forms. Many of these
evolutionary paths last more than half a century and have a diversity of
directions. For example, the CNN is built on prior knowledge of the biological
vision system; the DBN evolved from a trade-off between the modeling power and
computational complexity of graphical models; and many present-day models are
neural counterparts of classical linear models. This paper reviews these evolutionary paths and
offers a concise thought flow of how these models are developed, and aims to
provide a thorough background for deep learning. More importantly, along with
the path, this paper summarizes the gist behind these milestones and proposes
many directions to guide the future research of deep learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
Control of Asynchronous Imitation Dynamics on Networks | Imitation is widely observed in populations of decision-making agents. Using
our recent convergence results for asynchronous imitation dynamics on networks,
we consider how such networks can be efficiently driven to a desired
equilibrium state by offering payoff incentives for using a certain strategy,
either uniformly or targeted to individuals. In particular, if for each
available strategy, agents playing that strategy receive maximum payoff when
their neighbors play that same strategy, we show that providing incentives to
agents in a network that is at equilibrium will result in convergence to a
unique new equilibrium. For the case when a uniform incentive can be offered to
all agents, this result allows the computation of the optimal incentive using a
binary search algorithm. When incentives can be targeted to individual agents,
we propose an algorithm to select which agents should be chosen based on
iteratively maximizing a ratio of the number of agents who adopt the desired
strategy to the payoff incentive required to get those agents to do so.
Simulations demonstrate that the proposed algorithm computes near-optimal
targeted payoff incentives for a range of networks and payoff distributions in
coordination games.
| 1 | 1 | 0 | 0 | 0 | 0 |
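The uniform-incentive computation mentioned above reduces to a one-dimensional binary search once convergence to a unique equilibrium makes the "does incentive r reach the target?" predicate monotone. A minimal sketch, in which reaches_target is a hypothetical stand-in for simulating the asynchronous imitation dynamics to equilibrium:

```python
def minimal_uniform_incentive(reaches_target, lo=0.0, hi=10.0, tol=1e-4):
    """Binary search for the smallest uniform payoff incentive r such that the
    network, started at its current equilibrium, converges to the desired
    strategy profile.

    `reaches_target(r)` is a hypothetical monotone predicate (e.g., simulate
    the asynchronous imitation dynamics under incentive r and report whether
    the desired equilibrium is reached).  Sketch only.
    """
    if not reaches_target(hi):
        raise ValueError("even the largest incentive considered does not work")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if reaches_target(mid):
            hi = mid          # mid works: try a smaller incentive
        else:
            lo = mid          # mid fails: a larger incentive is needed
    return hi

# Toy usage with a stand-in predicate: incentives above 1.3 "work".
print(minimal_uniform_incentive(lambda r: r >= 1.3))
```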
Finsler structures on holomorphic Lie algebroids | Complex Finsler vector bundles have been studied mainly by T. Aikou, who
defined complex Finsler structures on holomorphic vector bundles. In this
paper, we consider the more general case of a holomorphic Lie algebroid E and
we introduce Finsler structures, partial and Chern-Finsler connections on it.
First, we recall some basic notions on holomorphic Lie algebroids. Then, using
an idea from E. Martinez, we introduce the concept of complexified prolongation
of such an algebroid. Also, we study nonlinear and linear connections on the
tangent bundle of E and on the prolongation of E and we investigate the
relation between their coefficients. The analogue of the classical
Chern-Finsler connection is defined and studied in the paper for the case of
the holomorphic Lie algebroid.
| 0 | 0 | 1 | 0 | 0 | 0 |
The role of complex analysis in modeling economic growth | Development and growth are complex and tumultuous processes. Modern economic
growth theories identify some key determinants of economic growth. However, the
relative importance of the determinants remains unknown, and additional
variables may help clarify the directions and dimensions of the interactions.
The novel stream of literature on economic complexity goes beyond aggregate
measures of productive inputs, and considers instead a more granular and
structural view of the productive possibilities of countries, i.e. their
capabilities. Different endowments of capabilities are crucial ingredients in
explaining differences in economic performances. In this paper we employ
economic fitness, a measure of productive capabilities obtained through complex
network techniques. Focusing on the combined roles of fitness and some more
traditional drivers of growth, we build a bridge between economic growth
theories and the economic complexity literature. Our findings, in agreement
with other recent empirical studies, show that fitness plays a crucial role in
fostering economic growth and, when it is included in the analysis, can be
either complementary to traditional drivers of growth or can completely
overshadow them.
| 0 | 0 | 0 | 0 | 0 | 1 |
Inputs from Hell: Generating Uncommon Inputs from Common Samples | Generating structured input files to test programs can be performed by
techniques that produce them from a grammar that serves as the specification
for syntactically correct input files. Two interesting scenarios then arise for
effective testing. In the first scenario, software engineers would like to
generate inputs that are as similar as possible to the inputs in common usage
of the program, to test the reliability of the program. More interesting is the
second scenario where inputs should be as dissimilar as possible from normal
usage. This is useful for robustness testing and exploring yet uncovered
behavior. To provide test cases for both scenarios, we leverage a context-free
grammar to parse a set of sample input files that represent the program's
common usage, and determine probabilities for individual grammar productions as
they occur while parsing the inputs. Replicating these probabilities during
grammar-based test input generation, we obtain inputs that are close to the
samples. Inverting these probabilities yields inputs that are strongly
dissimilar to common inputs, yet still valid with respect to the grammar. Our
evaluation on three common input formats (JSON, JavaScript, CSS) shows the
effectiveness of these approaches in obtaining instances from both sets of
inputs.
| 1 | 0 | 0 | 0 | 0 | 0 |
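A minimal sketch of the probability-replicating and probability-inverting generation described above, on a toy hand-written grammar (the grammar, the probabilities, and the depth cut-off are illustrative assumptions, not the paper's tooling):

```python
import random

# Toy probabilistic context-free grammar: each alternative carries a
# probability that, in the paper's setting, would be counted from parsing
# sample inputs.  Symbols in angle brackets are nonterminals.
GRAMMAR = {
    "<value>": [(["<number>"], 0.7),
                (["[", "<value>", "]"], 0.2),
                (["null"], 0.1)],
    "<number>": [(["0"], 0.5), (["1"], 0.3), (["42"], 0.2)],
}

def invert(probs):
    """Turn 'common' probabilities into 'uncommon' ones: weight each
    alternative by the reciprocal of its probability and renormalize."""
    w = [1.0 / p for p in probs]
    s = sum(w)
    return [x / s for x in w]

def produce(symbol, uncommon=False, depth=0, max_depth=8):
    if symbol not in GRAMMAR:                  # terminal symbol
        return symbol
    alts = GRAMMAR[symbol]
    probs = [p for _, p in alts]
    if uncommon and depth < max_depth:
        probs = invert(probs)                  # beyond max_depth, fall back to the
                                               # learned probabilities, which favor terminals
    rhs = random.choices([r for r, _ in alts], weights=probs)[0]
    return "".join(produce(s, uncommon, depth + 1, max_depth) for s in rhs)

random.seed(1)
print("common-like input:  ", produce("<value>"))
print("uncommon-like input:", produce("<value>", uncommon=True))
```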
On equivariant formal deformation theory | Using the set-up of deformation categories of Talpo and Vistoli, we
re-interpret and generalize, in the context of cartesian morphisms in abstract
categories, some results of Rim concerning obstructions against extensions of
group actions in infinitesimal deformations. Furthermore, we observe that
finite étale coverings can be infinitesimally extended and the resulting
formal scheme is algebraizable. Finally, we show that pre-Tango structures
survive under pullbacks with respect to finite, generically étale surjections
$\pi:X\rightarrow Y$, and record some consequences regarding Kodaira vanishing
in degree one.
| 0 | 0 | 1 | 0 | 0 | 0 |
A self-consistent cloud model for brown dwarfs and young giant exoplanets: comparison with photometric and spectroscopic observations | We developed a simple, physical and self-consistent cloud model for brown
dwarfs and young giant exoplanets. We compared different parametrisations for
the cloud particle size, by either fixing particle radii, or fixing the mixing
efficiency (parameter fsed) or estimating particle radii from simple
microphysics. The cloud scheme with simple microphysics proves to be the best
parametrisation, successfully reproducing the observed photometry and spectra
of brown dwarfs and young giant exoplanets. In particular, it reproduces the
L-T transition, due to the condensation of silicate and iron clouds below the
visible/near-IR photosphere. It also reproduces the reddening observed for
low-gravity objects, due to an increase of cloud optical depth for low gravity.
In addition, we found that the cloud greenhouse effect shifts chemical
equilibria, increasing the abundances of species stable at high temperature.
This effect should significantly contribute to the strong variation of methane
abundance at the L-T transition and to the methane depletion observed on young
exoplanets. Finally, we predict the existence of a continuum of brown dwarfs
and exoplanets for absolute J magnitude=15-18 and J-K color=0-3, due to the
evolution of the L-T transition with gravity. This self-consistent model
therefore provides a general framework to understand the effects of clouds and
appears well-suited for atmospheric retrievals.
| 0 | 1 | 0 | 0 | 0 | 0 |
Consistency Between the Luminosity Function of Resolved Millisecond Pulsars and the Galactic Center Excess | Fermi Large Area Telescope data reveal an excess of GeV gamma rays from the
direction of the Galactic Center and bulge. Several explanations have been
proposed for this excess including an unresolved population of millisecond
pulsars (MSPs) and self-annihilating dark matter. It has been claimed that a
key discriminant for or against the MSP explanation can be extracted from the
properties of the luminosity function describing this source population.
Specifically, is the luminosity function of the putative MSPs in the Galactic
Center consistent with that characterizing the resolved MSPs in the Galactic
disk? To investigate this we have used a Bayesian Markov Chain Monte Carlo to
evaluate the posterior distribution of the parameters of the MSP luminosity
function describing both resolved MSPs and the Galactic Center excess. At
variance with some other claims, our analysis reveals that, within current
uncertainties, both data sets can be well fit with the same luminosity
function.
| 0 | 1 | 0 | 0 | 0 | 0 |
First functionality tests of a 64 x 64 pixel DSSC sensor module connected to the complete ladder readout | The European X-ray Free Electron Laser (XFEL.EU) will provide every 0.1 s a
train of 2700 spatially coherent ultrashort X-ray pulses at 4.5 MHz repetition
rate. The Small Quantum Systems (SQS) instrument and the Spectroscopy and
Coherent Scattering instrument (SCS) operate with soft X-rays between 0.5 keV
and 6 keV. The DEPFET Sensor with Signal Compression (DSSC) detector is being
developed to meet the requirements set by these two XFEL.EU instruments. The
DSSC imager is a 1 mega-pixel camera able to store up to 800 single-pulse
images per train. The so-called ladder is the basic unit of the DSSC detector.
It is one of the sixteen identical units composing the
DSSC megapixel camera; it contains all representative electronic components of
the full-size system and allows testing the full electronic chain. Each DSSC
ladder has a focal plane sensor with 128 x 512 pixels. The read-out ASIC
provides full-parallel readout of the sensor pixels. Every read-out channel
contains an amplifier and an analog filter, an ADC with up to 9-bit resolution, and the digital
memory. The ASIC amplifiers have a double front-end that allows the use of either
DEPFET sensors or Mini-SDD sensors. In the first case, the signal compression
is a characteristic intrinsic of the sensor; in the second case, the
compression is implemented at the first amplification stage. The goal of signal
compression is to meet the requirement of single-photon detection capability
and wide dynamic range. We present the first results of measurements obtained
using a 64 x 64 pixel DEPFET sensor attached to the full final electronic and
data-acquisition chain.
| 0 | 1 | 0 | 0 | 0 | 0 |
Label Propagation on K-partite Graphs with Heterophily | In this paper, for the first time, we study label propagation in
heterogeneous graphs under the heterophily assumption. Homophily label propagation
(i.e., two connected nodes share similar labels) in homogeneous graphs (with a
single type of vertices and relations) has been extensively studied before.
Unfortunately, real-life networks are heterogeneous: they contain different
types of vertices (e.g., users, images, texts) and relations (e.g.,
friendships, co-tagging) and allow for each node to propagate both the same and
opposite copy of labels to its neighbors. We propose a $\mathcal{K}$-partite
label propagation model to handle the mystifying combination of heterogeneous
nodes/relations and heterophily propagation. With this model, we develop a
novel label inference algorithm framework with update rules in near-linear time
complexity. Since real networks change over time, we devise an incremental
approach, which supports fast updates for both new data and evidence (e.g.,
ground truth labels) with guaranteed efficiency. We further provide a utility
function to automatically determine whether an incremental or a re-modeling
approach is favored. Extensive experiments on real datasets have verified the
effectiveness and efficiency of our approach, and its superiority over the
state-of-the-art label propagation methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
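The abstract does not spell out the K-partite update rules, so the following is only a generic illustration of heterophily-aware propagation: a damped, linearized propagation in which a compatibility matrix lets neighbors push either the same or the opposite label. It is not the authors' algorithm, and the matrix H and damping factor are assumptions.

```python
import numpy as np

def propagate(A, priors, H, eps=0.1, iters=50):
    """Linearized, heterophily-aware label propagation sketch.

    A:      (n, n) symmetric adjacency matrix
    priors: (n, c) prior beliefs (rows of zeros for unlabeled nodes)
    H:      (c, c) compatibility matrix; identity-like for homophily,
            off-diagonal-heavy (or sign-flipping) for heterophily
    Iterates b <- priors + eps * A b H, a damped propagation of (possibly
    flipped) label mass along edges.
    """
    b = priors.copy()
    for _ in range(iters):
        b = priors + eps * A @ b @ H
    return b

# Toy graph where connected nodes prefer *opposite* labels.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
priors = np.zeros((4, 2))
priors[0] = [1.0, 0.0]                      # only node 0 is labeled
H = np.array([[-1.0, 1.0], [1.0, -1.0]])    # heterophily: neighbors disagree
beliefs = propagate(A, priors, H)
print(beliefs.argmax(axis=1))               # expected: [0, 1, 1, 0]
```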
On LoRaWAN Scalability: Empirical Evaluation of Susceptibility to Inter-Network Interference | Appearing on the stage quite recently, the Low Power Wide Area Networks
(LPWANs) are currently receiving much attention. In this paper we study
the susceptibility of one LPWAN technology, namely LoRaWAN, to
inter-network interference. By means of extensive empirical measurements
employing certified commercial transceivers, we characterize the effect of
the modulation coding schemes (known in LoRaWAN as data rates (DRs)) of a
transmitter and an interferer on the probability of successful packet delivery
while operating in the EU 868 MHz band. We show that in reality the transmissions
with different DRs in the same frequency channel can negatively affect each
other and that the high DRs are influenced by interferences more severely than
the low ones. Also, we show that the LoRa-modulated DRs are affected by the
interferences much less than the FSK-modulated one. Importantly, the presented
results provide insight into the network-level operation of the LoRa LPWAN
technology in general, and its scalability potential in particular. The results
can also be used as a reference for simulations and analyses or for defining
the communication parameters for real-life applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
DeepSD: Generating High Resolution Climate Change Projections through Single Image Super-Resolution | The impacts of climate change are felt by most critical systems, such as
infrastructure, ecological systems, and power-plants. However, contemporary
Earth System Models (ESM) are run at spatial resolutions too coarse for
assessing such localized effects. Local-scale projections can be obtained using
statistical downscaling, a technique which uses historical climate observations
to learn a low-resolution to high-resolution mapping. Depending on statistical
modeling choices, downscaled projections have been shown to vary significantly
in terms of accuracy and reliability. The spatio-temporal nature of the climate
system motivates the adaptation of super-resolution image processing techniques
to statistical downscaling. In our work, we present DeepSD, a generalized
stacked super resolution convolutional neural network (SRCNN) framework for
statistical downscaling of climate variables. DeepSD augments SRCNN with
multi-scale input channels to maximize predictability in statistical
downscaling. We provide a comparison with Bias Correction Spatial
Disaggregation as well as three Automated-Statistical Downscaling approaches in
downscaling daily precipitation from 1 degree (~100km) to 1/8 degrees (~12.5km)
over the Continental United States. Furthermore, a framework using the NASA
Earth Exchange (NEX) platform is discussed for downscaling more than 20 ESM
models with multiple emission scenarios.
| 1 | 0 | 0 | 0 | 0 | 0 |
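As a hedged sketch of the kind of super-resolution network the abstract builds on, here is the classic three-layer SRCNN in PyTorch with an extra input channel for auxiliary data (e.g., topography); the 9-1-5 filter sizes follow the original SRCNN paper and the channel layout is our assumption, not necessarily DeepSD's exact architecture.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Classic three-layer SRCNN operating on an upsampled coarse field.

    in_channels > 1 allows auxiliary inputs (e.g., elevation) alongside the
    interpolated low-resolution variable, in the spirit of the multi-scale
    input channels described in the abstract.  Filter sizes (9-1-5) are the
    standard SRCNN choices and are only an assumption here.
    """
    def __init__(self, in_channels=2, n1=64, n2=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, n1, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(n1, n2, kernel_size=1),                     nn.ReLU(),
            nn.Conv2d(n2, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.body(x)

# Toy usage: a batch of 4 tiles, 2 input channels, 32x32 pixels.
model = SRCNN()
x = torch.randn(4, 2, 32, 32)
print(model(x).shape)   # torch.Size([4, 1, 32, 32])
```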
Statistical analysis of the ambiguities in the asteroid period determinations | Among asteroids there exist ambiguities in their rotation period
determinations. They are due to incomplete coverage of the rotation, noise
and/or aliases resulting from gaps between separate lightcurves. To help
remove such uncertainties, the basic characteristics of the lightcurves that
result from constraints imposed by asteroid shapes and observation geometries
should be identified. We simulated light variations of asteroids whose shapes
were modelled as Gaussian random spheres, with random orientations of spin
vectors and phase angles changed every $5^\circ$ from $0^\circ$ to $65^\circ$.
This produced 1.4 million lightcurves. For each simulated lightcurve a Fourier
analysis was made and the harmonic of the highest amplitude was recorded.
From the statistical point of view, all lightcurves observed at phase angles
$\alpha < 30^\circ$, with peak-to-peak amplitudes $A>0.2$ mag, are bimodal.
The second most frequently dominant harmonic is the first one, with the 3rd
harmonic following right after. For 1% of lightcurves with amplitudes $A < 0.1$
mag and phase angles $\alpha < 40^\circ$, the 4th harmonic dominates.
| 0 | 1 | 0 | 0 | 0 | 0 |
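A minimal sketch of the per-lightcurve step described above: fit a truncated Fourier series at a trial period and report the dominant harmonic (the number of harmonics and the unweighted least-squares fit are assumptions, not the paper's exact procedure):

```python
import numpy as np

def dominant_harmonic(t, mag, period, n_harmonics=6):
    """Fit mag(t) ~ a0 + sum_k [a_k cos(k w t) + b_k sin(k w t)] by linear
    least squares and return the order of the harmonic with the largest
    amplitude, together with all harmonic amplitudes."""
    w = 2.0 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, mag, rcond=None)
    amps = np.hypot(coef[1::2], coef[2::2])   # amplitude of each harmonic
    return int(np.argmax(amps)) + 1, amps

# Toy usage: a bimodal lightcurve is dominated by the 2nd harmonic.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 3.0, 300))
period = 0.7
mag = 0.15 * np.cos(2 * (2 * np.pi / period) * t) + 0.01 * rng.normal(size=t.size)
print(dominant_harmonic(t, mag, period)[0])   # expected: 2
```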
The Assistive Multi-Armed Bandit | Learning preferences implicit in the choices humans make is a well studied
problem in both economics and computer science. However, most work makes the
assumption that humans are acting (noisily) optimally with respect to their
preferences. Such approaches can fail when people are themselves learning about
what they want. In this work, we introduce the assistive multi-armed bandit,
where a robot assists a human playing a bandit task to maximize cumulative
reward. In this problem, the human does not know the reward function but can
learn it through the rewards received from arm pulls; the robot only observes
which arms the human pulls but not the reward associated with each pull. We
offer sufficient and necessary conditions for successfully assisting the human
in this framework. Surprisingly, better human performance in isolation does not
necessarily lead to better performance when assisted by the robot: a human
policy can do better by effectively communicating its observed rewards to the
robot. We conduct proof-of-concept experiments that support these results. We
see this work as contributing towards a theory behind algorithms for
human-robot interaction.
| 1 | 0 | 0 | 1 | 0 | 0 |
Localized Manifold Harmonics for Spectral Shape Analysis | The use of Laplacian eigenfunctions is ubiquitous in a wide range of computer
graphics and geometry processing applications. In particular, Laplacian
eigenbases allow generalizing the classical Fourier analysis to manifolds. A
key drawback of such bases is their inherently global nature, as the Laplacian
eigenfunctions carry geometric and topological structure of the entire
manifold. In this paper, we introduce a new framework for local spectral shape
analysis. We show how to efficiently construct localized orthogonal bases by
solving an optimization problem that in turn can be posed as the
eigendecomposition of a new operator obtained by a modification of the standard
Laplacian. We study the theoretical and computational aspects of the proposed
framework and showcase our new construction on the classical problems of shape
approximation and correspondence. We obtain significant improvement compared to
classical Laplacian eigenbases as well as other alternatives for constructing
localized bases.
| 1 | 0 | 0 | 0 | 0 | 0 |
How to Search the Internet Archive Without Indexing It | Significant parts of our cultural heritage have been produced on the web during the
last decades. While easy accessibility to the current web is a good baseline,
optimal access to the past web faces several challenges. These include dealing
with large-scale web archive collections and the lack of usage logs that contain
the implicit human feedback most relevant for today's web search. In this paper, we
propose an entity-oriented search system to support retrieval and analytics on
the Internet Archive. We use Bing to retrieve a ranked list of results from the
current web. In addition, we link retrieved results to the WayBack Machine;
thus allowing keyword search on the Internet Archive without processing and
indexing its raw archived content. Our search system complements existing web
archive search tools through a user-friendly interface, which comes close to
the functionalities of modern web search engines (e.g., keyword search, query
auto-completion and related query suggestion), and provides a great benefit of
taking user feedback on the current web into account also for web archive
search. Through extensive experiments, we conduct quantitative and qualitative
analyses in order to provide insights that enable further research on and
practical applications of web archives.
| 1 | 0 | 0 | 0 | 0 | 0 |
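A minimal sketch of the linking step described above, using only the public Wayback Machine URL convention; the upstream retrieval of live-web results (e.g., from a search API) is stubbed out, and this is not necessarily the exact mechanism used by the paper's system:

```python
def wayback_links(result_urls, timestamp="2010"):
    """Map live-web result URLs to Wayback Machine URLs by construction.

    The pattern https://web.archive.org/web/<timestamp>/<url> asks the Wayback
    Machine for the capture closest to <timestamp>; replacing the timestamp
    with '*' yields the calendar overview of all captures.  Retrieval of
    `result_urls` is assumed to happen upstream.
    """
    return [f"https://web.archive.org/web/{timestamp}/{url}" for url in result_urls]

# Toy usage with placeholder results from a live-web search.
for link in wayback_links(["https://example.com/", "https://archive.org/"]):
    print(link)
```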
Quantum Dot at a Luttinger liquid edge - Exact solution via Bethe Ansatz | We study a system consisting of a Luttinger liquid coupled to a quantum dot
on the boundary. The Luttinger liquid is expressed in terms of fermions
interacting via density-density coupling and the dot is modeled as an
interacting resonant level on to which the bulk fermions can tunnel. We solve
the Hamiltonian exactly and construct all eigenstates. We study both the zero
and finite temperature properties of the system, in particular we compute the
exact dot occupation as a function of the dot energy in all parameter regimes.
The system is seen to flow from weak to strong coupling for all values of
the bulk interaction, with the flow characterized by a non-perturbative Kondo
scale. We identify the critical exponents at the weak and strong coupling
regimes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optimal Jittered Sampling for two Points in the Unit Square | Jittered Sampling is a refinement of the classical Monte Carlo sampling
method. Instead of picking $n$ points randomly from $[0,1]^2$, one partitions
the unit square into $n$ regions of equal measure and then chooses a point
randomly from each partition. Currently, no good rules for how to partition the
space are available. In this paper, we present a solution for the special case
of subdividing the unit square by a decreasing function into two regions so as
to minimize the expected squared $\mathcal{L}_2$-discrepancy. The optimal
partitions are given by a \textit{highly} nonlinear integral equation for which
we determine an approximate solution. In particular, there is a break of
symmetry and the optimal partition is not into two sets of equal measure. We
hope this stimulates further interest in the construction of good partitions.
| 0 | 0 | 1 | 0 | 0 | 0 |
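For reference, the classical construction that the paper refines can be sketched in a few lines: partition the unit square into an m × m grid of equal-measure cells and draw one uniform point per cell (the paper's optimal two-region partition by a decreasing function is not reproduced here):

```python
import numpy as np

def plain_mc(n, rng):
    """n i.i.d. uniform points in the unit square."""
    return rng.uniform(size=(n, 2))

def jittered(m, rng):
    """Jittered sampling with an m x m grid: one uniform point per cell,
    n = m*m points in total."""
    i, j = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    offsets = rng.uniform(size=(m, m, 2))
    pts = np.stack([(i + offsets[..., 0]) / m,
                    (j + offsets[..., 1]) / m], axis=-1)
    return pts.reshape(-1, 2)

# Toy comparison: integrate f(x, y) = x * y over [0,1]^2 (exact value 0.25).
rng = np.random.default_rng(0)
m = 16                      # 256 points in both estimators
f = lambda p: p[:, 0] * p[:, 1]
err_mc = abs(f(plain_mc(m * m, rng)).mean() - 0.25)
err_jit = abs(f(jittered(m, rng)).mean() - 0.25)
print(f"plain MC error: {err_mc:.4f}   jittered error: {err_jit:.4f}")
```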
Brain structural connectivity atrophy in Alzheimer's disease | Analysis and quantification of brain structural changes, using Magnetic
resonance imaging (MRI), are increasingly used to define novel biomarkers of
brain pathologies, such as Alzheimer's disease (AD). Network-based models of
the brain have shown that both local and global topological properties can
reveal patterns of disease propagation. On the other hand, intra-subject
descriptions cannot exploit the whole information context, accessible through
inter-subject comparisons. To address this, we developed a novel approach,
which models brain structural connectivity atrophy with a multiplex network and
summarizes it within a classification score. On an independent dataset
multiplex networks were able to correctly segregate, from normal controls (NC),
AD patients and subjects with mild cognitive impairment that will convert to AD
(cMCI) with an accuracy of, respectively, $0.86 \pm 0.01$ and $0.84 \pm 0.01$.
The model also shows that illness effects are maximally detected by parceling
the brain in equal volumes of $3000$ $mm^3$ ("patches"), without any a priori
segmentation based on anatomical features. A direct comparison to
standard voxel-based morphometry on the same dataset showed that the multiplex
network approach had higher sensitivity. This method is general and can have
twofold potential applications: providing a reliable tool for clinical trials
and a disease signature of neurodegenerative pathologies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Probabilistic Combination of Noisy Points and Planes for RGB-D Odometry | This work proposes a visual odometry method that combines points and plane
primitives, extracted from a noisy depth camera. Depth measurement uncertainty
is modelled and propagated through the extraction of geometric primitives to
the frame-to-frame motion estimation, where pose is optimized by weighting the
residuals of 3D point and planes matches, according to their uncertainties.
Results on an RGB-D dataset show that the combination of points and planes,
through the proposed method, is able to perform well in poorly textured
environments, where point-based odometry is bound to fail.
| 1 | 0 | 0 | 0 | 0 | 0 |
Autoignition of Butanol Isomers at Low to Intermediate Temperature and Elevated Pressure | Autoignition delay experiments for the isomers of butanol, including n-,
sec-, tert-, and iso-butanol, have been performed using a heated rapid
compression machine. For a compressed pressure of 15 bar, the compressed
temperatures have been varied in the range of 725-855 K for all the
stoichiometric fuel/oxidizer mixtures. Over the conditions investigated in this
study, the ignition delay decreases monotonically as temperature increases and
exhibits single-stage characteristics. Experimental ignition delays are also
compared to simulations computed using three kinetic mechanisms available in
the literature. Reasonable agreement is found for three isomers (tert-, iso-,
and n-butanol).
| 0 | 1 | 0 | 0 | 0 | 0 |
On the $k$-abelian complexity of the Cantor sequence | In this paper, we prove that for every integer $k \geq 1$, the $k$-abelian
complexity function of the Cantor sequence $\mathbf{c} = 101000101\cdots$ is a
$3$-regular sequence.
| 1 | 0 | 1 | 0 | 0 | 0 |
Sharp rates of convergence for accumulated spectrograms | We investigate an inverse problem in time-frequency localization: the
approximation of the symbol of a time-frequency localization operator from
partial spectral information by the method of accumulated spectrograms (the sum
of the spectrograms corresponding to large eigenvalues). We derive a sharp
bound for the rate of convergence of the accumulated spectrogram, improving on
recent results.
| 0 | 0 | 1 | 0 | 0 | 0 |
Putting a Face to the Voice: Fusing Audio and Visual Signals Across a Video to Determine Speakers | In this paper, we present a system that associates faces with voices in a
video by fusing information from the audio and visual signals. The thesis
underlying our work is that an extremely simple approach to generating (weak)
speech clusters can be combined with visual signals to effectively associate
faces and voices by aggregating statistics across a video. This approach does
not need any training data specific to this task and leverages the natural
coherence of information in the audio and visual streams. It is particularly
applicable to tracking speakers in videos on the web where a priori information
about the environment (e.g., number of speakers, spatial signals for
beamforming) is not available. We performed experiments on a real-world dataset
using this analysis framework to determine the speaker in a video. Given a
ground truth labeling determined by human rater consensus, our approach had
~71% accuracy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Topological Interplay between Knots and Entangled Vortex-Membranes | In this paper, the Kelvin wave and knot dynamics are studied on three
dimensional smoothly deformed entangled vortex-membranes in five dimensional
space. Owing to the existence of local Lorentz invariance and diffeomorphism
invariance, in the continuum limit gravity becomes an emergent phenomenon on 3+1
dimensional zero-lattice (a lattice of projected zeroes): On the one hand, the
deformed zero-lattice can be denoted by curved space-time for knots; on the
other hand, the knots as topological defect of 3+1 dimensional zero-lattice
indicates that matter may curve space-time. This work would help researchers to
understand the mystery of gravity.
| 0 | 1 | 0 | 0 | 0 | 0 |
The first and second fundamental theorems of invariant theory for the quantum general linear supergroup | We develop the non-commutative polynomial version of the invariant theory for
the quantum general linear supergroup ${\rm{ U}}_q(\mathfrak{gl}_{m|n})$. A
non-commutative ${\rm{ U}}_q(\mathfrak{gl}_{m|n})$-module superalgebra
$\mathcal{P}^{k|l}_{\,r|s}$ is constructed, which is the quantum analogue of
the supersymmetric algebra over $\mathbb{C}^{k|l}\otimes \mathbb{C}^{m|n}\oplus
\mathbb{C}^{r|s}\otimes (\mathbb{C}^{m|n})^{\ast}$. We analyse the structure of
the subalgebra of ${\rm{ U}}_q(\mathfrak{gl}_{m|n})$-invariants in
$\mathcal{P}^{k|l}_{\,r|s}$ by using the quantum super analogue of Howe
duality.
The subalgebra of ${\rm{ U}}_q(\mathfrak{gl}_{m|n})$-invariants in
$\mathcal{P}^{k|l}_{\,r|s}$ is shown to be finitely generated. We determine its
generators and establish a surjective superalgebra homomorphism from a braided
supersymmetric algebra onto it. This establishes the first fundamental theorem
of invariant theory for ${\rm{ U}}_q(\mathfrak{gl}_{m|n})$.
We show that the above mentioned superalgebra homomorphism is an isomorphism
if and only if $m\geq \min\{k,r\}$ and $n\geq \min\{l,s\}$, and obtain a
monomial basis for the subalgebra of invariants in this case. When the
homomorphism is not injective, we give a representation theoretical description
of the generating elements of the kernel associated to the partition
$((m+1)^{n+1})$, producing the second fundamental theorem of invariant theory
for ${\rm{ U}}_q(\mathfrak{gl}_{m|n})$.
We consider two applications of our results. A complete treatment of the
non-commutative polynomial version of invariant theory for ${\rm{
U}}_q(\mathfrak{gl}_{m})$ is obtained as the special case with $n=0$, where an
explicit SFT is proved, which we believe to be new. The FFT and SFT of the
invariant theory for the general linear superalgebra are recovered from the
classical (i.e., $q\to 1$) limit of our results.
| 0 | 0 | 1 | 0 | 0 | 0 |
Poly-Spline Finite Element Method | We introduce an integrated meshing and finite element method pipeline
enabling black-box solution of partial differential equations in the volume
enclosed by a boundary representation. We construct a hybrid
hexahedral-dominant mesh, which contains a small number of star-shaped
polyhedra, and build a set of high-order bases on its elements, combining
triquadratic B-splines, triquadratic hexahedra (27 degrees of freedom), and
harmonic elements. We demonstrate that our approach converges cubically under
refinement, while using around 50% of the degrees of freedom of a
similarly dense hexahedral mesh composed of triquadratic hexahedra. We validate
our approach solving Poisson's equation on a large collection of models, which
are automatically processed by our algorithm, only requiring the user to
provide boundary conditions on their surface.
| 1 | 0 | 0 | 0 | 0 | 0 |
Turing Completeness of Finite, Epistemic Programs | In this note, we show the class of finite, epistemic programs to be Turing
complete. Epistemic programs are a widely used update mechanism in
epistemic logic; they are a special type of action model, namely one that
does not contain postconditions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Spontaneous domain formation in disordered copolymers as a mechanism for chromosome structuring | Motivated by the problem of domain formation in chromosomes, we studied a
copolymer model where only a subset of the monomers feel attractive
interactions. These monomers are displaced randomly from a regularly-spaced
pattern, thus introducing some quenched disorder in the system. Previous work
has shown that in the case of regularly-spaced interacting monomers this chain
can fold into structures characterized by multiple distinct domains of
consecutive segments. In each domain, attractive interactions are balanced by
the entropy cost of forming loops. We show by advanced replica-exchange
simulations that adding disorder in the position of the interacting monomers
further stabilizes these domains. The model suggests that the partitioning of
the chain into well-defined domains of consecutive monomers is a spontaneous
property of heteropolymers. In the case of chromosomes, evolution could have
acted on the spacing of interacting monomers to modulate in a simple way the
underlying domains for functional reasons.
| 0 | 0 | 0 | 0 | 1 | 0 |
Reconstruction of Correlated Sources with Energy Harvesting Constraints in Delay-constrained and Delay-tolerant Communication Scenarios | In this paper, we investigate the reconstruction of time-correlated sources
in a point-to-point communications scenario comprising an energy-harvesting
sensor and a Fusion Center (FC). Our goal is to minimize the average distortion
in the reconstructed observations by using data from previously encoded sources
as side information. First, we analyze a delay-constrained scenario, where the
sources must be reconstructed before the next time slot. We formulate the
problem in a convex optimization framework and derive the optimal transmission
(i.e., power and rate allocation) policy. To solve this problem, we propose an
iterative algorithm based on the subgradient method. Interestingly, the
solution to the problem consists of a coupling between a two-dimensional
directional water-filling algorithm (for power allocation) and a reverse
water-filling algorithm (for rate allocation). Then we find a more general
solution to this problem in a delay-tolerant scenario where the time horizon
for source reconstruction is extended to multiple time slots. Finally, we
provide some numerical results that illustrate the impact of delay and
correlation in the power and rate allocation policies, and in the resulting
reconstruction distortion. We also discuss the performance gap exhibited by a
heuristic online policy derived from the optimal (offline) one.
| 1 | 0 | 0 | 0 | 0 | 0 |
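Only the rate-allocation building block named above is sketched here: classical reverse water-filling for independent Gaussian sources, with the water level found by bisection. The coupling with the two-dimensional directional water-filling for power is not reproduced, and the details below are our own assumptions rather than the authors' algorithm.

```python
import numpy as np

def reverse_waterfilling(var, total_rate, tol=1e-9):
    """Classical reverse water-filling for independent Gaussian sources.

    var:        source variances sigma_i^2
    total_rate: total rate budget (nats) to distribute
    Returns per-source distortions D_i = min(theta, sigma_i^2) and rates
    R_i = 0.5 * log(sigma_i^2 / D_i), with the water level theta found by
    bisection so that sum(R_i) matches the budget.
    """
    var = np.asarray(var, dtype=float)

    def rates(theta):
        D = np.minimum(theta, var)
        return 0.5 * np.log(var / D)

    lo, hi = 1e-12, var.max()          # sum(rates) decreases as theta grows
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rates(mid).sum() > total_rate:
            lo = mid                   # too much rate: raise the water level
        else:
            hi = mid                   # too little rate: lower the water level
    theta = 0.5 * (lo + hi)
    return np.minimum(theta, var), rates(theta)

# Toy usage: three sources, 1.5 nats total.
D, R = reverse_waterfilling([1.0, 0.5, 0.1], total_rate=1.5)
print("distortions:", np.round(D, 3), "rates:", np.round(R, 3))
```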
Future of Flexible Robotic Endoscopy Systems | Robotics enables a variety of unconventional actuation strategies to be used
for endoscopes, resulting in reduced trauma to the GI tract. For transmission
of force to distally mounted endoscopic instruments, robotically actuated
tendon sheath mechanisms are the current state of the art. Robotics in surgical
endoscopy enables an ergonomic mapping of the surgeon movements to remotely
controlled slave arms, facilitating tissue manipulation. The learning curve for
difficult procedures such as endoscopic submucosal dissection and
full-thickness resection can be significantly reduced. Improved surgical
outcomes are also observed from clinical and pre-clinical trials. The
technology behind master-slave surgical robotics will continue to mature, with
the addition of position and force sensors enabling better control and tactile
feedback. More robotic assisted GI luminal and NOTES surgeries are expected to
be conducted in future, and gastroenterologists will have a key collaborative
role to play.
| 1 | 1 | 0 | 0 | 0 | 0 |
Shear-driven parametric instability in a precessing sphere | The present numerical study aims at shedding light on the mechanism
underlying the precessional instability in a sphere. Precessional instabilities
in the form of parametric resonance due to topographic coupling have been
reported in a spheroidal geometry both analytically and numerically. We show
that such parametric resonances can also develop in spherical geometry due to
the conical shear layers driven by the Ekman pumping singularities at the
critical latitudes. Scaling considerations lead to a stability criterion of the
form $|P_o|>O(E^{4/5})$, where $P_o$ represents the Poincaré number and $E$
the Ekman number. The predicted threshold is consistent with our numerical
simulations as well as previous experimental results. When the precessional
forcing is supercritical, our simulations show evidence of an inverse cascade,
i.e. small scale flows merging into large scale cyclones with a retrograde
drift. Finally, it is shown that this instability mechanism may be relevant to
precessing celestial bodies such as the Earth and Earth's moon.
| 0 | 1 | 0 | 0 | 0 | 0 |