Columns: ID (int64, 1–21k), TITLE (string, 7–239 chars), ABSTRACT (string, 7–2.76k chars), and six binary topic labels (int64, 0 or 1).

ID | TITLE | ABSTRACT | Computer Science | Physics | Mathematics | Statistics | Quantitative Biology | Quantitative Finance |
---|---|---|---|---|---|---|---|---|
17,801 | Linear theory for single and double flap wavemakers | In this paper, we are concerned with deterministic wave generation in a
hydrodynamic laboratory. A linear wavemaker theory is developed based on the
fully dispersive water wave equations. The governing field equation is the
Laplace equation for potential flow with several boundary conditions: the
dynamic and kinematic boundary conditions at the free surface, the lateral
boundary condition at the wavemaker and the bottom boundary condition. In this
work, we consider both single-flap and double-flap wavemakers. The velocity
potential and surface wave elevation are derived, and the relation between the
propagating wave height and wavemaker stroke is formulated. This formulation is
then used to find how to operate the wavemaker in an efficient way to generate
the desired propagating waves with minimal disturbances near the wavemaker.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,802 | Image-domain multi-material decomposition for dual-energy CT based on correlation and sparsity of material images | Dual energy CT (DECT) enhances tissue characterization because it can produce
images of basis materials such as soft-tissue and bone. DECT is of great
interest in applications to medical imaging, security inspection and
nondestructive testing. Theoretically, two materials with different linear
attenuation coefficients can be accurately reconstructed using DECT technique.
However, the ability to reconstruct three or more basis materials is clinically
and industrially important. Under the assumption that there are at most three
materials in each pixel, there are a few methods that estimate multiple
material images from DECT measurements by enforcing sum-to-one and a box
constraint ([0 1]) derived from both the volume and mass conservation
assumptions. The recently proposed image-domain multi-material decomposition
(MMD) method introduces edge-preserving regularization for each material image,
which neglects the relations among material images, and enforces the assumption
that there are at most three materials in each pixel using a time-consuming
loop over all possible material triplets in each iteration of optimizing its
cost function. We propose a new image-domain MMD method for DECT that considers
the prior information that different material images have common edges and
encourages sparsity of material composition in each pixel using regularization.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,803 | A tutorial on the synthesis and validation of a closed-loop wind farm controller using a steady-state surrogate model | In wind farms, wake interaction leads to losses in power capture and
accelerated structural degradation when compared to freestanding turbines. One
method to reduce wake losses is by misaligning the rotor with the incoming flow
using its yaw actuator, thereby laterally deflecting the wake away from
downstream turbines. However, this demands an accurate and computationally
tractable model of the wind farm dynamics. This problem calls for a closed-loop
solution. This tutorial paper fills the scientific gap by demonstrating the
full closed-loop controller synthesis cycle using a steady-state surrogate
model. Furthermore, a novel, computationally efficient and modular
communication interface is presented that enables researchers to
straightforwardly test their control algorithms in large-eddy simulations.
High-fidelity simulations of a 9-turbine farm show a power production increase
of up to 11% using the proposed closed-loop controller compared to traditional,
greedy wind farm operation.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,804 | UCB Exploration via Q-Ensembles | We show how an ensemble of $Q^*$-functions can be leveraged for more
effective exploration in deep reinforcement learning. We build on well
established algorithms from the bandit setting, and adapt them to the
$Q$-learning setting. We propose an exploration strategy based on
upper-confidence bounds (UCB). Our experiments show significant gains on the
Atari benchmark.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,805 | Towards a Deep Improviser: a prototype deep learning post-tonal free music generator | Two modest-sized symbolic corpora of post-tonal and post-metric keyboard
music have been constructed, one algorithmic, the other improvised. Deep
learning models of each have been trained and largely optimised. Our purpose is
to obtain a model with sufficient generalisation capacity that, in response to a
small quantity of separate fresh input seed material, it can generate outputs
that are distinctive, rather than recreative of the learned corpora or the seed
material. This objective has been first assessed statistically, and as judged
by k-sample Anderson-Darling and Cramer tests, has been achieved. Music has
been generated using the approach, and informal judgements place it roughly on
a par with algorithmic and composed music in related forms. Future work will
aim to enhance the model such that it can be evaluated in relation to
expression, meaning and utility in real-time performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,806 | Depicting urban boundaries from a mobility network of spatial interactions: A case study of Great Britain with geo-located Twitter data | Existing urban boundaries are usually defined by government agencies for
administrative, economic, and political purposes. Defining urban boundaries
that consider socio-economic relationships and citizen commute patterns is
important for many aspects of urban and regional planning. In this paper, we
describe a method to delineate urban boundaries based upon human interactions
with physical space inferred from social media. Specifically, we depicted the
urban boundaries of Great Britain using a mobility network of Twitter user
spatial interactions, which was inferred from over 69 million geo-located
tweets. We define the non-administrative anthropographic boundaries in a
hierarchical fashion based on different physical movement ranges of users
derived from the collective mobility patterns of Twitter users in Great
Britain. The results of strongly connected urban regions in the form of
communities in the network space yield geographically cohesive, non-overlapping
urban areas, which provide a clear delineation of the non-administrative
anthropographic urban boundaries of Great Britain. The method was applied to
both national (Great Britain) and municipal scales (the London metropolis).
While our results corresponded well with the administrative boundaries, many
unexpected and interesting boundaries were identified. Importantly, as the
depicted urban boundaries exhibited a strong effect of spatial proximity, we
employed a gravity model to understand the distance-decay effects in shaping
the delineated urban boundaries. The model explains how geographical distances
found in the mobility patterns affect the interaction intensity among different
non-administrative anthropographic urban areas, which provides new insights
into human spatial interactions with urban space.
| 1 | 1 | 0 | 0 | 0 | 0 |
17,807 | Avoiding a Tragedy of the Commons in the Peer Review Process | Peer review is the foundation of scientific publication, and the task of
reviewing has long been seen as a cornerstone of professional service. However,
the massive growth in the field of machine learning has put this community
benefit under stress, threatening both the sustainability of an effective
review process and the overall progress of the field. In this position paper,
we argue that a tragedy of the commons outcome may be avoided by emphasizing
the professional aspects of this service. In particular, we propose a rubric to
hold reviewers to an objective standard for review quality. In turn, we also
propose that reviewers be given appropriate incentive. As one possible such
incentive, we explore the idea of financial compensation on a per-review basis.
We suggest reasonable funding models and thoughts on long term effects.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,808 | Partially Recursive Acceptance Rejection | Generating random variates from high-dimensional distributions is often done
approximately using Markov chain Monte Carlo. In certain cases, perfect
simulation algorithms exist that allow one to draw exactly from the stationary
distribution, but most require $O(n \ln(n))$ time, where $n$ measures the size
of the input. In this work we present a new protocol for creating perfect
simulation algorithms that run in $O(n)$ time for a wider range of parameters
on several models (such as Strauss, Ising, and random cluster) than was known
previously.
This work represents an extension of the popping algorithms due to Wilson.
| 1 | 0 | 1 | 0 | 0 | 0 |
17,809 | Reifenberg Flatness and Oscillation of the Unit Normal Vector | We show (under mild topological assumptions) that small oscillation of the
unit normal vector implies Reifenberg flatness. We then apply this observation
to the study of chord-arc domains and to a quantitative version of a two-phase
free boundary problem for harmonic measure previously studied by Kenig-Toro.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,810 | Stability and optimality of distributed secondary frequency control schemes in power networks | We present a systematic method for designing distributed generation and
demand control schemes for secondary frequency regulation in power networks
such that stability and an economically optimal power allocation can be
guaranteed. A dissipativity condition is imposed on net power supply variables
to provide stability guarantees. Furthermore, economic optimality is achieved
by explicit decentralized steady state conditions on the generation and
controllable demand. We discuss how various classes of dynamics used in recent
studies fit within our framework and give examples of higher order generation
and controllable demand dynamics that can be included within our analysis. In
case of linear dynamics, we discuss how the proposed dissipativity condition
can be efficiently verified using an appropriate linear matrix inequality.
Moreover, it is shown how the addition of a suitable observer layer can relax
the requirement for demand measurements in the employed controller. The
efficiency and practicality of the proposed results are demonstrated with a
simulation on the Northeast Power Coordinating Council (NPCC) 140-bus system.
| 1 | 0 | 1 | 0 | 0 | 0 |
17,811 | A Conjoint Application of Data Mining Techniques for Analysis of Global Terrorist Attacks -- Prevention and Prediction for Combating Terrorism | Terrorism has become one of the most
difficult problems to deal with and a prominent threat to mankind. To enhance
counter-terrorism, several research works are developing efficient and precise
systems; data mining is no exception. Immense amounts of data flow through our
lives, though the scarce availability of authentic terrorist attack data in the
public domain makes it complicated to fight terrorism. This manuscript focuses on data mining
classification techniques and discusses the role of United Nations in
counter-terrorism. It analyzes the performance of classifiers such as Lazy
Tree, Multilayer Perceptron, Multiclass and Naïve Bayes classifiers for
observing the trends for terrorist attacks around the world. The database for
experimental purposes is created from different public and open-access sources
for the years 1970-2015, comprising 156,772 reported attacks causing massive
losses of lives and property. This work enumerates the losses incurred, trends
in attack frequency, and the places most prone to attacks, considering the
claimed attack responsibility as the evaluation class.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,812 | OVI 6830Å Imaging Polarimetry of Symbiotic Stars | I present here the first results from an ongoing pilot project with the 1.6 m
telescope at the OPD, Brasil, aimed at the detection of the OVI $\lambda$6830
line via linear polarization in symbiotic stars. The main goal is to
demonstrate that OVI imaging polarimetry is an efficient technique for
discovering new symbiotic stars. The OVI $\lambda$6830 line is detected with at
least 3-$\sigma$ significance in 5 out of 9 known symbiotic stars in which the
line had already been spectroscopically confirmed. Three new symbiotic star
candidates have also been found.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,813 | MMGAN: Manifold Matching Generative Adversarial Network | It is well-known that GANs are difficult to train, and several different
techniques have been proposed in order to stabilize their training. In this
paper, we propose a novel training method called manifold-matching, and a new
GAN model called manifold-matching GAN (MMGAN). MMGAN finds two manifolds
representing the vector representations of real and fake images. If these two
manifolds match, it means that real and fake images are statistically
identical. To assist the manifold-matching task, we also use i) kernel tricks
to find better manifold structures, ii) moving-averaged manifolds across
mini-batches, and iii) a regularizer based on the correlation matrix to suppress
mode collapse.
We conduct in-depth experiments with three image datasets and compare with
several state-of-the-art GAN models. 32.4% of images generated by the proposed
MMGAN are recognized as fake images during our user study (16% enhancement
compared to other state-of-the-art models). MMGAN achieved an unsupervised
inception score of 7.8 for CIFAR-10.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,814 | Domain Adaptation for Infection Prediction from Symptoms Based on Data from Different Study Designs and Contexts | Acute respiratory infections have epidemic and pandemic potential and thus
are being studied worldwide, albeit in many different contexts and study
formats. Predicting infection from symptom data is critical, though using
symptom data from varied studies in aggregate is challenging because the data
is collected in different ways. Accordingly, different symptom profiles could
be more predictive in certain studies, or even symptoms of the same name could
have different meanings in different contexts. We assess state-of-the-art
transfer learning methods for improving prediction of infection from symptom
data in multiple types of health care data, ranging from clinical to home-visit
to crowdsourced studies. We show interesting characteristics regarding
six different study types and their feature domains. Further, we demonstrate
that it is possible to use data collected from one study to predict infection
in another, at close to or better than using a single dataset for prediction on
itself. We also investigate in which conditions specific transfer learning and
domain adaptation methods may perform better on symptom data. This work has the
potential for broad applicability as we show how it is possible to transfer
learning from one public health study design to another, and data collected
from one study may be used for prediction of labels for another, even collected
through different study designs, populations and contexts.
| 0 | 0 | 0 | 1 | 1 | 0 |
17,815 | Constrained empirical Bayes priors on regression coefficients | Under model uncertainty, empirical Bayes (EB) procedures can have undesirable
properties such as extreme estimates of inclusion probabilities (Scott &
Berger, 2010) or inconsistency under the null model (Liang et al., 2008). To
avoid these issues, we define empirical Bayes priors with constraints that
ensure that the estimates of the hyperparameters are at least as "vague" as
those of proper default priors. In our examples, we observe that constrained EB
procedures are better behaved than their unconstrained counterparts and that
the Bayesian Information Criterion (BIC) is similar to an intuitively appealing
constrained EB procedure.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,816 | A finite field analogue for Appell series F_3 | In this paper we introduce a finite field analogue for the Appell series F_3
and give some reduction formulae and certain generating functions for this
function over finite fields.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,817 | Two-way Two-tape Automata | In this article we consider two-way two-tape (alternating) automata accepting
pairs of words and we study some closure properties of this model. Our main
result is that such alternating automata are not closed under complementation
for non-unary alphabets. This improves a similar result of Kari and Moore for
picture languages. We also show that the deterministic, non-deterministic, and
alternating variants of these automata are not closed under composition.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,818 | Making 360$^{\circ}$ Video Watchable in 2D: Learning Videography for Click Free Viewing | 360$^{\circ}$ video requires human viewers to actively control "where" to
look while watching the video. Although it provides a more immersive experience
of the visual content, it also introduces an additional burden on viewers;
awkward interfaces to navigate the video lead to suboptimal viewing
experiences. Virtual cinematography is an appealing direction to remedy these
problems, but conventional methods are limited to virtual environments or rely
on hand-crafted heuristics. We propose a new algorithm for virtual
cinematography that automatically controls a virtual camera within a
360$^{\circ}$ video. Compared to the state of the art, our algorithm allows
more general camera control, avoids redundant outputs, and extracts its output
videos substantially more efficiently. Experimental results on over 7 hours of
real "in the wild" video show that our generalized camera control is crucial
for viewing 360$^{\circ}$ video, while the proposed efficient algorithm is
essential for making the generalized control computationally tractable.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,819 | Zero-Shot Learning by Generating Pseudo Feature Representations | Zero-shot learning (ZSL) is a challenging task aiming at recognizing novel
classes without any training instances. In this paper we present a simple but
high-performance ZSL approach by generating pseudo feature representations
(GPFR). Given the dataset of seen classes and side information of unseen
classes (e.g. attributes), we synthesize feature-level pseudo representations
for novel concepts, which gives us access to the formulation of an unseen-class
predictor. First, we design a Joint Attribute Feature Extractor (JAFE) to
acquire understandings about attributes, then construct a cognitive repository
of attributes filtered by confidence margins, and finally generate pseudo
feature representations using a probability based sampling strategy to
facilitate subsequent training process of class predictor. We demonstrate the
effectiveness in ZSL settings and the extensibility in supervised recognition
scenario of our method on a synthetic colored MNIST dataset (C-MNIST). For
several popular ZSL benchmark datasets, our approach also shows compelling
results on zero-shot recognition task, especially leading to tremendous
improvement to state-of-the-art mAP on zero-shot retrieval task.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,820 | Hierarchical RNN with Static Sentence-Level Attention for Text-Based Speaker Change Detection | Speaker change detection (SCD) is an important task in dialog modeling. Our
paper addresses the problem of text-based SCD, which differs from existing
audio-based studies and is useful in various scenarios, for example, processing
dialog transcripts where speaker identities are missing (e.g., OpenSubtitle),
and enhancing audio SCD with textual information. We formulate text-based SCD
as a matching problem of utterances before and after a certain decision point;
we propose a hierarchical recurrent neural network (RNN) with static
sentence-level attention. Experimental results show that neural networks
consistently achieve better performance than feature-based approaches, and that
our attention-based model significantly outperforms non-attention neural
networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,821 | Bivariate Discrete Generalized Exponential Distribution | In this paper we develop a bivariate discrete generalized exponential
distribution, whose marginals are discrete generalized exponential distribution
as proposed by Nekoukhou, Alamatsaz and Bidram ("Discrete generalized
exponential distribution of a second type", Statistics, 47, 876 - 887, 2013).
It is observed that the proposed bivariate distribution is a very flexible
distribution and the bivariate geometric distribution can be obtained as a
special case of this distribution. The proposed distribution can be seen as a
natural discrete analogue of the bivariate generalized exponential distribution
proposed by Kundu and Gupta ("Bivariate generalized exponential distribution",
Journal of Multivariate Analysis, 100, 581 - 593, 2009). We study different
properties of this distribution and explore its dependence structures. We
propose a new EM algorithm to compute the maximum likelihood estimators of the
unknown parameters which can be implemented very efficiently, and discuss some
inferential issues also. The analysis of one data set has been performed to
show the effectiveness of the proposed model. Finally we propose some open
problems and conclude the paper.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,822 | Optimization of Ensemble Supervised Learning Algorithms for Increased Sensitivity, Specificity, and AUC of Population-Based Colorectal Cancer Screenings | Over 150,000 people in the United States are newly diagnosed with colorectal
cancer each year. Nearly a third die from it (American Cancer Society). The
only approved noninvasive diagnostic tools currently involve fecal occult blood
tests (FOBTs) or stool DNA tests. Fecal occult blood tests take only five
minutes and are available over the counter for as low as \$15. They are highly
specific, yet not nearly as sensitive, yielding a high percentage (25%) of
false negatives (Colon Cancer Alliance). Moreover, FOBT results are far too
generalized, meaning that a positive result could mean much more than just
colorectal cancer, and could just as easily mean hemorrhoids, anal fissure,
proctitis, Crohn's disease, diverticulosis, ulcerative colitis, rectal ulcer,
rectal prolapse, ischemic colitis, angiodysplasia, rectal trauma, proctitis
from radiation therapy, and others. Stool DNA tests, the modern benchmark for
CRC screening, have a much higher sensitivity and specificity, but also cost
\$600, take two weeks to process, and are not for high-risk individuals or
people with a history of polyps. To yield a cheap and effective CRC screening
alternative, a unique ensemble-based classification algorithm is put in place
that considers the FIT result, BMI, smoking history, and diabetic status of
patients. Under ten-fold cross-validation, this method achieves a .95
AUC, 92% specificity, 89% sensitivity, .88 F1, and 90% precision. Once
clinically validated, this test promises to be cheaper, faster, and potentially
more accurate when compared to a stool DNA test.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,823 | More or Less? Predict the Social Influence of Malicious URLs on Social Media | Users of Online Social Networks (OSNs) interact with each other more than
ever. In the context of a public discussion group, people receive, read, and
write comments in response to articles and postings. In the absence of access
control mechanisms, OSNs are a great environment for attackers to influence
others, from spreading phishing URLs, to posting fake news. Moreover, OSN user
behavior can be predicted by social science concepts which include conformity
and the bandwagon effect. In this paper, we show how social recommendation
systems affect the occurrence of malicious URLs on Facebook. We exploit
temporal features to build a prediction framework, with greater than 75%
accuracy, that predicts whether group users' following behavior will increase
or not. In this work, we also demarcate classes of URLs, including those
malicious URLs classified as creating critical damage, as well as those of a
lesser nature which only inflict light damage such as aggressive commercial
advertisements and spam content. It is our hope that the data and analyses in
this paper provide a better understanding of OSN user reactions to different
categories of malicious URLs, thereby providing a way to mitigate the influence
of these malicious URL attacks.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,824 | Path-Following through Control Funnel Functions | We present an approach to path following using so-called control funnel
functions. Synthesizing controllers to "robustly" follow a reference trajectory
is a fundamental problem for autonomous vehicles. Robustness, in this context,
requires our controllers to handle a specified amount of deviation from the
desired trajectory. Our approach considers a timing law that describes how fast
to move along a given reference trajectory and a control feedback law for
reducing deviations from the reference. We synthesize both feedback laws using
"control funnel functions" that jointly encode the control law as well as its
correctness argument over a mathematical model of the vehicle dynamics. We
adapt a previously described demonstration-based learning algorithm to
synthesize a control funnel function as well as the associated feedback law. We
implement this law on top of a 1/8th scale autonomous vehicle called the
Parkour car. We compare the performance of our path following approach against
a trajectory tracking approach by specifying trajectories of varying lengths
and curvatures. Our experiments demonstrate the improved robustness obtained
from the use of control funnel functions.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,825 | Tunable Anomalous Andreev Reflection and Triplet Pairings in Spin Orbit Coupled Graphene | We theoretically study the scattering process and superconducting triplet
correlations in a ferromagnet-RSO-superconductor graphene junction,
where RSO stands for a region with Rashba spin-orbit interaction. Our
results reveal spin-polarized subgap transport through the system due to an
anomalous equal-spin Andreev reflection in addition to conventional back
scatterings. We calculate equal- and opposite-spin pair correlations near the
F-RSO interface and demonstrate a direct link between the anomalous Andreev
reflection and the equal-spin pairings arising from the proximity effect in the
presence of RSO interaction. Moreover, we show that the amplitude of the
anomalous Andreev
reflection, and thus the triplet pairings, are experimentally controllable when
incorporating the influences of both tunable strain and Fermi level in the
nonsuperconducting region. Our findings can be confirmed by a conductance
spectroscopy experiment and provide better insights into the proximity-induced
RSO coupling in graphene layers reported in recent experiments.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,826 | Conditional Model Selection in Mixed-Effects Models with cAIC4 | Model selection in mixed models based on the conditional distribution is
appropriate for many practical applications and has been a focus of recent
statistical research. In this paper we introduce the R-package cAIC4 that
allows for the computation of the conditional Akaike Information Criterion
(cAIC). Computation of the conditional AIC needs to take into account the
uncertainty of the random effects variance and is therefore not
straightforward. We introduce a fast and stable implementation for the
calculation of the cAIC for linear mixed models estimated with lme4 and
additive mixed models estimated with gamm4. Furthermore, cAIC4 offers a
stepwise function that allows for a fully automated stepwise selection scheme
for mixed models based on the conditional AIC. Examples of many possible
applications are presented to illustrate the practical impact and easy handling
of the package.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,827 | Inequalities for the lowest magnetic Neumann eigenvalue | We study the ground state energy of the Neumann magnetic Laplacian on planar
domains. For a constant magnetic field we consider the question whether, under
an assumption of fixed area, the disc maximizes this eigenvalue. More
generally, we discuss old and new bounds obtained on this problem.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,828 | Automated optimization of large quantum circuits with continuous parameters | We develop and implement automated methods for optimizing quantum circuits of
the size and type expected in quantum computations that outperform classical
computers. We show how to handle continuous gate parameters and report a
collection of fast algorithms capable of optimizing large-scale quantum
circuits. For the suite of benchmarks considered, we obtain substantial
reductions in gate counts. In particular, we provide better optimization in
significantly less time than previous approaches, while making minimal
structural changes so as to preserve the basic layout of the underlying quantum
algorithms. Our results help bridge the gap between the computations that can
be run on existing hardware and those that are expected to outperform classical
computers.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,829 | Testing the simplifying assumption in high-dimensional vine copulas | Testing the simplifying assumption in high-dimensional vine copulas is a
difficult task because tests must be based on estimated observations and amount
to checking constraints on high-dimensional distributions. So far,
corresponding tests have been limited to single conditional copulas with a
low-dimensional set of conditioning variables. We propose a novel testing
procedure that is computationally feasible for high-dimensional data sets and
that exhibits a power that decreases only slightly with the dimension. By
discretizing the support of the conditioning variables and incorporating a
penalty in the test statistic, we mitigate the curse of dimensionality by looking
for the possibly strongest deviation from the simplifying assumption. The use
of a decision tree renders the test computationally feasible for large
dimensions. We derive the asymptotic distribution of the test and analyze its
finite sample performance in an extensive simulation study. The utility of the
test is demonstrated by its application to 10 data sets with up to 49
dimensions.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,830 | TSP With Locational Uncertainty: The Adversarial Model | In this paper we study a natural special case of the Traveling Salesman
Problem (TSP) with point-locational-uncertainty which we will call the {\em
adversarial TSP} problem (ATSP). Given a metric space $(X, d)$ and a set of
subsets $R = \{R_1, R_2, ... , R_n\} : R_i \subseteq X$, the goal is to devise
an ordering of the regions, $\sigma_R$, that the tour will visit such that when
a single point is chosen from each region, the induced tour over those points
in the ordering prescribed by $\sigma_R$ is as short as possible. Unlike the
classical locational-uncertainty-TSP problem, which focuses on minimizing the
expected length of such a tour when the point within each region is chosen
according to some probability distribution, here, we focus on the {\em
adversarial model} in which once the choice of $\sigma_R$ is announced, an
adversary selects a point from each region in order to make the resulting tour
as long as possible. In other words, we consider an offline problem in which
the goal is to determine an ordering of the regions $R$ that is optimal with
respect to the "worst" point possible within each region being chosen by an
adversary, who knows the chosen ordering. We give a $3$-approximation when $R$
is a set of arbitrary regions/sets of points in a metric space. We show how
geometry leads to improved constant factor approximations when regions are
parallel line segments of the same lengths, and a polynomial-time approximation
scheme (PTAS) for the important special case in which $R$ is a set of disjoint
unit disks in the plane.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,831 | Exact solutions to three-dimensional generalized nonlinear Schrodinger equations with varying potential and nonlinearities | It is shown that using the similarity transformations, a set of
three-dimensional p-q nonlinear Schrodinger (NLS) equations with inhomogeneous
coefficients can be reduced to one-dimensional stationary NLS equation with
constant or varying coefficients, thus allowing for obtaining exact localized
and periodic wave solutions. In the suggested reduction the original
coordinates in the (1+3)-space are mapped into a set of one-parametric
coordinate surfaces, whose parameter plays the role of the coordinate of the
one-dimensional equation. We describe the algorithm of finding solutions and
concentrate on power (linear and nonlinear) potentials presenting a number of
case examples. Generalizations of the method are also discussed.
| 0 | 1 | 1 | 0 | 0 | 0 |
17,832 | A Study of Reinforcement Learning for Neural Machine Translation | Recent studies have shown that reinforcement learning (RL) is an effective
approach for improving the performance of neural machine translation (NMT)
systems. However, due to its instability, successful RL training is
challenging, especially in real-world systems where deep models and large
datasets are leveraged. In this paper, taking several large-scale translation
tasks as testbeds, we conduct a systematic study on how to train better NMT
models using reinforcement learning. We provide a comprehensive comparison of
several important factors (e.g., baseline reward, reward shaping) in RL
training. Furthermore, since it remains unclear whether RL is still beneficial when monolingual data is used, we propose a new method to leverage RL to further boost the performance of NMT systems trained with
source/target monolingual data. By integrating all our findings, we obtain
competitive results on WMT14 English-German, WMT17 English-Chinese, and WMT17
Chinese-English translation tasks, especially setting a state-of-the-art
performance on WMT17 Chinese-English translation task.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,833 | Weighted Data Normalization Based on Eigenvalues for Artificial Neural Network Classification | Artificial neural network (ANN) is a very useful tool in solving learning
problems. The performance of an ANN can be improved mainly in two ways: optimizing the architecture of the ANN and normalizing the raw data fed to it. In this paper, a novel method that improves the performance of an ANN by preprocessing the raw data is proposed. It leverages the fact that
different features should play different roles. The raw data set is firstly
preprocessed by principal component analysis (PCA), and then its principal components are weighted by their corresponding eigenvalues. Several aspects of
analysis are carried out to analyze its theory and the applicable occasions.
Three classification problems are launched by an active learning algorithm to
verify the proposed method. The empirical results show that the proposed method can significantly improve the performance of the ANN.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,834 | Maximum likelihood estimation of determinantal point processes | Determinantal point processes (DPPs) have wide-ranging applications in
machine learning, where they are used to enforce the notion of diversity in
subset selection problems. Many estimators have been proposed, but surprisingly
the basic properties of the maximum likelihood estimator (MLE) have received
little attention. The difficulty is that it is a non-concave maximization
problem, and such functions are notoriously difficult to understand in high
dimensions, despite their importance in modern machine learning. Here we study
both the local and global geometry of the expected log-likelihood function. We
prove several rates of convergence for the MLE and give a complete
characterization of the case where these are parametric. We also exhibit a
potential curse of dimensionality where the asymptotic variance of the MLE
scales exponentially with the dimension of the problem. Moreover, we exhibit an
exponential number of saddle points, and give evidence that these may be the
only critical points.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,835 | Orthogonal Machine Learning: Power and Limitations | Double machine learning provides $\sqrt{n}$-consistent estimates of
parameters of interest even when high-dimensional or nonparametric nuisance
parameters are estimated at an $n^{-1/4}$ rate. The key is to employ
Neyman-orthogonal moment equations which are first-order insensitive to
perturbations in the nuisance parameters. We show that the $n^{-1/4}$
requirement can be improved to $n^{-1/(2k+2)}$ by employing a $k$-th order
notion of orthogonality that grants robustness to more complex or
higher-dimensional nuisance parameters. In the partially linear regression
setting popular in causal inference, we show that we can construct second-order
orthogonal moments if and only if the treatment residual is not normally
distributed. Our proof relies on Stein's lemma and may be of independent
interest. We conclude by demonstrating the robustness benefits of an explicit
doubly-orthogonal estimation procedure for treatment effect.
| 1 | 0 | 1 | 1 | 0 | 0 |
17,836 | Numerical investigation of supersonic shock-wave/boundary-layer interaction in transitional and turbulent regime | We perform direct numerical simulations of shock-wave/boundary-layer
interactions (SBLI) at Mach number M = 1.7 to investigate the influence of the
state of the incoming boundary layer on the interaction properties. We
reproduce and extend the flow conditions of the experiments performed by
Giepman et al., in which a spatially evolving laminar boundary layer over a
flat plate is initially tripped by an array of distributed roughness elements
and impinged further downstream by an oblique shock wave. Four SBLI cases are
considered, based on two different shock impingement locations along the
streamwise direction, corresponding to transitional and turbulent interactions,
and two different shock strengths, corresponding to flow deflection angles of 3 and 6 degrees. We find that, for all flow cases, shock-induced separation is not observed: the boundary layer remains attached for the 3 degrees case and close to incipient separation for the 6 degrees case,
independent of the state of the incoming boundary layer. The findings of this
work suggest that a transitional interaction might be the optimal solution for
practical SBLI applications, as it removes the large separation bubble typical
of laminar interactions and reduces the extent of the high-friction region
associated with an incoming turbulent boundary layer.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,837 | Privacy and Fairness in Recommender Systems via Adversarial Training of User Representations | Latent factor models for recommender systems represent users and items as low
dimensional vectors. Privacy risks of such systems have previously been studied
mostly in the context of recovery of personal information in the form of usage
records from the training data. However, the user representations themselves
may be used together with external data to recover private user information
such as gender and age. In this paper we show that user vectors calculated by a
common recommender system can be exploited in this way. We propose the
privacy-adversarial framework to eliminate such leakage of private information,
and study the trade-off between recommender performance and leakage both
theoretically and empirically using a benchmark dataset. An advantage of the
proposed method is that it also helps guarantee fairness of results, since all
implicit knowledge of a set of attributes is scrubbed from the representations
used by the model and thus cannot enter into the decision making. We discuss
further applications of this method towards the generation of deeper and more
insightful recommendations.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,838 | Asymptotics for high-dimensional covariance matrices and quadratic forms with applications to the trace functional and shrinkage | We establish large sample approximations for an arbitrary number of bilinear
forms of the sample variance-covariance matrix of a high-dimensional vector
time series using $\ell_1$-bounded and small $\ell_2$-bounded weighting
vectors. Estimation of the asymptotic covariance structure is also discussed.
The results hold true without any constraint on the dimension, the number of
forms and the sample size or their ratios. Concrete and potential applications
are widespread and cover high-dimensional data science problems such as tests
for large numbers of covariances, sparse portfolio optimization and projections
onto sparse principal components or more general spanning sets as frequently
considered, e.g. in classification and dictionary learning. As two specific
applications of our results, we study in greater detail the asymptotics of the
trace functional and shrinkage estimation of covariance matrices. In shrinkage
estimation, it turns out that the asymptotics differs for weighting vectors
bounded away from orthogonality and nearly orthogonal ones in the sense that
their inner product converges to 0.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,839 | Bayesian Paragraph Vectors | Word2vec (Mikolov et al., 2013) has proven to be successful in natural
language processing by capturing the semantic relationships between different
words. Built on top of single-word embeddings, paragraph vectors (Le and
Mikolov, 2014) find fixed-length representations for pieces of text with
arbitrary lengths, such as documents, paragraphs, and sentences. In this work,
we propose a novel interpretation for neural-network-based paragraph vectors by
developing an unsupervised generative model whose maximum likelihood solution
corresponds to traditional paragraph vectors. This probabilistic formulation
allows us to go beyond point estimates of parameters and to perform Bayesian
posterior inference. We find that the entropy of paragraph vectors decreases
with the length of documents, and that information about posterior uncertainty
improves performance in supervised learning tasks such as sentiment analysis
and paraphrase detection.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,840 | Subset Synchronization in Monotonic Automata | We study extremal and algorithmic questions of subset and careful
synchronization in monotonic automata. We show that several synchronization
problems that are hard in general automata can be solved in polynomial time in
monotonic automata, even without knowing a linear order of the states preserved
by the transitions. We provide asymptotically tight bounds on the maximum
length of a shortest word synchronizing a subset of states in a monotonic
automaton and a shortest word carefully synchronizing a partial monotonic
automaton. We provide a complexity framework for dealing with problems for
monotonic weakly acyclic automata over a three-letter alphabet, and use it to
prove NP-completeness and inapproximability of problems such as {\sc Finite
Automata Intersection} and the problem of computing the rank of a subset of
states in this class. We also show that checking whether a monotonic partial
automaton over a four-letter alphabet is carefully synchronizing is NP-hard.
Finally, we give a simple necessary and sufficient condition when a strongly
connected digraph with a selected subset of vertices can be transformed into a
deterministic automaton where the corresponding subset of states is
synchronizing.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,841 | Architecture of Text Mining Application in Analyzing Public Sentiments of West Java Governor Election using Naive Bayes Classification | The election of the West Java governor is an event that seizes the attention of the public, social media users being no exception. Public opinion on a
prospective regional leader can help predict electability and tendency of
voters. Data that can be used by the opinion mining process can be obtained
from Twitter. Because the data are highly varied in form and unstructured, they must be managed and transformed into semi-structured data using pre-processing techniques. This semi-structured information then passes through a
classification stage to categorize the opinion into negative or positive
opinions. The research methodology uses a literature study that examines previous research on similar topics. The purpose of this study is
to find the right architecture for a Twitter opinion mining application that gauges public sentiment toward the election of the governor of West Java. The result of this research is that Twitter opinion mining is part of text mining, in which opinions on Twitter must first go through a text preprocessing stage before they can be classified. The preprocessing steps required for Twitter data are cleansing, case folding, POS tagging and stemming. The
resulting architecture can be used for text mining research on different topics.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,842 | Ray: A Distributed Framework for Emerging AI Applications | The next generation of AI applications will continuously interact with the
environment and learn from these interactions. These applications impose new
and demanding systems requirements, both in terms of performance and
flexibility. In this paper, we consider these requirements and present Ray---a
distributed system to address them. Ray implements a unified interface that can
express both task-parallel and actor-based computations, supported by a single
dynamic execution engine. To meet the performance requirements, Ray employs a
distributed scheduler and a distributed and fault-tolerant store to manage the
system's control state. In our experiments, we demonstrate scaling beyond 1.8
million tasks per second and better performance than existing specialized
systems for several challenging reinforcement learning applications.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,843 | A blockchain-based Decentralized System for proper handling of temporary Employment contracts | Temporary work is an employment arrangement suitable whenever a business needs to adjust more easily and quickly to workload fluctuations or maintain staffing flexibility. Temporary workers therefore play an important role in many companies, but this kind of activity is subject to a
special form of legal protection, and many aspects and risks must be taken into account by both employers and employees. In this work we propose a
blockchain-based system that aims to ensure respect for the rights of all actors involved in a temporary employment relationship, providing employees with fair and legal remuneration (including taxes) for their work and with protection in case the employer becomes insolvent. At the same time, our system
aims to assist the employer in processing contracts with a fully automated and
fast procedure. To resolve these problems we propose the D-ES (Decentralized
Employment System). We first model the employment relationship as a state
system. Then we describe the enabling technology that makes us able to realize
the D-ES. In facts, we propose the implementation of a DLT (Decentralized
Ledger Technology) based system, consisting in a blockchain system and of a
web-based environment. Thanks the decentralized application platforms that
makes us able to develop smart contracts, we define a discrete event control
system that works inside the blockchain. In addition, we discuss the temporary
work in agriculture as a interesting case of study.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,844 | Valley polarized relaxation and upconversion luminescence from Tamm-Plasmon Trion-Polaritons with a MoSe2 monolayer | Transition metal dichalcogenides represent an ideal testbed to study
excitonic effects, spin-related phenomena and fundamental light-matter coupling
in nanoscopic condensed matter systems. In particular, the valley degree of
freedom, which is unique to such direct band gap monolayers with broken
inversion symmetry, adds fundamental interest in these materials. Here, we
implement a Tamm-plasmon structure with an embedded MoSe2 monolayer and study
the formation of polaritonic quasi-particles. Strong coupling conditions
between the Tamm-mode and the trion resonance of MoSe2 are established,
yielding bright luminescence from the polaritonic ground state under
non-resonant optical excitation. We demonstrate that tailoring the electrodynamic environment of the monolayer results in a significantly increased valley polarization. This enhancement can be related to a change in recombination dynamics, as shown in time-resolved photoluminescence measurements.
We furthermore observe strong upconversion luminescence from resonantly excited
polariton states in the lower polariton branch. This upconverted polariton
luminescence is shown to preserve the valley polarization of the
trion-polariton, which paves the way towards combining spin-valley physics and
exciton scattering experiments.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,845 | Machine learning based localization and classification with atomic magnetometers | We demonstrate identification of position, material, orientation and shape of
objects imaged by an $^{85}$Rb atomic magnetometer performing electromagnetic
induction imaging supported by machine learning. Machine learning maximizes the
information extracted from the images created by the magnetometer,
demonstrating the use of hidden data. Localization 2.6 times better than the spatial resolution of the imaging system and successful classification of up to 97$\%$ are obtained. This circumvents the need to solve the inverse problem,
and demonstrates the extension of machine learning to diffusive systems such as
low-frequency electrodynamics in media. Automated collection of task-relevant
information from quantum-based electromagnetic imaging will have a relevant
impact from biomedicine to security.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,846 | On the difficulty of finding spines | We prove that the set of symplectic lattices in the Siegel space
$\mathfrak{h}_g$ whose systoles generate a subspace of dimension at least 3 in
$\mathbb{R}^{2g}$ does not contain any $\mathrm{Sp}(2g,\mathbb{Z})$-equivariant
deformation retract of $\mathfrak{h}_g$.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,847 | Distributed Decoding of Convolutional Network Error Correction Codes | A Viterbi-like decoding algorithm is proposed in this paper for generalized
convolutional network error correction coding. Different from the classical Viterbi
algorithm, our decoding algorithm is based on minimum error weight rather than
the shortest Hamming distance between received and sent sequences. Network
errors may disperse or neutralize due to network transmission and convolutional
network coding. Therefore, classical decoding algorithm cannot be employed any
more. Source decoding was proposed by multiplying the inverse of network
transmission matrix, where the inverse is hard to compute. Starting from the
Maximum A Posteriori (MAP) decoding criterion, we find that it is equivalent to
the minimum error weight under our model. Inspired by Viterbi algorithm, we
propose a Viterbi-like decoding algorithm based on minimum error weight of
combined error vectors, which can be carried out directly at sink nodes and can
correct any network errors within the capability of convolutional network error
correction codes (CNECC). Under certain situations, the proposed algorithm can
realize the distributed decoding of CNECC.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,848 | Temperature induced phase transition from cycloidal to collinear antiferromagnetism in multiferroic Bi$_{0.9}$Sm$_{0.1}$FeO$_3$ driven by $f$-$d$ induced magnetic anisotropy | In multiferroic BiFeO$_3$ a cycloidal antiferromagnetic structure is coupled
to a large electric polarization at room temperature, giving rise to
magnetoelectric functionality that may be exploited in novel multiferroic-based
devices. In this paper, we demonstrate that by substituting samarium for 10% of
the bismuth ions the periodicity of the room temperature cycloid is increased,
and by cooling below $\sim15$ K the magnetic structure tends towards a simple
G-type antiferromagnet, which is fully established at 1.5 K. We show that this
transition results from $f-d$ exchange coupling, which induces a local
anisotropy on the iron magnetic moments that destroys the cycloidal order - a
result of general significance regarding the stability of non-collinear
magnetic structures in the presence of multiple magnetic sublattices.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,849 | POSEYDON - Converting the DAFNE Collider into a double Positron Facility: a High Duty-Cycle pulse stretcher and a storage ring | This project proposes to reuse the DAFNE accelerator complex for producing a
high intensity (up to 10^10), high-quality beam of high-energy (up to 500 MeV)
positrons for HEP experiments, mainly - but not only - motivated by light dark
particles searches. Such a facility would provide a unique source of
ultra-relativistic, narrow-band and low-emittance positrons, with a high duty
factor, without employing a cold technology, that would be an ideal facility
for exploring the existence of light dark matter particles, produced in
positron-on-target annihilations into a photon+missing mass, and using the
bump-hunt technique. The PADME experiment, which will use the extracted beam from the DAFNE BTF, is indeed limited by the low duty factor (10^-5 = 200 ns/20 ms). The idea is to use a variant of third-integer resonant extraction,
with the aim of getting a <10^-6 m rad emittance and, at the same time,
tailoring the scheme to the peculiar optics of the DAFNE machine. Alternatively, the possibility of kicking the positrons by means of channelling effects in crystals can be evaluated. This would not only increase the
extraction efficiency but also improve the beam quality, thanks to the high
collimation of channelled particles.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,850 | Phase diagram of hydrogen and a hydrogen-helium mixture at planetary conditions by Quantum Monte Carlo simulations | Understanding planetary interiors is directly linked to our ability of
simulating exotic quantum mechanical systems such as hydrogen (H) and
hydrogen-helium (H-He) mixtures at high pressures and temperatures. Equation of State (EOS) tables based on Density Functional Theory (DFT) are commonly used by planetary scientists, although this method allows only for a
qualitative description of the phase diagram, due to an incomplete treatment of
electronic interactions. Here we report Quantum Monte Carlo (QMC) molecular
dynamics simulations of pure H and of an H-He mixture. We calculate the first QMC EOS
at 6000 K for an H-He mixture of a proto-solar composition, and show the
crucial influence of He on the H metallization pressure. Our results can be
used to calibrate other EOS calculations and are very timely given the accurate
determination of Jupiter's gravitational field from the NASA Juno mission and
the effort to determine its structure.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,851 | Identifying exogenous and endogenous activity in social media | The occurrence of new events in a system is typically driven by external
causes and by previous events taking place inside the system. This is a general
statement, applying to a range of situations including, more recently, to the
activity of users in Online social networks (OSNs). Here we develop a method
for extracting from a series of posting times the relative contributions of exogenous (e.g. news media) and endogenous (e.g. information cascade) factors. The
method is based on the fitting of a generalized linear model (GLM) equipped
with a self-excitation mechanism. We test the method with synthetic data
generated by a nonlinear Hawkes process, and apply it to a real time series of
tweets with a given hashtag. In the empirical dataset, the estimated
contributions of exogenous and endogenous volumes are close to the amounts of
original tweets and retweets respectively. We conclude by discussing the
possible applications of the method, for instance in online marketing.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,852 | Water flow in Carbon and Silicon Carbide nanotubes | In this work the conduction of ion-water solution through two discrete
bundles of armchair carbon and silicon carbide nanotubes, as useful membranes
for water desalination, is studied. In order that studies on different types of
nanotubes be comparable, the chiral vectors of C and Si-C nanotubes are
selected as (7,7) and (5,5), respectively, so that a similar volume of fluid is
investigated flowing through two similar dimension membranes. Different
hydrostatic pressures are applied and the flow rates of water and ions are
calculated through molecular dynamics simulations. Consequently, according to the water conductance per nanotube per nanosecond, we find that at lower pressures (below 150 MPa) the Si-C nanotubes seem to be more applicable,
while higher hydrostatic pressures make carbon nanotube membranes more suitable
for water desalination.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,853 | Multidimensional extremal dependence coefficients | Extreme value modeling has attracted the attention of researchers in
diverse areas such as the environment, engineering, or finance. Multivariate
extreme value distributions are particularly suitable to model the tails of
multidimensional phenomena. The analysis of the dependence among multivariate
maxima is useful to evaluate risk. Here we present new multivariate extreme
value models, as well as coefficients to assess multivariate extremal
dependence.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,854 | A general framework for data-driven uncertainty quantification under complex input dependencies using vine copulas | Systems subject to uncertain inputs produce uncertain responses. Uncertainty
quantification (UQ) deals with the estimation of statistics of the system
response, given a computational model of the system and a probabilistic model
of its inputs. In engineering applications it is common to assume that the
inputs are mutually independent or coupled by a Gaussian or elliptical
dependence structure (copula). In this paper we overcome such limitations by
modelling the dependence structure of multivariate inputs as vine copulas. Vine
copulas are models of multivariate dependence built from simpler pair-copulas.
The vine representation is flexible enough to capture complex dependencies.
This paper formalises the framework needed to build vine copula models of
multivariate inputs and to combine them with virtually any UQ method. The
framework allows for a fully automated, data-driven inference of the
probabilistic input model from available input data. The procedure is exemplified
on two finite element models of truss structures, both subject to inputs with
non-Gaussian dependence structures. For each case, we analyse the moments of
the model response (using polynomial chaos expansions), and perform a
structural reliability analysis to calculate the probability of failure of the
system (using the first order reliability method and importance sampling).
Reference solutions are obtained by Monte Carlo simulation. The results show
that, while the Gaussian assumption yields biased statistics, the vine copula
representation achieves significantly more precise estimates, even when its
structure needs to be fully inferred from a limited amount of observations.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,855 | Pachinko Prediction: A Bayesian method for event prediction from social media data | The combination of large open data sources with machine learning approaches
presents a potentially powerful way to predict events such as protest or social
unrest. However, accounting for uncertainty in such models, particularly when
using diverse, unstructured datasets such as social media, is essential to
guarantee the appropriate use of such methods. Here we develop a Bayesian
method for predicting social unrest events in Australia using social media
data. This method uses machine learning methods to classify individual postings
to social media as being relevant, and an empirical Bayesian approach to
calculate posterior event probabilities. We use the method to predict events in
Australian cities over a period in 2017/18.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,856 | A Digital Hardware Fast Algorithm and FPGA-based Prototype for a Novel 16-point Approximate DCT for Image Compression Applications | The discrete cosine transform (DCT) is the key step in many image and video
coding standards. The 8-point DCT is an important special case, possessing
several widely investigated low-complexity approximations. However, the 16-point DCT has energy compaction advantages. In this sense, this paper
presents a new 16-point DCT approximation with null multiplicative complexity.
The proposed transform matrix is orthogonal and contains only zeros and ones.
The proposed transform outperforms the well-known Walsh-Hadamard transform and
the current state-of-the-art 16-point approximation. A fast algorithm for the
proposed transform is also introduced. This fast algorithm is experimentally
validated using hardware implementations that are physically realized and
verified on a 40 nm CMOS Xilinx Virtex-6 XC6VLX240T FPGA chip for a maximum
clock rate of 342 MHz. Rapid prototypes on FPGA for 8-bit input word size show a significant improvement in compressed image quality of up to 1-2 dB at the cost of only eight adders compared to the state-of-the-art 16-point DCT approximation
algorithm in the literature [S. Bouguezel, M. O. Ahmad, and M. N. S. Swamy. A
novel transform for image compression. In {\em Proceedings of the 53rd IEEE
International Midwest Symposium on Circuits and Systems (MWSCAS)}, 2010].
| 1 | 0 | 0 | 1 | 0 | 0 |
17,857 | Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice | It is well known that the initialization of weights in deep neural networks
can have a dramatic impact on learning speed. For example, ensuring the mean
squared singular value of a network's input-output Jacobian is $O(1)$ is
essential for avoiding the exponential vanishing or explosion of gradients. The
stronger condition that all singular values of the Jacobian concentrate near
$1$ is a property known as dynamical isometry. For deep linear networks,
dynamical isometry can be achieved through orthogonal weight initialization and
has been shown to dramatically speed up learning; however, it has remained
unclear how to extend these results to the nonlinear setting. We address this
question by employing powerful tools from free probability theory to compute
analytically the entire singular value distribution of a deep network's
input-output Jacobian. We explore the dependence of the singular value
distribution on the depth of the network, the weight initialization, and the
choice of nonlinearity. Intriguingly, we find that ReLU networks are incapable
of dynamical isometry. On the other hand, sigmoidal networks can achieve
isometry, but only with orthogonal weight initialization. Moreover, we
demonstrate empirically that deep nonlinear networks achieving dynamical
isometry learn orders of magnitude faster than networks that do not. Indeed, we
show that properly-initialized deep sigmoidal networks consistently outperform
deep ReLU networks. Overall, our analysis reveals that controlling the entire
distribution of Jacobian singular values is an important design consideration
in deep learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,858 | Singular perturbation for abstract elliptic equations and application | A boundary value problem for a complete second order elliptic equation is considered in a Banach space. The equation and boundary conditions involve a small parameter and a spectral parameter. Uniform L_{p}-regularity properties with respect to the space variable and the parameters are established. Here, an explicit formula for the solution is given, and the behavior of the solution is derived as the small parameter approaches zero. This is used to obtain a singular perturbation result for abstract elliptic equations.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,859 | Pruning and Nonparametric Multiple Change Point Detection | Change point analysis is a statistical tool to identify homogeneity within
time series data. We propose a pruning approach for approximate nonparametric
estimation of multiple change points. This general purpose change point
detection procedure `cp3o' applies a pruning routine within a dynamic program
to greatly reduce the search space and computational costs. Existing
goodness-of-fit change point objectives can immediately be utilized within the
framework. We further propose novel change point algorithms by applying cp3o to
two popular nonparametric goodness of fit measures: `e-cp3o' uses E-statistics,
and `ks-cp3o' uses Kolmogorov-Smirnov statistics. Simulation studies highlight
the performance of these algorithms in comparison with parametric and other
nonparametric change point methods. Finally, we illustrate these approaches
with climatological and financial applications.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,860 | Context encoding enables machine learning-based quantitative photoacoustics | Real-time monitoring of functional tissue parameters, such as local blood
oxygenation, based on optical imaging could provide groundbreaking advances in
the diagnosis and interventional therapy of various diseases. While
photoacoustic (PA) imaging is a novel modality with great potential to measure
optical absorption deep inside tissue, quantification of the measurements
remains a major challenge. In this paper, we introduce the first machine
learning based approach to quantitative PA imaging (qPAI), which relies on
learning the fluence in a voxel to deduce the corresponding optical absorption.
The method encodes relevant information of the measured signal and the
characteristics of the imaging system in voxel-based feature vectors, which
allow the generation of thousands of training samples from a single simulated
PA image. Comprehensive in silico experiments suggest that context encoding
(CE)-qPAI enables highly accurate and robust quantification of the local
fluence and thereby the optical absorption from PA images.
| 1 | 1 | 0 | 0 | 0 | 0 |
17,861 | Simulating the interaction between a falling super-quadric object and a soap film | The interaction that occurs between a light solid object and a horizontal
soap film of a bamboo foam contained in a cylindrical tube is simulated in 3D.
We vary the shape of the falling object from a sphere to a cube by changing a
single shape parameter as well as varying the initial orientation and position
of the object. We investigate in detail how the soap film deforms in all these
cases, and determine the network and pressure forces that a foam exerts on a
falling object, due to surface tension and bubble pressure respectively. We
show that a cubic particle in a particular orientation experiences the largest
drag force, and that this orientation is also the most likely outcome of
dropping a cube from an arbitrary orientation through a bamboo foam.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,862 | Power Flow Analysis Using Graph based Combination of Iterative Methods and Vertex Contraction Approach | Compared with a relational database (RDB), a graph database (GDB) is a more
intuitive representation of the real world. Each node in a GDB is both a
storage and a logic unit. Since it is connected to its neighboring nodes
through edges, its neighboring information can be easily obtained in a one-step
graph traversal. Each node can conduct local computation independently, and all
nodes can do their local work in parallel; the whole system can then be
analyzed and assessed in parallel to largely improve computational performance
without sacrificing the precision of the final results. This paper first
introduces graph databases, power system graph modeling, and potential graph
computing applications in power systems. Two iterative methods based on graph
databases and PageRank are presented, and their convergence is discussed.
Vertex contraction is proposed to improve performance by eliminating
zero-impedance branches. A combination of the two iterative methods is proposed
to exploit their respective advantages. Testing results on a provincial
1425-bus system demonstrate that the proposed comprehensive approach is a good
candidate for power flow analysis.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,863 | A GAMP Based Low Complexity Sparse Bayesian Learning Algorithm | In this paper, we present an algorithm for the sparse signal recovery problem
that incorporates damped Gaussian generalized approximate message passing
(GGAMP) into Expectation-Maximization (EM)-based sparse Bayesian learning
(SBL). In particular, GGAMP is used to implement the E-step in SBL in place of
matrix inversion, leveraging the fact that GGAMP is guaranteed to converge with
appropriate damping. The resulting GGAMP-SBL algorithm is much more robust to
arbitrary measurement matrix $\boldsymbol{A}$ than the standard damped GAMP
algorithm while being much lower complexity than the standard SBL algorithm. We
then extend the approach from the single measurement vector (SMV) case to the
temporally correlated multiple measurement vector (MMV) case, leading to the
GGAMP-TSBL algorithm. We verify the robustness and computational advantages of
the proposed algorithms through numerical experiments.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,864 | Sub-Nanometer Channels Embedded in Two-Dimensional Materials | Two-dimensional (2D) materials are among the most promising candidates for
next-generation electronics due to their atomic thinness, allowing for flexible
transparent electronics and ultimate length scaling. Thus far, atomically-thin
p-n junctions, metal-semiconductor contacts, and metal-insulator barriers have
been demonstrated. While 2D materials achieve the thinnest possible devices,
precise nanoscale control over the lateral dimensions is also necessary. Here,
we report the direct synthesis of sub-nanometer-wide 1D MoS2 channels embedded
within WSe2 monolayers, using a dislocation-catalyzed approach. The 1D channels
have edges free of misfit dislocations and dangling bonds, forming a coherent
interface with the embedding 2D matrix. Periodic dislocation arrays produce 2D
superlattices of coherent MoS2 1D channels in WSe2. Using molecular dynamics
simulations, we have identified other combinations of 2D materials where 1D
channels can also be formed. The electronic band structure of these 1D channels
offer the promise of carrier confinement in a direct-gap material and charge
separation needed to access the ultimate length scales necessary for future
electronic applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,865 | Emotion in Reinforcement Learning Agents and Robots: A Survey | This article provides the first survey of computational models of emotion in
reinforcement learning (RL) agents. The survey focuses on agent/robot emotions,
and mostly ignores human user emotions. Emotions are recognized as functional
in decision-making by influencing motivation and action selection. Therefore,
computational emotion models are usually grounded in the agent's decision
making architecture, of which RL is an important subclass. Studying emotions in
RL-based agents is useful for three research fields. For machine learning (ML)
researchers, emotion models may improve learning efficiency. For the
interactive ML and human-robot interaction (HRI) community, emotions can
communicate state and enhance user investment. Lastly, it allows affective
modelling (AM) researchers to investigate their emotion theories in a
successful AI agent class. This survey provides background on emotion theory
and RL. It systematically addresses 1) from what underlying dimensions (e.g.,
homeostasis, appraisal) emotions can be derived and how these can be modelled
in RL-agents, 2) what types of emotions have been derived from these
dimensions, and 3) how these emotions may either influence the learning
efficiency of the agent or be useful as social signals. We also systematically
compare evaluation criteria, and draw connections to important RL sub-domains
like (intrinsic) motivation and model-based RL. In short, this survey provides
both a practical overview for engineers wanting to implement emotions in their
RL agents, and identifies challenges and directions for future emotion-RL
research.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,866 | Tensor Networks in a Nutshell | Tensor network methods are taking a central role in modern quantum physics
and beyond. They can provide an efficient approximation to certain classes of
quantum states, and the associated graphical language makes it easy to describe
and pictorially reason about quantum circuits, channels, protocols, open
systems and more. Our goal is to explain tensor networks and some associated
methods as quickly and as painlessly as possible. Beginning with the key
definitions, the graphical tensor network language is presented through
examples. We then provide an introduction to matrix product states. We conclude
the tutorial with tensor contractions evaluating combinatorial counting
problems. The first one counts the number of solutions for Boolean formulae,
whereas the second is Penrose's tensor contraction algorithm, returning the
number of $3$-edge-colorings of $3$-regular planar graphs.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,867 | Cheryl's Birthday | We present four logic puzzles and after that their solutions. Joseph Yeo
designed 'Cheryl's Birthday'. Mike Hartley came up with a novel solution for
'One Hundred Prisoners and a Light Bulb'. Jonathan Welton designed 'A Blind
Guess' and 'Abby's Birthday'. Hans van Ditmarsch and Barteld Kooi authored the
puzzlebook 'One Hundred Prisoners and a Light Bulb' that contains other
knowledge puzzles, and that can also be found on the webpage
this http URL dedicated to the book.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,868 | Never Forget: Balancing Exploration and Exploitation via Learning Optical Flow | Exploration bonus derived from the novelty of the states in an environment
has become a popular approach to motivate exploration for deep reinforcement
learning agents in the past few years. Recent methods such as curiosity-driven
exploration usually estimate the novelty of new observations by the prediction
errors of their system dynamics models. Due to the capacity limitation of the
models and difficulty of performing next-frame prediction, however, these
methods typically fail to balance between exploration and exploitation in
high-dimensional observation tasks, resulting in the agents forgetting the
visited paths and exploring those states repeatedly. Such inefficient
exploration behavior causes significant performance drops, especially in large
environments with sparse reward signals. In this paper, we propose to introduce
the concept of optical flow estimation from the field of computer vision to
deal with the above issue. We propose to employ optical flow estimation errors
to examine the novelty of new observations, such that agents are able to
memorize and understand the visited states in a more comprehensive fashion. We
compare our method against previous approaches in a number of experiments. Our
results indicate that the proposed method appears to deliver superior and
longer-lasting performance compared to the previous methods. We further
provide a set of comprehensive ablative analysis of the proposed method, and
investigate the impact of optical flow estimation on the learning curves of the
DRL agents.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,869 | Lenient Multi-Agent Deep Reinforcement Learning | Much of the success of single agent deep reinforcement learning (DRL) in
recent years can be attributed to the use of experience replay memories (ERM),
which allow Deep Q-Networks (DQNs) to be trained efficiently through sampling
stored state transitions. However, care is required when using ERMs for
multi-agent deep reinforcement learning (MA-DRL), as stored transitions can
become outdated because agents update their policies in parallel [11]. In this
work we apply leniency [23] to MA-DRL. Lenient agents map state-action pairs to
decaying temperature values that control the amount of leniency applied towards
negative policy updates that are sampled from the ERM. This introduces optimism
in the value-function update, and has been shown to facilitate cooperation in
tabular fully-cooperative multi-agent reinforcement learning problems. We
evaluate our Lenient-DQN (LDQN) empirically against the related Hysteretic-DQN
(HDQN) algorithm [22] as well as a modified version we call scheduled-HDQN,
that uses average reward learning near terminal states. Evaluations take place
in extended variations of the Coordinated Multi-Agent Object Transportation
Problem (CMOTP) [8] which include fully-cooperative sub-tasks and stochastic
rewards. We find that LDQN agents are more likely to converge to the optimal
policy in a stochastic reward CMOTP compared to standard and scheduled-HDQN
agents.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,870 | A New Framework for Synthetic Aperture Sonar Micronavigation | Synthetic aperture imaging systems achieve constant azimuth resolution by
coherently summing the observations acquired along the aperture path. To this
end, the acquisition locations have to be known with subwavelength accuracy. In
underwater Synthetic Aperture Sonar (SAS), the nature of propagation and
navigation in water makes the retrieval of this information challenging.
Inertial sensors have to be employed in combination with signal processing
techniques, which are usually referred to as micronavigation. In this paper we
propose a novel micronavigation approach based on the minimization of an error
function between two contiguous pings having some mutual information. This
error is obtained by comparing the vector space intersections between the
pings' orthogonal projectors. The effectiveness and generality of the proposed
approach are demonstrated by means of simulations and of an experiment
performed in a controlled environment.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,871 | Emotionalism within People-Oriented Software Design | In designing most software applications, much effort is placed upon the
functional goals, which make a software system useful. However, the failure to
consider emotional goals, which make a software system pleasurable to use, can
result in disappointment and system rejection even if utilitarian goals are
well implemented. Although several studies have emphasized the importance of
people's emotional goals in developing software, there is little advice on how
to address these goals in the software system development process. This paper
proposes a theoretically-sound and practical method by combining the theories
and techniques of software engineering, requirements engineering, and decision
making. The outcome of this study is the Emotional Goal Systematic Analysis
Technique (EG-SAT), which facilitates the process of finding software system
capabilities to address emotional goals in software design. EG-SAT is an
easy-to-learn and easy-to-use technique that helps analysts gain insights into
how to address people's emotional goals. To demonstrate the method in use, a
two-part evaluation is conducted. First, EG-SAT is used to analyze the
emotional goals of potential users of a mobile learning application that
provides information about low carbon living for tradespeople and professionals
in the building industry in Australia. The results of using EG-SAT in this case
study are compared with a professionally-developed baseline. Second, we ran a
semi-controlled experiment in which 12 participants were asked to apply EG-SAT
and another technique on part of our case study. The outcomes show that EG-SAT
helped participants to both analyse emotional goals and gain valuable insights
about the functional and non-functional goals for addressing people's emotional
goals.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,872 | Quantifying the Estimation Error of Principal Components | Principal component analysis is an important pattern recognition and
dimensionality reduction tool in many applications. Principal components are
computed as eigenvectors of a maximum likelihood covariance $\widehat{\Sigma}$
that approximates a population covariance $\Sigma$, and these eigenvectors are
often used to extract structural information about the variables (or
attributes) of the studied population. Since PCA is based on the
eigendecomposition of the proxy covariance $\widehat{\Sigma}$ rather than the
ground-truth $\Sigma$, it is important to understand the approximation error in
each individual eigenvector as a function of the number of available samples.
The recent results of Kolchinskii and Lounici yield such bounds. In the present
paper we sharpen these bounds and show that eigenvectors can often be
reconstructed to a required accuracy from a sample whose size is of strictly
smaller order.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,873 | On Convex Programming Relaxations for the Permanent | In recent years, several convex programming relaxations have been proposed to
estimate the permanent of a non-negative matrix, notably in the works of
Gurvits and Samorodnitsky. However, the origins of these relaxations and their
relationships to each other have remained somewhat mysterious. We present a
conceptual framework, implicit in the belief propagation literature, to
systematically arrive at these convex programming relaxations for estimating
the permanent -- as approximations to an exponential-sized max-entropy convex
program for computing the permanent. Further, using standard convex programming
techniques such as duality, we establish equivalence of these aforementioned
relaxations to those based on capacity-like quantities studied by Gurvits and
Anari et al.
| 1 | 0 | 1 | 0 | 0 | 0 |
17,874 | Many cubic surfaces contain rational points | Building on recent work of Bhargava--Elkies--Schnidman and Kriz--Li, we
produce infinitely many smooth cubic surfaces defined over the field of
rational numbers that contain rational points.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,875 | Representation learning of drug and disease terms for drug repositioning | Drug repositioning (DR) refers to the identification of novel indications for
approved drugs. The requirement of a huge investment of time and money, and the
risk of failure in clinical trials, have led to a surge of interest in drug
repositioning. DR exploits two major aspects associated with drugs and
diseases: existence of similarity among drugs and among diseases due to their
shared involved genes or pathways or common biological effects. Existing
methods of identifying drug-disease associations rely mainly on the information
available in structured databases. On the other hand, abundant
information available in the form of free text in biomedical research articles
is not being fully exploited. Word embedding, i.e., obtaining vector
representations of words from a large corpus of free text using neural network
methods, has been shown to give significant performance on several natural
language processing tasks. In this work we propose a novel representation
learning approach to obtain
features of drugs and diseases by combining complementary information available
in unstructured texts and structured datasets. Next we use matrix completion
approach on these feature vectors to learn projection matrix between drug and
disease vector spaces. The proposed method has shown competitive performance
with state-of-the-art methods. Further, the case studies on Alzheimer's and
Hypertension diseases have shown that the predicted associations are matching
with the existing knowledge.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,876 | Analysis of a remarkable singularity in a nonlinear DDE | In this work we investigate the dynamics of the nonlinear DDE
(delay-differential equation)
x''(t)+x(t-T)+x(t)^3=0
where T is the delay. For T=0 this system is conservative and exhibits no
limit cycles. For T>0, no matter how small, an infinite number of limit cycles
exist, their amplitudes going to infinity in the limit as T approaches zero.
We investigate this situation in three ways: 1) harmonic balance, 2)
Melnikov's integral, and 3) adding damping to regularize the singularity.
| 0 | 1 | 1 | 0 | 0 | 0 |
17,877 | Characterization of Lipschitz functions in terms of variable exponent Lebesgue spaces | Our aim is to characterize Lipschitz functions in terms of variable exponent
Lebesgue spaces. We give some characterizations of the boundedness of the
maximal or nonlinear commutators of the Hardy-Littlewood maximal function and
sharp maximal function in variable exponent Lebesgue spaces when the symbols
$b$ belong to the Lipschitz spaces, by which some new characterizations of
Lipschitz spaces and nonnegative Lipschitz functions are obtained. Some
equivalent relations between the Lipschitz norm and the variable exponent
Lebesgue norm are also given.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,878 | Superconductivity Induced by Interfacial Coupling to Magnons | We consider a thin normal metal sandwiched between two ferromagnetic
insulators. At the interfaces, the exchange coupling causes electrons within
the metal to interact with magnons in the insulators. This electron-magnon
interaction induces electron-electron interactions, which, in turn, can result
in p-wave superconductivity. In the weak-coupling limit, we solve the gap
equation numerically and estimate the critical temperature. In YIG-Au-YIG
trilayers, superconductivity sets in at temperatures somewhere in the interval
between 1 and 10 K. EuO-Au-EuO trilayers require a lower temperature, in the
range from 0.01 to 1 K.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,879 | Privacy Assessment of De-identified Opal Data: A report for Transport for NSW | We consider the privacy implications of public release of a de-identified
dataset of Opal card transactions. The data was recently published at
this https URL. It
consists of tap-on and tap-off counts for NSW's four modes of public transport,
collected over two separate week-long periods. The data has been further
treated to improve privacy by removing small counts, aggregating some stops and
routes, and perturbing the counts. This is a summary of our findings.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,880 | On the Importance of Correlations in Rational Choice: A Case for Non-Nashian Game Theory | The Nash equilibrium paradigm, and Rational Choice Theory in general, rely on
agents acting independently from each other. This note shows how this
assumption is crucial in the definition of Rational Choice Theory. It explains
how a consistent Alternate Rational Choice Theory, as suggested by Jean-Pierre
Dupuy, can be built on the exact opposite assumption, and how it provides a
viable account for alternate, actually observed behavior of rational agents
that is based on correlations between their decisions.
The end goal of this note is three-fold: (i) to motivate that the Perfect
Prediction Equilibrium, implementing Dupuy's notion of projected time and
previously called "projected equilibrium", is a reasonable approach in certain
real situations and a meaningful complement to the Nash paradigm, (ii) to
summarize common misconceptions about this equilibrium, and (iii) to give a
concise motivation for future research on non-Nashian game theory.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,881 | Reliable estimation of prediction uncertainty for physico-chemical property models | The predictions of parametric property models and their uncertainties are
sensitive to systematic errors such as inconsistent reference data, parametric
model assumptions, or inadequate computational methods. Here, we discuss the
calibration of property models in the light of bootstrapping, a sampling method
akin to Bayesian inference that can be employed for identifying systematic
errors and for reliable estimation of the prediction uncertainty. We apply
bootstrapping to assess a linear property model linking the 57Fe Moessbauer
isomer shift to the contact electron density at the iron nucleus for a diverse
set of 44 molecular iron compounds. The contact electron density is calculated
with twelve density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91,
PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error
diagnostics and reliable, locally resolved uncertainties for isomer-shift
predictions. Pure and hybrid density functionals yield average prediction
uncertainties of 0.06-0.08 mm/s and 0.04-0.05 mm/s, respectively, the latter
being close to the average experimental uncertainty of 0.02 mm/s. Furthermore,
we show that both model parameters and prediction uncertainty depend
significantly on the composition and number of reference data points.
Accordingly, we suggest that rankings of density functionals based on
performance measures (e.g., the coefficient of correlation, r2, or the
root-mean-square error, RMSE) should not be inferred from a single data set.
This study presents the first statistically rigorous calibration analysis for
theoretical Moessbauer spectroscopy, which is of general applicability for
physico-chemical property models and not restricted to isomer-shift
predictions. We provide the statistically meaningful reference data set MIS39
and a new calibration of the isomer shift based on the PBE0 functional.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,882 | Rethinking Split Manufacturing: An Information-Theoretic Approach with Secure Layout Techniques | Split manufacturing is a promising technique to defend against fab-based
malicious activities such as IP piracy, overbuilding, and insertion of hardware
Trojans. However, a network flow-based proximity attack, proposed by Wang et
al. (DAC'16) [1], has demonstrated that most prior art on split manufacturing
is highly vulnerable. In this work, we present two practical layout
techniques towards secure split manufacturing: (i) gate-level graph coloring
and (ii) clustering of same-type gates. Our approach shows promising results
against the advanced proximity attack, lowering its success rate by 5.27x,
3.19x, and 1.73x on average compared to the unprotected layouts when splitting
at metal layers M1, M2, and M3, respectively. Also, it largely outperforms
previous defense efforts; we observe on average 8x higher resilience when
compared to representative prior art. At the same time, extensive simulations
on ISCAS'85 and MCNC benchmarks reveal that our techniques incur an acceptable
layout overhead. Apart from this empirical study, we provide---for the first
time---a theoretical framework for quantifying the layout-level resilience
against any proximity-induced information leakage. Towards this end, we
leverage the notion of mutual information and provide extensive results to
validate our model.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,883 | Unsupervised robotic sorting: Towards autonomous decision making robots | Autonomous sorting is a crucial task in industrial robotics which can be very
challenging depending on the expected amount of automation. Usually, to decide
where to sort an object, the system needs to solve either an instance retrieval
(known object) or a supervised classification (predefined set of classes)
problem. In this paper, we introduce a new decision making module, where the
robotic system chooses how to sort the objects in an unsupervised way. We call
this problem Unsupervised Robotic Sorting (URS) and propose an implementation
on an industrial robotic system, using deep CNN feature extraction and standard
clustering algorithms. We carry out extensive experiments on various standard
datasets to demonstrate the efficiency of the proposed image clustering
pipeline. To evaluate the robustness of our URS implementation, we also
introduce a complex real world dataset containing images of objects under
various background and lighting conditions. This dataset is used to fine-tune
the design choices (CNN and clustering algorithm) for URS. Finally, we propose
a method combining our pipeline with ensemble clustering to use multiple images
of each object. This redundancy of information about the objects is shown to
improve the clustering results.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,884 | Beyond Whittle: Nonparametric correction of a parametric likelihood with a focus on Bayesian time series analysis | The Whittle likelihood is widely used for Bayesian nonparametric estimation
of the spectral density of stationary time series. However, the loss of
efficiency for non-Gaussian time series can be substantial. On the other hand,
parametric methods are more powerful if the model is well-specified, but may
fail entirely otherwise. Therefore, we suggest a nonparametric correction of a
parametric likelihood taking advantage of the efficiency of parametric models
while mitigating sensitivities through a nonparametric amendment. Using a
Bernstein-Dirichlet prior for the nonparametric spectral correction, we show
posterior consistency and illustrate the performance of our procedure in a
simulation study and with LIGO gravitational wave data.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,885 | A consistent approach to unstructured mesh generation for geophysical models | Geophysical model domains typically contain irregular, complex fractal-like
boundaries and physical processes that act over a wide range of scales.
Constructing geographically constrained boundary-conforming spatial
discretizations of these domains with flexible use of anisotropic, fully
unstructured meshes is a challenge. The problem contains a wide range of scales
and a relatively large, heterogeneous constraint parameter space. Approaches
are commonly ad hoc, model or application specific and insufficiently
described. Development of new spatial domains is frequently time-consuming,
hard to repeat, error-prone, and difficult to keep consistent due to the
significant human input required. As a consequence, it is difficult to
reproduce simulations, ensure a provenance in model data handling and
initialization, and a challenge to conduct model intercomparisons rigorously.
Moreover, for flexible unstructured meshes, there is additionally a greater
potential for inconsistencies in model initialization and forcing parameters.
This paper introduces a consistent approach to unstructured mesh generation for
geophysical models, that is automated, quick-to-draft and repeat, and provides
a rigorous and robust approach that is consistent to the source data
throughout. The approach enables new research in complex multi-scale domains
that is difficult or impossible to achieve with existing methods. Examples
being actively pursued in a range of geophysical modeling
efforts are presented alongside the approach, together with the implementation
library Shingle and a selection of its verification test cases.
| 1 | 1 | 0 | 0 | 0 | 0 |
17,886 | Metric Map Merging using RFID Tags & Topological Information | A map merging component is crucial for the proper functionality of a
multi-robot system performing exploration, since it provides the means to
integrate and distribute the most important information carried by the agents:
the explored-covered space and its exact (depending on the SLAM accuracy)
morphology. Map merging is a prerequisite for an intelligent multi-robot team
aiming to deploy a smart exploration technique. In the current work, a metric
map merging approach based on environmental information is proposed, in
conjunction with localization of spatially scattered RFID tags. The approach is
divided into the following parts: calculation of the maps' approximate rotation
via the obstacles' poses and the localized RFID tags, calculation of the
translation employing the best-localized common RFID tag, and finally
refinement of the transformation using an ICP algorithm.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,887 | Learning Local Feature Aggregation Functions with Backpropagation | This paper introduces a family of local feature aggregation functions and a
novel method to estimate their parameters, such that they generate optimal
representations for classification (or any task that can be expressed as a cost
function minimization problem). To achieve that, we compose the local feature
aggregation function with the classifier cost function and we backpropagate the
gradient of this cost function in order to update the local feature aggregation
function parameters. Experiments on synthetic datasets indicate that our method
discovers parameters that model the class-relevant information in addition to
the local feature space. Further experiments on a variety of motion and visual
descriptors, both on image and video datasets, show that our method outperforms
other state-of-the-art local feature aggregation functions, such as Bag of
Words, Fisher Vectors and VLAD, by a large margin.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,888 | Sub-harmonic Injection Locking in Metronomes | In this paper, we demonstrate sub-harmonic injection locking (SHIL) in
mechanical metronomes. To do so, we first formulate the metronome's compact
physical model, focusing on its nonlinear terms for friction and the escapement
mechanism. Then we analyze metronomes using phase-macromodel-based techniques
and show that the phase of their oscillation is in fact very immune to periodic
perturbation at twice its natural frequency, making SHIL difficult. Guided by
the phase-macromodel-based analysis, we are able to modify the escapement
mechanism of metronomes such that SHIL can happen more easily. Then we verify
the occurrence of SHIL in experiments. To our knowledge, this is the first
demonstration of SHIL in metronomes; as such, it provides many valuable
insights into the modelling, simulation, analysis and design of nonlinear
oscillators. The demonstration is also well suited for teaching the subject of
injection locking and SHIL.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,889 | Spin Seebeck effect in a polar antiferromagnet $α$-Cu$_{2}$V$_{2}$O$_{7}$ | We have studied the longitudinal spin Seebeck effect in a polar
antiferromagnet $\alpha$-Cu$_{2}$V$_{2}$O$_{7}$ in contact with a Pt film.
Below the antiferromagnetic transition temperature of
$\alpha$-Cu$_{2}$V$_{2}$O$_{7}$, spin Seebeck voltages whose magnetic field
dependence is similar to that reported in antiferromagnetic MnF$_{2}$$\mid$Pt
bilayers are observed. Though a small weak-ferromagnetic moment appears owing
to the Dzyaloshinskii-Moriya interaction in $\alpha$-Cu$_{2}$V$_{2}$O$_{7}$,
the magnetic field dependence of spin Seebeck voltages is found to be
irrelevant to the weak ferromagnetic moments. The dependences of the spin
Seebeck voltages on magnetic fields and temperature are analyzed by a magnon
spin current theory. The numerical calculation of spin Seebeck voltages using
magnetic parameters of $\alpha$-Cu$_{2}$V$_{2}$O$_{7}$ determined by previous
neutron scattering studies reveals that the magnetic-field and temperature
dependences of the spin Seebeck voltages for
$\alpha$-Cu$_{2}$V$_{2}$O$_{7}$$\mid$Pt are governed by the changes in magnon
lifetimes with magnetic fields and temperature.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,890 | Mackey algebras which are Gorenstein | We complete the picture available in the literature by showing that the
integral Mackey algebra is Gorenstein if and only if the group order is
square-free, in which case it must have Gorenstein dimension one. We illustrate
this result by looking in detail at the examples of the cyclic group of order
four and the Klein four group.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,891 | Efficient acquisition rules for model-based approximate Bayesian computation | Approximate Bayesian computation (ABC) is a method for Bayesian inference
when the likelihood is unavailable but simulating from the model is possible.
However, many ABC algorithms require a large number of simulations, which can
be costly. To reduce the computational cost, Bayesian optimisation (BO) and
surrogate models such as Gaussian processes have been proposed. Bayesian
optimisation enables one to intelligently decide where to evaluate the model
next but common BO strategies are not designed for the goal of estimating the
posterior distribution. Our paper addresses this gap in the literature. We
propose to compute the uncertainty in the ABC posterior density, which is due
to a lack of simulations to estimate this quantity accurately, and define a
loss function that measures this uncertainty. We then propose to select the
next evaluation location to minimise the expected loss. Experiments show that
the proposed method often produces the most accurate approximations as compared
to common BO strategies.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,892 | Active Learning for Regression Using Greedy Sampling | Regression problems are pervasive in real-world applications. Generally, a
substantial number of labeled samples is needed to build a regression model
with good generalization ability. However, it is often relatively easy to
collect a large number of unlabeled samples, but time-consuming or expensive to
label them. Active learning for regression (ALR) is a methodology to reduce the
number of labeled samples, by selecting the most beneficial ones to label,
instead of random selection. This paper proposes two new ALR approaches based
on greedy sampling (GS). The first approach (GSy) selects new samples to
increase the diversity in the output space, and the second (iGS) selects new
samples to increase the diversity in both input and output spaces. Extensive
experiments on 12 UCI and CMU StatLib datasets from various domains, and on 15
subjects on EEG-based driver drowsiness estimation, verified their
effectiveness and robustness.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,893 | Centroid estimation based on symmetric KL divergence for Multinomial text classification problem | We define a new method to estimate class centroids for text classification,
based on the symmetric KL divergence between the distribution of words in training
documents and their class centroids. Experiments on several standard data sets
indicate that the new method achieves substantial improvements over the
traditional classifiers.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,894 | Rapid Near-Neighbor Interaction of High-dimensional Data via Hierarchical Clustering | Calculation of near-neighbor interactions among high dimensional, irregularly
distributed data points is a fundamental task to many graph-based or
kernel-based machine learning algorithms and applications. Such calculations,
involving large, sparse interaction matrices, expose the limitation of
conventional data-and-computation reordering techniques for improving space and
time locality on modern computer memory hierarchies. We introduce a novel
method for obtaining a matrix permutation that renders a desirable sparsity
profile. The method is distinguished by the guiding principle to obtain a
profile that is block-sparse with dense blocks. Our profile model and measure
capture the essential properties affecting space and time locality, and permit
variation in sparsity profile without imposing a restriction to a fixed
pattern. The second distinction lies in an efficient algorithm for obtaining a
desirable profile, via exploring and exploiting multi-scale cluster structure
hidden in but intrinsic to the data. The algorithm accomplishes its task with
key components for lower-dimensional embedding with data-specific principal
feature axes, hierarchical data clustering, multi-level matrix compression
storage, and multi-level interaction computations. We provide experimental
results from case studies with two important data analysis algorithms. The
resulting performance is remarkably comparable to the BLAS performance for the
best-case interaction governed by a regularly banded matrix with the same
sparsity.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,895 | Evolution and Recent Developments of the Gaseous Photon Detectors Technologies | The evolution and present status of gaseous photon detector
technologies are reviewed. The most recent developments in several branches of
the field are described, in particular the installation and commissioning of
the first large area MPGD-based detectors of single photons on COMPASS RICH-1.
Investigation of novel detector architectures, different materials and various
applications are reported, and the quest for visible light gaseous photon
detectors is discussed. Progress on the use of gaseous-photon-detector-related
techniques in the fields of cryogenic applications and gaseous or liquid
scintillation imaging is presented.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,896 | Superconducting Qubit-Resonator-Atom Hybrid System | We propose a hybrid quantum system, where an $LC$ resonator inductively
interacts with a flux qubit and is capacitively coupled to a Rydberg atom.
Varying the external magnetic flux bias controls the flux-qubit flipping and
the flux qubit-resonator interface. The atomic spectrum is tuned via an
electrostatic field, manipulating the qubit-state transition of the atom and the
atom-resonator coupling. Different types of entanglement of superconducting,
photonic, and atomic qubits can be prepared via simply tuning the flux bias and
electrostatic field, leading to the implementation of three-qubit Toffoli logic
gate.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,897 | Heroes and Zeroes: Predicting the Impact of New Video Games on Twitch.tv | Video games and the playing thereof have been a fixture of American culture
since their introduction in the arcades of the 1980s. However, it was not until
the recent proliferation of broadband connections robust and fast enough to
handle live video streaming that players of video games transitioned from a
content-consumer role to a content-producer role. Simultaneously, the rise of
social media has revealed how interpersonal connections drive user engagement
and interest. In this work, we discuss the recent proliferation of video game
streaming, particularly on Twitch.tv, analyze trends and patterns in video game
viewing, and develop predictive models for determining if a new game will have
substantial impact on the streaming ecosystem.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,898 | Variational Autoencoders for Learning Latent Representations of Speech Emotion: A Preliminary Study | Learning the latent representation of data in an unsupervised fashion is a very
interesting process that provides relevant features for enhancing the
performance of a classifier. For speech emotion recognition tasks, generating
effective features is crucial. Currently, handcrafted features are mostly used
for speech emotion recognition; however, features learned automatically using
deep learning have shown strong success in many problems, especially in image
processing. In particular, deep generative models such as Variational
Autoencoders (VAEs) have gained enormous success for generating features for
natural images. Inspired by this, we propose VAEs for deriving the latent
representation of speech signals and use this representation to classify
emotions. To the best of our knowledge, we are the first to propose VAEs for
speech emotion classification. Evaluations on the IEMOCAP dataset demonstrate
that features learned by VAEs can produce state-of-the-art results for speech
emotion classification.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,899 | Insense: Incoherent Sensor Selection for Sparse Signals | Sensor selection refers to the problem of intelligently selecting a small
subset of a collection of available sensors to reduce the sensing cost while
preserving signal acquisition performance. The majority of sensor selection
algorithms find the subset of sensors that best recovers an arbitrary signal
from a number of linear measurements that is larger than the dimension of the
signal. In this paper, we develop a new sensor selection algorithm for sparse
(or near sparse) signals that finds a subset of sensors that best recovers such
signals from a number of measurements that is much smaller than the dimension
of the signal. Existing sensor selection algorithms cannot be applied in such
situations. Our proposed Incoherent Sensor Selection (Insense) algorithm
minimizes a coherence-based cost function that is adapted from recent results
in sparse recovery theory. Using six datasets, including two real-world
datasets on microbial diagnostics and structural health monitoring, we
demonstrate the superior performance of Insense for sparse-signal sensor
selection.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,900 | Discrete Distribution for a Wiener Process Range and its Properties | We introduce the discrete distribution of a Wiener process range. Besides
finding basic distributional properties of this distribution, including the
hazard rate function, moments, the stress-strength parameter, and order
statistics, this work studies basic properties of its truncated version. The
effectiveness of this distribution is established using a
data set.
| 0 | 0 | 1 | 1 | 0 | 0 |